One reason to use CGI is legacy systems. A large, complex, and important system that I inherited was still using CGI (and it worked, because a rare "10x genuinely more productive" developer built it). Many years later, to reduce peak resource usage, and speed up a few things, I made an almost drop-in replacement library, to permit it to also run with SCGI (and back out easily to CGI if there was a problem in production). https://docs.racket-lang.org/scgi/
Another reason to use CGI is if you have a very small and simple system. Say, a Web UI on a small home router or appliance. You're not going to want the 200 NPM packages, transpilers and build tools and 'environment' managers, Linux containers, Kubernetes, and 4 different observability platforms. (Resume-driven-development aside.)
A disheartening thing about my most recent full-stack Web project was that I'd put a lot of work into wrangling it the way Svelte and SvelteKit wanted, but upon finishing, I wasn't happy with the complicated and surprisingly inefficient runtime execution. I realized that I could've done it in a fraction of the time and complexity -- in any language with convenient HTML generation, a SQL DB library, and an HTTP/CGI/SCGI-ish library, plus a little client-side JS.
I found that ChatGPT revived vanilla JavaScript and jQuery for me.
Most of the chore work is done by ChatGPT, and the mental model needed to understand what it wrote is very light -- often a single file. It is also easily embedded in static file generators.
By contrast, Vue/React require a lot of context to understand and mentally parse. In React, useCallback/useEffect/useMemo make me manage dependencies manually, which really reminds me of manual memory management in C, with perhaps even more pitfalls. In Vue, it's the difference between computed properties, props, and plain variables. I am amazed that the supposedly more approachable part of tech is actually more complex than regular library/script programming.
I really like the code that accompanies this as an example of how to build the same SQLite powered guestbook across Bash, Python, Perl, Rust, Go, JavaScript and C: https://github.com/Jacob2161/cgi-bin
Checked the Rust version; it has a TOCTOU error right at the start, which likely would not happen in a non-CGI system because you'd do your DB setup on load and only then accept requests. I assume the others are similar.
This neatly demonstrates one of the issues with CGI: it adds synchronisation issues while removing synchronisation tooling.
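The TOCTOU pattern described above is avoidable even under CGI. A hedged Python sketch (the repo's versions are in other languages, and the file and table names here are made up): instead of check-then-create, make the setup idempotent and let SQLite's own file locking serialize concurrent first requests.

```python
import sqlite3

DB_PATH = "guestbook.db"  # hypothetical path

def get_db():
    # Racy (TOCTOU) pattern to avoid:
    #   if not os.path.exists(DB_PATH): create_schema()
    # Two concurrent CGI processes can both see "missing" and race
    # each other through schema creation.
    conn = sqlite3.connect(DB_PATH, timeout=5.0)
    # Idempotent setup instead: IF NOT EXISTS is a no-op when another
    # process already created the table, and SQLite serializes the
    # competing writers for us.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS entries ("
        "  id INTEGER PRIMARY KEY,"
        "  name TEXT NOT NULL,"
        "  message TEXT NOT NULL)"
    )
    conn.commit()
    return conn
```

Running the setup on every request costs one cheap no-op statement, which is the price of having no "on load" phase in the CGI model.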
I have recently been writing CGI scripts in Go for the web server of our university's computer lab, and it has been a nice experience. In my case, the guestbook doesn't use SQLite; I just encode the list of entries using Go's native encoding/gob format (https://pkg.go.dev/encoding/gob), and it has worked out well -- critically, it frees me from needing CGO to use SQLite!
But in the end efficiency isn't my concern, as I have almost no visitors; what turns out to be more important is that Go has a lot of useful stuff in the standard library, especially the HTML templates, which let me write safe code easily. To test that claim, I'll even provide the link and invite anyone to try to break it: https://wwwcip.cs.fau.de/~oj14ozun/guestbook.cgi (the worst I anticipate happening is that someone could use up my storage quota, but even that should take a while).
How do you protect against concurrency bugs when two visitors make guestbook entries at the same time? With a lockfile? Are you sure you won't write an empty guestbook if the machine gets unexpectedly powered down during a write? To me, that's one of the biggest benefits of using something like SQLite.
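For what it's worth, both hazards raised here can be handled without SQLite, at the cost of doing the work by hand. A hedged Python sketch (file names invented, POSIX only): an advisory lock serializes concurrent writers, and write-temp-then-rename keeps a power cut from leaving a torn file.

```python
import fcntl
import json
import os
import tempfile

DATA_FILE = "guestbook.json"        # hypothetical storage file
LOCK_FILE = DATA_FILE + ".lock"

def append_entry(entry):
    # Advisory lock: a second CGI process blocks here until the first
    # one finishes its read-modify-write cycle.
    with open(LOCK_FILE, "w") as lockf:
        fcntl.flock(lockf, fcntl.LOCK_EX)
        entries = []
        if os.path.exists(DATA_FILE):
            with open(DATA_FILE) as f:
                entries = json.load(f)
        entries.append(entry)
        # Crash safety: write a complete new file, flush it to disk,
        # then atomically rename it over the old one. After a power
        # cut you have either the old file or the new one -- never a
        # half-written guestbook.
        fd, tmp = tempfile.mkstemp(dir=".", prefix=DATA_FILE + ".")
        with os.fdopen(fd, "w") as f:
            json.dump(entries, f)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, DATA_FILE)
```

This is essentially a hand-rolled subset of what SQLite's locking and journaling give you for free, which is the parent's point.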
This is a followup to Gold's previous post that served 200 million requests per day with CGI, which Simon Willison wrote a post about, which we had a thread about three days ago at https://news.ycombinator.com/item?id=44476716. It addresses some of the misconceptions that were common in that thread.
Summary:
- 60 virtual AMD Genoa CPUs with 240 GB (!!!) of RAM
- bash guestbook CGI: 40 requests per second (and a warning not to do such a thing)
- Perl guestbook CGI: 500 requests per second
- JS (Node) guestbook CGI: 600 requests per second
- Python guestbook CGI: 700 requests per second
- Golang guestbook CGI: 3400 requests per second
- Rust guestbook CGI: 5700 requests per second
- C guestbook CGI: 5800 requests per second
https://github.com/Jacob2161/cgi-bin
I wonder if the gohttpd web server he was using was actually the bottleneck for the Rust and C versions?
It's so simple and it can run anything, and it was also relatively easy to have the CGI script run inside a Docker container provided by the extension.
In other words, it's so flexible that it means the extension developers would be able to use any language they want and wouldn't have to learn much about Disco.
I would probably not push to use it to serve big production sites, but I definitely think there's still a place for CGI.
In case anyone is curious, it's happening mostly here: https://github.com/letsdiscodev/disco-daemon/blob/main/disco...
In a corporate environment, for internal use, I often see egregiously overspecced VMs or machines for sites that have very low requests per second. There's a commercial monitoring app that runs on K8s, 3 VMs of 128 GB RAM each, to monitor 600 systems: basically 500 MB per monitored system, just to poll each one every 5 minutes and draw some pretty graphs. Of course it has a complex app server integrated into the web server and so forth.
Yep. ERP vendors are the worst offenders. The last deployment, for 40-ish users, "needed" 22 CPU cores and 44 GB of RAM. After a long back-and-forth I negotiated down to 8 CPU cores and 32 GB. Looking at the usage statistics, it's 10% max... And it's cloud infra, so we're paying a lot for RAM and CPU sitting unused.
> No one should ever run a Bash script under CGI. It’s almost impossible to do so securely, and performance is terrible.
Actually, shell scripting is the perfect language for CGI on embedded devices. Bash is ~500K and other shells are 10x smaller. It can output headers and HTML just fine, and you can call other programs to do complex stuff. Obviously the source compresses down to a tiny size too, and since it's a script you can edit it or upload new versions on the fly. Performance is good enough for basic work. Just don't let the internet or unauthenticated requests at it (use an embedded web server with basic HTTP auth).
Easy uploading of new versions is a good point, and I agree that the likely security holes in the bash script are less of a concern if only trusted users have access to it. However, about 99% of embedded devices lack an MMU, much less 50K of storage, which makes it hard to run Unix shells on them.
Honestly, I'm just trying to understand why people want to return to CGI. It's cool that you can fork+exec 5000 times per second, but if you don't have to, isn't that significantly better? Plus, with FastCGI, it's trivial to have separate privileges for the application server and the webserver. The CGI model may still work fine, but it is an outdated execution model that we left behind for more than one reason, not just security or performance. I can absolutely see the appeal in a world where a lot of people are using cPanel shared hosting and stuff like that, but in the modern era when many are using unmanaged Linux VPSes you may as well just set up another service for your application server.
Plus, honestly, even if you are relatively careful and configure everything perfectly correctly, having the web server execute stuff in a specific folder inside the document root just seems like a recipe for problems.
Having completely isolated, ephemeral request handlers with no shared state and no persistent runtime makes for a very clean and nice programming model. It also makes deployments simple, because there is no graceful shutdown or service management to worry about; in the simplest case you can just drop in new executables and they will automatically be taken into use without any service interruption. Fundamentally, the CGI model lets you leverage a lot of the tools that Linux/UNIX has to offer.
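The model described above is small enough to show whole. A minimal Python sketch (not from the article): one process per request, the request read from environment variables and stdin, the response written to stdout, then exit -- replacing the file on disk is the entire deployment.

```python
#!/usr/bin/env python3
import os
import sys

def handle():
    # CGI hands the request over via environment variables and stdin.
    method = os.environ.get("REQUEST_METHOD", "GET")
    body = ""
    if method == "POST":
        length = int(os.environ.get("CONTENT_LENGTH") or 0)
        body = sys.stdin.read(length)
    # The response is just headers, a blank line, and a body on stdout.
    sys.stdout.write("Content-Type: text/plain\r\n\r\n")
    sys.stdout.write(f"method={method} body={body!r}\n")

if __name__ == "__main__":
    handle()  # process exits afterwards; nothing to shut down gracefully
```

There is deliberately no loop, no signal handling, and no state to drain: the operating system's process lifecycle is the request lifecycle.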
I guess multiprocessing got a bad reputation because it used to be slow and simple so it got looked down upon as a primitive tool for less capable developers.
But the world has changed. Modern systems are excellent for multiprocessing, CPUs are fast, cores are plentiful and memory bandwidth just continues getting better and better. Single thread performance has stalled.
It really is time to reconsider the old mantras. Setting up highly complicated containerized environments to manage a fleet of anemic VMs because NodeJS' single threaded event loop chokes on real traffic is not the future.
I thought the general view was that leaving the CGI model was not necessarily better for most people? In particular, I know I was at a bigger company that tried and failed many times to replace essentially a CGI model with a JVM based solution. Most of the benefits that they were supposed to see from not having the outdated execution model, as you call it, typically turned into liabilities and actually kept them from hitting the performance they claimed they would get to.
And, sadly, there is no getting around the "configure everything perfectly" problem. :(
Serverless is a marketing term for CGI, and you can observe that serverless is very popular.
A couple of years ago my (now) wife and I wrote a single-event Evite clone for our wedding invitations, using Django and SQLite. We used FastCGI to hook it up to the nginx on the server. When we pushed changes, we had to not just run the migrations (if any) but also remember to restart the FastCGI server, or we would waste time debugging why the problem we'd just fixed wasn't fixed. I forget what was supposed to start the FastCGI process, but it's not running now. I wish we'd used CGI, because it's not working right now, so I can't go back and check the wedding invitations until I can relogin to the server. I know that password is around here somewhere...
A VPS would barely have simplified any of these problems, and would have added other things to worry about keeping patched. Our wedding invitation RSVP did need its own database, but it didn't need its own IPv4 address or its own installation of Alpine Linux.
It probably handled less than 1000 total requests over the months that we were using it, so, no, it was not significantly better to not fork+exec for each page load.
You say "outdated", I say "boring". Boring is good. There's no need to make things more complicated and fragile than they need to be, certainly not in order to save 500 milliseconds of CPU time over months.
The performance numbers seem to show how bad it is in the real world.

For testing, I converted the CGI script into a FastAPI script and benchmarked it on my MacBook Pro M3. I'm getting super impressive performance numbers.

Read:

```
Statistics        Avg        Stdev       Max
  Reqs/sec      2019.54    1021.75   10578.27
  Latency       123.45ms    173.88ms     1.95s
HTTP codes:
  1xx - 0, 2xx - 30488, 3xx - 0, 4xx - 0, 5xx - 0
  others - 0
Throughput: 30.29MB/s
```

Write (shown in the graph of the OP):

```
Statistics        Avg        Stdev       Max
  Reqs/sec       931.72     340.79    3654.80
  Latency       267.53ms    443.02ms     2.02s
HTTP codes:
  1xx - 0, 2xx - 0, 3xx - 13441, 4xx - 0, 5xx - 215
  others - 572
Errors: timeout - 572
Throughput: 270.54KB/s
```

At this point, the contention might be the single SQL database. Throwing a beefy server at it like in the original post would increase the read performance numbers pretty significantly, but wouldn't do much on the write path.

I'm also thinking that in this day and age, one needs to go out of their way to do something with CGI. All macro and micro web frameworks come with an HTTP server, and there are plenty of options. I wouldn't do this for anything apart from fun.

FastAPI-guestbook.py: https://gist.github.com/rajaravivarma-r/afc81344873791cb52f3...
For smaller things, and I mean single-script stuff, I pretty much always use php-fpm. It’s fast, it scales, it’s low effort to run on a VPS. Shipped a side-project with a couple of PHP scripts a couple of years ago. It works to this day.
I suppose because they can. While there were other good reasons to leave CGI behind, performance was really the main reason it got left behind. Now that performance isn't the same concern it once was...
Think about all the problems associated with process life cycle - is a process stalled? How often should I restart a crashed process? Why is that process using so much memory? How should my process count change with demand? All of those go away when the lifecycle is tied to the request.
It’s also more secure because each request is isolated at the process level. Long lived processes leak information to other requests.
I would turn it around and say it’s the ideal model for many applications. The only concern is performance. So it makes sense that we revisit this question given that we make all kinds of other performance tradeoffs and have better hardware.
Or, you know, not every site is about scaling requests. It's another way you can simplify.
> but it is an outdated execution model
Not an argument.
The opposite trend of ignoring OS level security and hoping your language lib does it right seems like the wrong direction.
processless is the new serverless, it lets you fit infinite jobs in RAM thus enabling impressive economies of scale. only dinosaurs run their own processes
It's the same reason people are using SQLite for their startup's production database, or why they self-host their own e-mail server. They're tech hipsters. Old stuff is cool, man. Now if you'll excuse me, I need to typewrite a letter and OCR it into Markdown so I can save it in CVS and e-mail my editor an ar'd version of the repo so they can edit the new pages of my upcoming book ("Antique Tech: Escaping Techno-Feudalism with Old Solutions to New Problems")
It was traditional 30 years ago to describe web site traffic levels in terms of hits per day, perhaps because "two hundred thousand hits per day" sounds more impressive than "2.3 hits per second". Consequently a lot of us have some kind of intuition for what kind of service might need to handle a thousand hits per day, a million hits per day, or a billion hits per day.
As other commenters have pointed out, peak traffic is actually more important.
As a comparison between implementations it can be useful. The number is big enough that, if the test was actually done over a day, temporary oddities are dwarfed. If the test was done over an hour and multiplied, then it is meaningless: just quote the per-hour figure. Same, but more so, if the tests were much shorter than an hour.
What is the reason to choose gohttpd? I mean, there are a lot of non-standard libraries for Go that are pretty fast, or faster than gohttpd -- https://github.com/valyala/fasthttp/ as an example.
Currently in Europe. Earlier, I was trying to use the onboard wifi on a train, which has frequent latency spikes, as you can imagine. It never quite drops out, but latency does vary between 50ms and 5000ms on most things.
I struggled for _15 mins_ on yet another f#@%ng-Javascript-based-ui-that-does-not-need-to-be-f#@%ng-Javascript, simply trying to reset my password for Venmo.
Why... oh why... do we have to have 9.1 megabytes of f#@*%ng scripts just to reset a single damn password? This could be literally 1KB of HTML5 and maybe 100KB of CSS.
Anyway, this was a long way of saying I welcome FastCGI and server-side rendering. JS needs to be put back into the toy bin... er, trash bin, where it belongs.
> What is a modern python-friendly alternative?

Python has a policy against maintaining compatibility with boring technology. We discussed this at some length in this thread the other day at https://news.ycombinator.com/item?id=44477966; many people voiced their opposition to the policy. The alternatives suggested for the specific case of the cgi module were:
- wsgiref.handlers.CGIHandler, which is not deprecated yet. gvalkov provided example code for Flask at https://news.ycombinator.com/item?id=44479388
- use a language that isn't Python, so you don't have to debug your code every year to make it work again when the language maintainers intentionally break it
- install the old cgi module for new Python from https://github.com/jackrosenthal/legacy-cgi
- continue using Python 3.12, where the module is still in the standard library, until mid-02028
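A minimal sketch of the wsgiref.handlers.CGIHandler route using only the standard library (no Flask; the app here is a made-up example) -- any WSGI callable can be served as a CGI script this way:

```python
#!/usr/bin/env python3
import os
from wsgiref.handlers import CGIHandler

def app(environ, start_response):
    # An ordinary WSGI application; Flask and Django apps expose the
    # same callable interface and work here too.
    start_response("200 OK", [("Content-Type", "text/plain")])
    path = environ.get("PATH_INFO", "/")
    return [f"Hello from {path}\n".encode()]

if __name__ == "__main__" and "GATEWAY_INTERFACE" in os.environ:
    # Only run the handler when actually invoked as CGI; CGIHandler
    # reads the CGI environment and writes the response (status as a
    # "Status:" header) to stdout.
    CGIHandler().run(app)
```

This keeps the application portable: the same `app` can later be mounted under FastCGI or a regular WSGI server without changes.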
sqlite resolves lock contention between processes with exponential backoff. When the WAL reaches 4MB it stops all writes while it gets compacted into the database. Once the compaction is over all the waiting processes probably have retry intervals in the hundred millisecond range, and as they exit they are immediately replaced with new processes with shorter initial retry intervals. I don't know enough queuing theory to state this nicely or prove it, but I imagine the tail latency for the existing processes goes up quickly as the throughput of new processes approaches the limit of the database.
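Those knobs are all settable per connection. A hedged Python sketch (database name invented; the 4 MB figure corresponds to SQLite's default wal_autocheckpoint of 1000 pages at the default 4 KB page size):

```python
import sqlite3

conn = sqlite3.connect("guestbook.db", timeout=5.0)  # busy handler waits up to 5s

# WAL lets readers proceed while a writer is active, which is most of
# the win for a read-heavy guestbook.
conn.execute("PRAGMA journal_mode=WAL")

# Same knob as timeout= above, expressed in milliseconds.
conn.execute("PRAGMA busy_timeout=5000")

# Raise the checkpoint threshold (in pages) so the stop-the-writers
# compaction described above happens less often, at the cost of a
# bigger -wal file and longer individual checkpoints.
conn.execute("PRAGMA wal_autocheckpoint=10000")

# Commonly paired with WAL: fsync at checkpoint time rather than on
# every transaction. This trades some durability for throughput.
conn.execute("PRAGMA synchronous=NORMAL")
```

Because every CGI process opens its own connection, these pragmas have to be issued on each request; they are cheap, but it's one more cost of having no persistent runtime.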
OP posited SQLite database contention. I don't know enough about this space to agree or disagree. It would be interesting, and perhaps illuminating, to perform a similar experiment with Postgres.
CGI still makes a lot of sense when there are many applications that each get requests only at a low rate. Pack them onto servers: there's no RAM requirement unless a request is actively being served. If most of the requests can be served straight from static files by the web server, then it's really only the write rate that matters, so even high-traffic sites could be a good match. With sendfile and kTLS, the static content doesn't even need to touch user space.
It'd be interesting to compare the performance of the author's approach to an analogous design that changes CGI for WASI, and scripts/binaries to Wasm.