
essayist | 2 years ago

It's 1996 or so. The web is new. The Bureau of Transportation Statistics (BTS) at the US Department of Transportation collects on-time arrival data for all the major airlines, by flight. It publishes some summary reports, but what people really want to see is how all the flights from, say, JFK to LAX performed in a given month.

The monthly database text file is not that large, but it is unwieldy.

I'm a web consultant, but database backends are not yet a thing, at least not for us. Static webpages, all the way down.

So I use a script to parse the database into a tree of directories and static pages. For example, JFK/index.html lists every airport receiving flights from JFK (LAX, SFO, and so on), and JFK/LAX.html holds that month's results for all JFK-to-LAX flights. And so on for every origin airport.
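The scheme is simple enough to sketch. A minimal version, assuming a hypothetical comma-separated layout (origin, destination, flight, on-time percentage) — the real BTS file format and my original script are long gone:

```python
import csv
import io
from collections import defaultdict

# Hypothetical flat-file layout: origin, destination, flight, on-time %.
SAMPLE = """\
JFK,LAX,AA1,81.0
JFK,LAX,UA5,77.5
JFK,SFO,AA7,90.2
ORD,LAX,AA3,64.8
"""

def build_pages(text):
    """Group rows by origin, then emit one index page per origin
    and one detail page per (origin, destination) pair."""
    routes = defaultdict(lambda: defaultdict(list))
    for origin, dest, flight, pct in csv.reader(io.StringIO(text)):
        routes[origin][dest].append((flight, float(pct)))

    pages = {}  # path -> HTML body; in 1996 these got written out and ftp'd up
    for origin, dests in routes.items():
        links = "".join(f'<li><a href="{d}.html">{d}</a></li>'
                        for d in sorted(dests))
        pages[f"{origin}/index.html"] = f"<ul>{links}</ul>"
        for dest, flights in dests.items():
            rows = "".join(f"<tr><td>{f}</td><td>{p}</td></tr>"
                           for f, p in flights)
            pages[f"{origin}/{dest}.html"] = f"<table>{rows}</table>"
    return pages

pages = build_pages(SAMPLE)
print(sorted(pages))
```

One detail page per route pair is also why the page count explodes: every origin gets an index plus one file per destination it serves, which is exactly what the search engines later choked on.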

As I recall, once I'd worked it out, it took 15 minutes to generate all those files on my Mac laptop, and then a little ftp action got the job done. Worked great, but someone did complain that we were polluting search results with so many pages for LAX, SFO, etc. etc. (SEO, sadly, was not really on our radar.)

That was replaced within a year by a PHP setup with a proper Oracle backend, and I had to explain to a DB admin what a weighted average was, but that's another story.
