
Hosting SQLite Databases on GitHub Pages

567 points | isnotchicago | 4 years ago | phiresky.netlify.app

77 comments


papanoah|4 years ago

I get this error on Firefox 90.0.2 on Debian 10. It works in Chrome, though.

[error: RuntimeError: abort(Error: Couldn't load https://phiresky.netlify.app/world-development-indicators-sq.... Status: 0). Build with -s ASSERTIONS=1 for more info.]

Other than that, it's pretty awesome and exactly what I was hoping for.

phiresky|4 years ago

Looks like Netlify changed what headers they send since I wrote this article. Detecting support for Range requests is kinda tricky and relies on heuristics [1]. Not sure why it still works in Chrome though.

You can go to this version of my blog, it should work there:

https://phiresky.github.io/blog/2021/hosting-sqlite-database...

Maybe the link could be updated?

Except for the DOM demo, since that needs some Cross-Origin Isolation headers you can't set on GitHub Pages (that's the reason I mirrored it to Netlify originally).

[1] https://github.com/phiresky/sql.js-httpvfs/issues/13

cafxx|4 years ago

That's one lovely trick.

If I may suggest one thing... instead of range requests on a single huge file, how about splitting the file into 1-page fragments in separate files and fetching them individually? This buys you caching (e.g. on CDNs) and compression (which you could also perform ahead of time), both things that are somewhat tricky with a single giant file and range requests.

With the reduction in size you get from compression, you can also use larger pages almost for free, potentially further decreasing the number of roundtrips.

There's also a bunch of other things that could be tried later, like using a custom dictionary for compressing the individual pages.
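The fragment-per-page idea above can be sketched like so. Everything here is hypothetical for illustration (the page size, the `/pages/N` layout, and the `fetch` callable standing in for an HTTP GET are all invented, not the article's actual scheme):

```python
PAGE_SIZE = 4096  # one fragment per database page (illustrative)

def pages_for_read(offset: int, length: int) -> list[int]:
    """Page numbers a byte-range read touches."""
    first = offset // PAGE_SIZE
    last = (offset + length - 1) // PAGE_SIZE
    return list(range(first, last + 1))

def fragment_url(base: str, page_no: int) -> str:
    """Each page is its own object, so a CDN can cache (and a build
    step can pre-compress) it by URL."""
    return f"{base}/pages/{page_no}"

def read(fetch, base: str, offset: int, length: int) -> bytes:
    """Reassemble an arbitrary byte range from per-page fragments.
    `fetch(url)` stands in for an HTTP GET of one fragment."""
    pages = pages_for_read(offset, length)
    buf = b"".join(fetch(fragment_url(base, p)) for p in pages)
    start = offset - pages[0] * PAGE_SIZE
    return buf[start:start + length]
```

A read that straddles a page boundary simply fetches both fragments and slices the concatenation, so the SQLite layer above never notices the difference.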

ignoramous|4 years ago

> If I may suggest one thing... instead of range requests on a single huge file, how about splitting the file into 1-page fragments in separate files and fetching them individually?

Edgesearch does that, though with Cloudflare Workers mediating searches: https://github.com/wilsonzlin/edgesearch

It uses roaring bitmaps to index, but could also use Stavros' trick (bloom/cuckoo filters) to further gate false positives on the client: https://news.ycombinator.com/item?id=23473365
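The gating trick mentioned here can be sketched with a toy Bloom filter: ship a small bitset alongside the index so the client can skip a fetch entirely when a term is definitely absent. Parameters and hashing are illustrative, not Edgesearch's actual implementation:

```python
import hashlib

class Bloom:
    """Toy Bloom filter. A `False` from might_contain() means the item
    is definitely absent, so the client can skip the network round trip;
    `True` means 'probably present', so fetch and verify."""

    def __init__(self, m_bits: int = 8192, k: int = 4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: str):
        # Derive k bit positions from independent salted hashes.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        return all(self.bits[p // 8] >> (p % 8) & 1
                   for p in self._positions(item))
```

A 1 KiB filter like this rides along with the first request, and every negative lookup after that costs zero HTTP requests.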

ticklemyelmo|4 years ago

I think that's the core innovation here, smart HTTP block storage.

I wonder if there has been any research into optimizing all HTTP range requests at the client level in a similar way, i.e. considering the history of requests on a particular URL and doing the same predictive exponential requests, or grabbing the full file asynchronously at a certain point.

goeiedaggoeie|4 years ago

SQLite not being in browsers, instead of IndexedDB, still saddens me today.

I designed a system 15 years ago that released dimensional star schemas for specific reports as SQLite databases into Adobe AIR (or whatever the prerelease name was) for a large retailer in the UK. We would query the data warehouse and build the SQLite db file (I can't remember the exact db sizes, but they weren't too big, 15 MB or so), and the computational performance we got from using SQLite as our crunching layer when building the dimensional reports with drilldown was just astounding.

devwastaken|4 years ago

SQLite was not designed for arbitrary execution with control over queries and internal state. Someone demonstrated that you could redirect the pointer address for various callbacks and potentially exploit it. The solution currently is compiling sqlite in webassembly. Though I am certainly saddened that browsers don't have some sort of web-made equivalent natively.

CRConrad|4 years ago

I've been thinking about inventing exactly that for the last week or so.

devwastaken|4 years ago

Huh, this is a funny one. I had this idea a long time ago when doing some napkin design of a "static wiki". The problem was that the querying didn't fit how software optimizes content delivery, so millions of people requesting from a single database would most likely be difficult to serve performantly. Secondly, writing to said database would of course be impossible because of locking, and you'd need a server anyway to do any sort of session-based submission of data.

Very nice for read-only static data sets for small sites though. In fact this may be very useful for county mapping systems, converting the GIS data over to tables in SQLite.

If at all possible it would be better if this could be in ES5 (no async/await) JavaScript; otherwise only very modern browsers are going to be able to access it. People with older phones (which is many) wouldn't be able to use it at all.

earthboundkid|4 years ago

> People with older phones (which is many) wouldn't be able to use it at all.

Caniuse.com does not agree with this assertion.

manmal|4 years ago

I can’t fully put my finger on why exactly, but I feel that this is a transformative idea. What’s to stop me from emulating a private SQLite DB for every user of a web app, and use that instead of GraphQL?

noduerme|4 years ago

I have software deployed that depends on locally run (MySQL) DBs sitting on user/retail outlet machines, and I can think of a few reasons: big data downloads; modifying anything about the data structure can cause breakage between the middleware and the DB; you're basically distributing a data model, along with what should normally be server-side code, to the client. Conceptually it's great, but for large data I'd hate to be paying for the bandwidth if it were public. And if it's not just a web page, you need a way to update it on every platform. Also, there are always things in a web app that simply cannot be client-side because they'd pose a security threat.

The concept reminds me a little bit of CD-ROM multimedia, in that it's so self-contained. For something like that it's great.

deanclatworthy|4 years ago

Nothing's stopping you from doing that right now with localStorage or IndexedDB. The issue is that the browser cannot be trusted to keep that data, or at least these APIs aren't designed for long-term persistent storage. If we could solve this problem, we could go a long way towards some level of decentralisation. On the other hand, which is more secure: your service or the user's machine? So there's a lot to consider.

ndm000|4 years ago

IMHO this reduces the need for a full-fledged database serving read-only information at scale. The restriction before this was that a SQLite file had to be on a single server. Now, with SQLite on S3, you could write a set of scalable web services on top of the file on S3 and scale those services as much as needed.

randomdata|4 years ago

Nothing, I'm sure, but the idea behind GraphQL is that you can query all of the different backend services you have in a single request, reducing the network latency associated with firing off requests to all those services individually. It would seem that an emulated SQLite database would bring you right back to having to perform multiple network requests, assuming your data needs are more complex than a single relation. Under normal usage, SQLite avoids the N+1 problem by not having IPC overhead, but that wouldn't apply here.

fouc|4 years ago

Yes, I was just thinking the same thing. It would be very interesting to skip GraphQL altogether using this alongside RESTful APIs.

nwsm|4 years ago

You don't own the database. You can't be sure it's not being tampered with, and joining any other users' data together still requires your own backend. You also can't promise data won't be lost.

As another reply said, this could be useful for data-intensive readonly applications.

benoror|4 years ago

How about running GraphQL on top of it?

Hackbraten|4 years ago

Good writeup, thanks!

All the code snippets, when run, give me the following error message:

[error: RuntimeError: abort(Error: server uses gzip or doesn't have length). Build with -s ASSERTIONS=1 for more info. (evaluating 'new WebAssembly.RuntimeError(e)')]

Could that be a Mobile Safari thing?

cube00|4 years ago

Same error on Firefox 90.0.2 desktop.

conradludgate|4 years ago

Same error on mobile chrome on my android

vladdoster|4 years ago

Same error with Vivaldi 4.1.2369.11 (Chromium based)

shams93|4 years ago

Yeah, it's sad Apple has crippled their browser; the browser on a $40 Android tablet is more powerful than Safari running on a $1200 iPhone.

makmanalp|4 years ago

FWIW the scientific computing community (who often deal with petabytes of geodata) has been thinking about ideas like this for a while, e.g. techniques around file formats that are easy to lazily and partially parse, (ab)using FUSE to do partial reads using HTTP Range requests, some combination thereof, etc.:

http://matthewrocklin.com/blog/work/2018/02/06/hdf-in-the-cl...

zubairq|4 years ago

On yazz.com we have been embedding and running SQLite in web pages for over 2 years now. It is definitely something that works well.

mfbx9da4|4 years ago

Wow okay, so is this like an HTTP based buffer pool manager? Instead of reading pages from disk it reads via HTTP?

manmal|4 years ago

That’s how I understood it too
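That understanding can be made concrete with a sketch: an LRU page cache whose backing "disk" is a callable standing in for an HTTP GET with a `Range` header. The class, page size, and `fetch_range(start, end)` signature are all invented stand-ins, not the library's real API:

```python
from collections import OrderedDict

PAGE = 4096  # illustrative page size

class HttpPagePool:
    """Tiny buffer pool: pages are fetched over HTTP on miss and kept
    in an LRU cache, just like a database buffer pool backed by disk."""

    def __init__(self, fetch_range, capacity: int = 64):
        # fetch_range(start, end) stands in for an HTTP GET with
        # a `Range: bytes=start-end` header.
        self.fetch_range = fetch_range
        self.capacity = capacity
        self.cache: OrderedDict[int, bytes] = OrderedDict()

    def page(self, n: int) -> bytes:
        if n in self.cache:
            self.cache.move_to_end(n)  # cache hit: mark most recent
            return self.cache[n]
        data = self.fetch_range(n * PAGE, (n + 1) * PAGE - 1)
        self.cache[n] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data
```

Repeated reads of a hot page cost one request total; only cold pages go over the network, which is exactly the buffer-pool trade with the "disk" swapped for a WAN.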

makmanalp|4 years ago

This is a great example of how as technology changes, it changes use cases, which can prompt a revisiting of what was once considered a good idea. You'll often see the pendulum of consensus swing in one direction, and then swing back to the exact opposite direction less than a decade later.

The 2010s saw REST-conforming APIs with JSON in the body, largely as an (appropriate) reaction to what came before, and also in accordance with changes in what browsers were able to do, and thus how much of web apps moved from the backend to the front end.

But then, that brought even more momentum, and web apps started doing /even more/. There was a time when downloading a few megabytes per page, generating an SVG chart, drawing an image, or responding to live user interaction was all unthinkable. But interactive charting is now the norm. So now we need ways to access ranges and pieces of bulk data. And it looks a lot more like block storage access than REST.

---

These are core database ideas: you maintain a fast and easy-to-access local cache of key bits of data (called a buffer pool, stored in memory in e.g. MySQL). In this local cache you keep information on how to access the remaining bulk of the data (called an index). You minimize dipping into "remote" storage that takes 10-100x the time to access.

Database people refer to the "memory wall": a big gap in the cache hierarchy (CPU registers, L1-L3, main memory, disk/network) where the second you dip beyond it, latency tanks (cue the "latency numbers every programmer should know" chart). And so you have to treat this specially and build your query plan to work around it. As storage techniques changed (e.g. SSDs, then NVMe and 3D XPoint, etc.), database research shifted to adapt techniques to leverage the new tools.

In this new case, the "wall" is just before the WAN internet, instead of being before the disk subsystem.

---

This new environment might call for a new database (and application) architectural style where executing large and complex code quickly at the client side is no problem at all in an era of 8 core CPUs, emscripten, and javascript JITs. So the query engine can move to the client, the main indexes can be loaded and cached within the app, and the function of the backend is suddenly reduced to simply storing and fetching blocks of data, something "static" file hosts can do no problem.

The fundamental questions are: where do I keep my data stored, where do I keep my business logic, and where do I handle presentation? The answer is what varies. Variations on this thought:

We've already had products that completely remove the "query engine" from the "storage" and provide it as a separate service, e.g. Presto/Athena, where you set it up to use anything from flat files to RDBMSs as "data stores" across which it can do fairly complicated query plans, joins, predicate pushdown, etc. Slightly differently, Snowflake is an example of a database architected around storing the main data in large, cheap cloud storage like S3: no need to copy and keep entire files on the EC2 node, only the block ranges you know you need. Yet another example of leveraging the boundary between the execution and the data.

People have already questioned the wisdom of having a mostly dumb CRUD backend layer with minimal business logic between the web client and the database. The answer is that databases just suck at catering to this niche, nothing vastly more complicated than that. They certainly could do granular auth, serde, validation, vastly better performance isolation, HTTP instead of a special protocol, a JavaScript client, etc. Some tried.

Stored procedures are also generally considered bad (bad tooling, bad performance characteristics and isolation, a large element of surprise), but they needn't be. They're vastly better in some products that are generally inaccessible to, or unpopular with, large chunks of the public. But they're a half-baked attempt to keep business logic and data close together. And some companies decided at a certain time that their faults were not greater than their benefits, and had large portions of critical applications written this way not too long ago.

---

Part of advancing as an engineer is to be able to weigh the cost of when it's appropriate to sometimes free yourself from the yoke of "best practices" and "how it's done". You might recognize that something about what you're trying to do is different, or times and conditions have changed since a thing was decided.

And also to know when it's not appropriate: existing, excellent tooling probably works okay for many use cases, and the cost of invention is unnecessary.

We see this often when companies and products that are pushing boundaries or up against certain limitations might do something that seems silly or goes against the grain of what's obviously good. That's okay: they're not you, and you're not them, and we all have our own reasons, and that's the point.

galaxyLogic|4 years ago

I would think that if we know the SQL queries we need, we could pre-perform them and store the results in simple indexed tables. The web app would then just need to ask for the data at a given index value. No SQL needed in the browser. Could this work? Pre-executing SQL.
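That "pre-executing SQL" idea could look like the sketch below: run each known query at build time, once per parameter value, and publish the JSON results as static files the client fetches by key. The schema, query, and file layout are invented for illustration:

```python
import json
import sqlite3

def prebake(db_path: str, query: str, keys: list) -> dict[str, str]:
    """Run a parameterized `query` once per key at build time.
    Returns {key: json_result}, ready to be written out as static
    files (e.g. results/<key>.json) that need no SQL on the client."""
    conn = sqlite3.connect(db_path)
    out = {}
    for key in keys:
        rows = conn.execute(query, (key,)).fetchall()
        out[str(key)] = json.dumps(rows)
    conn.close()
    return out
```

This trades flexibility for simplicity: it works only for queries known ahead of time, which is exactly the case the parent comment describes.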

ridaj|4 years ago

Neat. Just not to be used for authentication!

Mooty|4 years ago

I wonder if you could turn this, with a Google Spreadsheet, into a real DB system with write access and a little obfuscated security wrapper.

chovybizzass|4 years ago

Bummer about not being able to write to SQLite. I am using Neocities and was wondering how I could get a DB into play.

levi_n|4 years ago

Depending on how often you need to write, you could use a CI pipeline on a cron to collect your updates, add them to the SQLite file, and commit the changes.
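The "apply the queued updates" step of such a pipeline could be a short script like this sketch: drain pending writes (here, JSON lines collected from submissions) into the SQLite file, which the CI job then commits back to the repo. The table, record shape, and paths are all invented:

```python
import json
import sqlite3

def apply_updates(db_path: str, updates_jsonl: str) -> int:
    """Apply one INSERT per JSON line to the site's SQLite file.
    Returns the number of rows added, so the CI job can skip the
    commit when there is nothing new."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS entries (ts INT, body TEXT)")
    added = 0
    for line in updates_jsonl.splitlines():
        if not line.strip():
            continue  # tolerate blank lines in the queue file
        rec = json.loads(line)
        conn.execute("INSERT INTO entries VALUES (?, ?)",
                     (rec["ts"], rec["body"]))
        added += 1
    conn.commit()
    conn.close()
    return added
```

Since writes happen only inside the pipeline, there is a single writer and SQLite's locking never becomes a problem; readers just see the new file on the next deploy.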

encryptluks2|4 years ago

I don't think it would be impossible, although I think it may be easier to just use a JavaScript Git client and something like CriticMarkup to make changes.
