top | item 40915886

Storing Scraped Data in an SQLite Database on GitHub

42 points | ngshiheng | 1 year ago | jerrynsh.com

8 comments

kristianp | 1 year ago
It's fun to test the boundaries of GitHub's services, but if you're doing something useful I'd just rent a VPS; they can be had from $5 a month. You could still upload the SQLite file to GitHub via a check-in.
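The check-in approach described above could look something like the following scheduled workflow — a rough sketch only, where the repo layout, the `scrape.py` script, and the `data.db` path are all placeholders, not anything from the article:

```yaml
# Hypothetical GitHub Actions workflow: run a scraper on a schedule,
# then commit the updated SQLite file back into the repository.
name: scrape
on:
  schedule:
    - cron: "0 6 * * *"   # daily at 06:00 UTC
jobs:
  scrape:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python scrape.py --out data.db   # placeholder scraper invocation
      - run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add data.db
          # commit only when the file actually changed
          git diff --cached --quiet || git commit -m "Update scraped data"
          git push
```

The `git diff --cached --quiet ||` guard keeps the job green on runs where the scrape produced no new data.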
chatmasta | 1 year ago
Presumably you can bypass the artifact retention limit by uploading them as release artifacts (which are retained forever) rather than job artifacts.

(Not that I’d advocate for this in general, since ultimately you’re duplicating a bunch of data and will eventually catch the eye of some GitHub compliance script.)
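For illustration, uploading to a release rather than as a job artifact could be done with the `gh` CLI from within the workflow — a sketch assuming an authenticated `gh` and a placeholder `data.db` file:

```shell
# Hypothetical: attach the scraped database to a dated release instead of
# uploading it as a job artifact (release assets have no retention window).
TAG="data-$(date +%Y-%m-%d)"

# Create a release for today's snapshot
gh release create "$TAG" --notes "Scraped data snapshot for $TAG"

# Upload the SQLite file as a release asset; --clobber replaces an
# existing asset of the same name on re-runs
gh release upload "$TAG" data.db --clobber
```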

ngshiheng | 1 year ago
Interesting! Perhaps cleaning up the older data might help a bit here.

> since ultimately you’re duplicating a bunch of data and will eventually catch the eye of some GitHub compliance script

I suppose this could also be a concern with git scraping, as we are basically duplicating data through git commits (not trying to imply that one is better or worse). That said, I'm not sure GitHub would be fine with any of these approaches if more people were to do the same at a larger scale.
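The "cleaning up the older data" idea above could be a simple pruning step run before each commit, so the database (and each new git blob) stays small. A minimal sketch using the `sqlite3` CLI — the `items` table and `scraped_at` column are assumptions, not from the article:

```shell
#!/bin/sh
# Hypothetical cleanup: prune rows older than 90 days from the scraped
# SQLite database. Table name ("items") and timestamp column
# ("scraped_at") are illustrative placeholders.
DB="data.db"

# Seed a tiny example table (stand-in for the scraper's real data):
# one stale row and one fresh row
sqlite3 "$DB" <<'SQL'
CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, scraped_at TEXT);
INSERT INTO items (scraped_at) VALUES (datetime('now', '-200 days'));
INSERT INTO items (scraped_at) VALUES (datetime('now'));
SQL

# Delete stale rows, then VACUUM to reclaim the freed pages so the
# file on disk actually shrinks
sqlite3 "$DB" "DELETE FROM items WHERE scraped_at < datetime('now', '-90 days'); VACUUM;"

sqlite3 "$DB" "SELECT COUNT(*) FROM items;"   # prints 1 (only the fresh row remains)
```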