Storing Scraped Data in an SQLite Database on GitHub
42 points by ngshiheng 1 year ago | 8 comments

- kristianp 1 year ago
It's fun to test the boundaries of GitHub's services, but if you're doing something useful I'd just rent a VPS; they can be had from $5 a month. You could still upload the SQLite file to GitHub via a check-in.
- chatmasta 1 year ago
Presumably you can bypass the artifact retention limit by uploading them as release assets (which are retained indefinitely) rather than job artifacts.
(Not that I’d advocate for this in general, since ultimately you’re duplicating a bunch of data and will eventually catch the eye of some GitHub compliance script.)
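A minimal workflow sketch of this release-asset approach, assuming the scraper writes data.db, a scrape.py entry point, and a rolling release tagged latest-db (all three names hypothetical):

    # .github/workflows/scrape.yml (sketch)
    on:
      schedule:
        - cron: "0 6 * * *"      # assumed daily schedule
    permissions:
      contents: write            # required to create releases and upload assets
    jobs:
      scrape:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Run scraper    # assumed entry point
            run: python scrape.py
          - name: Publish DB as a release asset
            env:
              GH_TOKEN: ${{ github.token }}
            run: |
              # Reuse one rolling release and overwrite its asset, so the
              # repo keeps a single copy instead of accumulating duplicates.
              gh release create latest-db --notes "rolling DB release" || true
              gh release upload latest-db data.db --clobber

Overwriting a single rolling release also sidesteps the duplication concern above, since only the latest copy of the database is retained.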
- jzebedee 1 year ago
That's exactly what I did for scraping the USCIS processing time daily: https://github.com/jzebedee/uscis
- Crier1002 1 year ago
Out of curiosity: is there a specific reason to use robinraju/release-downloader@v1 over actions/download-artifact@v4 in your 'Download previous DB' step in build_db.yml?
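For context on the trade-off the question touches: actions/download-artifact@v4 fetches job artifacts, which by default belong to the current workflow run (pulling from a previous run requires a run-id and token) and expire with the retention window, whereas release assets persist. A sketch of the release-based step, with the tag name assumed:

    - name: Download previous DB
      uses: robinraju/release-downloader@v1
      with:
        tag: latest-db        # assumed rolling release tag
        fileName: data.db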
- ngshiheng 1 year ago
Interesting! Perhaps cleaning up the older data might help a bit here.
> since ultimately you’re duplicating a bunch of data and will eventually catch the eye of some GitHub compliance script
I suppose this could also be a concern with git scraping, since we are basically duplicating data through git commits (not trying to imply that one is better or worse). That said, I'm not sure GitHub would be fine with any of these approaches if more people did the same at a larger scale.
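For reference, the git-scraping pattern mentioned here usually amounts to a commit-if-changed step; a minimal sketch, with the scraped file name assumed:

    - name: Commit scraped data
      run: |
        git config user.name "github-actions[bot]"
        git config user.email "github-actions[bot]@users.noreply.github.com"
        git add data.json
        # Commit only when the scrape actually changed something,
        # so history stores deltas rather than redundant snapshots.
        git diff --cached --quiet || git commit -m "Update scraped data"
        git push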
- chatmasta 1 year ago
What would be interesting is if you could find a way to scrape only the deltas and then somehow reconcile them into the full scrape.
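One sketch of that idea, assuming a records table keyed by a unique id (both names hypothetical): scrape only changed rows into delta.db, then upsert them into the cumulative database so it stays equivalent to a full scrape.

    - name: Merge today's delta into the full DB
      run: |
        # INSERT OR REPLACE keeps the newest row per id, so data.db
        # converges to the same contents a full scrape would produce.
        sqlite3 data.db "
          ATTACH 'delta.db' AS delta;
          INSERT OR REPLACE INTO records SELECT * FROM delta.records;
        "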