tetsusaiga | 3 years ago
Motivated to prove myself, I ended up writing a crawler with all kinds of contingencies for the crap UI. It would press the "Download" button that exported each record as a JSON file (a couple thousand of these). Then my little Node app would ship the JSON files to an S3 bucket for safekeeping, parse them, and save each record in DynamoDB, I believe it was.
Doesn't take a genius to come up with the idea to write a crawler, but no one else did, and the business was entering a state of frantic desperation re: this issue, so I felt pretty smart for a bit.
Then a few months later they abandoned the CMS and all its corresponding data.