3xblah | 5 years ago
curl https://proxy.c2.com/wiki/remodel/pages/ > 1.html
To get a wiki page in JSON and convert it to a text file without using gratuitous Javascript. Optional: feed the result through fmt(1).
curl https://proxy.c2.com/wiki/remodel/pages/EgolessProgramming|tr -d '\n'|sed 's/ *//;s/{ \"date\": \"//;s/ \"text\": \"//;s/\",/\
/
s/\. :)//;s|\\r\\n\*|\
\
\*|g
s|\\r\\n|\
|g
s/'''//g; s/''//g;s/\\\\"/"/g;s/\\"/"/g;s/\\t//g;s/ ://g;s/ */ /g;s/\" }$//' > 1.txt
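As a sanity check, the core of that conversion (unwrap the JSON record, expand escaped \r\n into real newlines) can be tried offline on a made-up sample record:

```shell
# Minimal sketch of the same idea: strip the JSON wrapper fields and
# expand escaped \r\n sequences into real newlines. The sample record
# below is invented for illustration; a real one comes from the proxy.
sample='{ "date": "2014", "text": "First line.\r\nSecond line." }'
printf '%s' "$sample" |
sed 's/{ "date": "[^"]*", "text": "//; s/" }$//; s/\\r\\n/\n/g'
```

Note that `\n` in the sed replacement is a GNU sed extension; portable sed needs a backslash followed by a literal newline, as in the one-liner above.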
To search the wiki: curl "https://proxy.c2.com/cgi/fullSearch/?search=$1"|sed 's|href=wiki.|href=https://proxy.c2.com/wiki/remodel/pages/|g' > 1.html
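The href rewrite in that search command can likewise be checked offline. Judging from the sed pattern, fullSearch results apparently link pages as relative href=wiki?PageName; the anchor below is invented for illustration:

```shell
# Rewrite relative search-result links into absolute proxy URLs.
# Sample anchor is invented; a real one comes from fullSearch output.
printf '<a href=wiki?EgolessProgramming>EgolessProgramming</a>\n' |
sed 's|href=wiki.|href=https://proxy.c2.com/wiki/remodel/pages/|g'
```

The `.` after `wiki` in the pattern matches the `?` separator, which is why it disappears from the rewritten link.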
bear8642 | 5 years ago
It also appears a quote is missing in the second code block.
3xblah | 5 years ago
Fact: It took more work for someone to convert the HTML to JSON, write the Javascript and set up the proxy than it did for me to write a shell script.
I am not sure what the benefit(s) were to that person versus the one-time cost of switching away from plain HTML to requiring Javascript. No doubt he deemed it worth the time to set up.
What I do know is the benefit to me versus the one-time cost of writing a shell script. It means I do not need to use Javascript or submit to Google Analytics. I do not even need internet access once I have downloaded the C2 wiki, converted it to text and stored it on local media. If it one day disappears from the web and the IA, I still have a copy. This wiki is a piece of history and it is not changing.
Apologies for the error with the quotes. Here is a fix:
cat > 1.sed
^D