item 10443742

Pigshell: Shell for the web

56 points | fbrusch | 10 years ago | pigshell.com

18 comments

[+] fiatjaf|10 years ago|reply
The problem is that the documentation is very difficult to understand, it is very difficult to develop plugins (like Google Drive) and extend the functionality, and it is very difficult to run the local server on a VPS. Customization (dotfiles) is also complicated and undocumented.
[+] tubelite|10 years ago|reply
I agree. Mea culpa.

I am still active on the project and I am working to address the issues you mention, though progress has been slower than I could wish.

[+] andyidsinga|10 years ago|reply
The example of grabbing a page and feeding it into d3 is whizbang! ( https://github.com/pigshell/pigshell#hello-world )

I tried to cat this HN page, but also got the CORS error -- trying to route through [h]ttp://localhost:50937/...

You need to run psty for proxying certain things and local access ...so it's not a "pure client-side Javascript app running in the browser"

pretty cool nonetheless -- maybe psty could be baked into a Chrome or FF extension (?)

[+] tubelite|10 years ago|reply
Try

ycat https://news.ycombinator.com | hgrep .athing | html

ycat uses YQL to bypass CORS.

There are a few interesting workflows which can be done completely in the browser without a proxy: e.g. copying files from one Google Drive account to another, backing up Google Drive to Dropbox, and so on.
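To make the ycat trick concrete, here is a minimal sketch of how such a command could build its proxied request. The endpoint and the `select * from html` table reflect the public YQL service of the era; the real ycat implementation may differ.

```python
# Sketch of the ycat idea: instead of fetching a page directly (and
# hitting CORS), ask Yahoo's YQL service -- which does allow
# cross-origin requests -- to fetch the page and hand back its HTML.
from urllib.parse import urlencode

YQL_ENDPOINT = "https://query.yahooapis.com/v1/public/yql"

def yql_request_url(page_url):
    """Build the YQL URL that proxies a fetch of page_url."""
    query = 'select * from html where url="%s"' % page_url
    return YQL_ENDPOINT + "?" + urlencode({"q": query, "format": "json"})
```

The browser then requests this URL instead of the page itself; the YQL response carries the page's HTML, which a pipeline can feed into hgrep.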

[+] mrdrozdov|10 years ago|reply
I would like every URL to be a directory, with that site's data nested within that URL.
[+] markbnj|10 years ago|reply
I suspect this works essentially like that, if you have the psty proxy companion program running so pigshell can get around CORS restrictions.
[+] P4bTXfOZAZicOOw|10 years ago|reply
Could we do the same thing with FUSE filesystems?
[+] charlesL|10 years ago|reply
Exactly what I was thinking. That would make all the Linux utils available to use as well.
[+] aembleton|10 years ago|reply
Just tried to grep the BBC News site but got a `cross-origin request denied` error. The man page for hgrep suggests running hgrep on a Wikipedia page, and I got the same error there.

The error in full: Cross-origin request denied (check if psty is running): http://en.wikipedia.org/wiki/List_of_countries_and_dependenc...

I don't know how to check if psty is running. Anyone got any ideas?

[+] tubelite|10 years ago|reply
Since most websites are not CORS-friendly, you need to run the psty proxy on your desktop.

python psty.py -a

For further information on psty, and a download link, see: http://pigshell.com/v/0.6.4/usr/doc/psty.html

Alternatively, if you don't want to run psty, you could try the "ycat" command, which uses YQL to bypass CORS. This is useful if you are trying to access HTML pages rather than binary resources. e.g.

ycat http://en.wikipedia.org/wiki/List_of_countries_and_dependenc... | hgrep table.wikitable | html

You may have to try it a couple of times; I notice that the YQL endpoint sometimes refuses connections.
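Roughly, psty's job is to sit on localhost, fetch the requested URL server-side, and re-serve the body with a permissive `Access-Control-Allow-Origin` header. Below is a minimal sketch of that idea only -- not psty's actual code; the URL-in-the-path convention and the handler details are assumptions for illustration.

```python
# Minimal sketch of a CORS-lifting local proxy in the spirit of psty
# (not the real psty). It fetches the URL named in the request path on
# the server side, then re-serves the body with a permissive CORS
# header, so a browser app can read responses that the same-origin
# policy would otherwise block.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CorsProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = self.path.lstrip("/")  # e.g. GET /http://example.com/
        try:
            with urllib.request.urlopen(target) as upstream:
                body, status = upstream.read(), upstream.status
        except Exception as exc:
            body, status = str(exc).encode(), 502
        self.send_response(status)
        # This header is what lifts the cross-origin restriction:
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve(port=0):
    """Start the proxy on localhost; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), CorsProxyHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The real psty also handles local file access and other verbs; this sketch only shows why a browser-side shell needs such a companion at all.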

[+] techdragon|10 years ago|reply
Honestly one of the most interesting "web shells" I've looked at. But it's not something I can use regularly, since most of my software-development workflow is local.
[+] Yahivin|10 years ago|reply
I wish it would at least try to do a real CORS request before giving up and forcing you to use the proxy. I have a CloudFront distribution with CORS all set up but it doesn't even try to make the request.
[+] astazangasta|10 years ago|reply
I don't understand why this runs in the browser. Surely this is all stuff you can do in a local shell? What am I missing?
[+] eccstartup|10 years ago|reply
It seems that buying a domain name is so expensive. Why pig?