This is a great tool for making mass changes to our 1200-CPU compute cluster. The best part is that it maps all of the hosts with the same output together and reduces them down to just one instance of the output, with the affected hostnames listed together. So instead of 500 copies of the same output data, I get one. This comes in handy for finding hosts that differ from what I'm looking for.
Simple example:
--------------------------------------------------------------------------------
compute[0009,0051,0187] (3 HOSTS)
--------------------------------------------------------------------------------
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rootvg-rootlv
58G 14G 42G 25% /
--------------------------------------------------------------------------------
compute[0013,0038,0044,0046,0065,0103,0125,0142,0195,0213,0216,0234] (12 HOSTS)
--------------------------------------------------------------------------------
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rootvg-rootlv
58G 11G 44G 19% /
--------------------------------------------------------------------------------
compute0247 (1 HOST)
--------------------------------------------------------------------------------
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rootvg-rootlv
58G 7.6G 47G 15% /
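The grouping shown above can be sketched in a few lines: bucket hosts by identical output, then render each bucket with a collapsed hostname header. This is a minimal illustration, not dshbak itself; the function names are made up.

```python
from collections import defaultdict

def coalesce(results):
    """Group hosts whose command output is byte-identical.

    results: {hostname: output string}
    Returns a list of (sorted hostnames, output) pairs.
    """
    groups = defaultdict(list)
    for host, output in results.items():
        groups[output].append(host)
    return [(sorted(hosts), output) for output, hosts in groups.items()]

def header(hosts, prefix="compute"):
    """Render a group header like 'compute[0009,0051] (2 HOSTS)'."""
    if len(hosts) == 1:
        return f"{hosts[0]} (1 HOST)"
    suffixes = ",".join(h[len(prefix):] for h in hosts)
    return f"{prefix}[{suffixes}] ({len(hosts)} HOSTS)"
```

With 500 hosts reporting the same df output, you print one block instead of 500, and the odd host out jumps right out at you.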
Cluster SSH is a terrific tool. When I learned of it I coded up a quick python script to play selected tracks of MIDI files and used CSSH to play Bach's Toccata and Fugue in D minor polyphonically on ~20 computers' PC speakers.
It was also fun to terrify first year students with 20 cd trays ejecting at the same time...
Neat idea, though I feel like it would be better implemented as a command-line tool using ncurses or something similar, so it could be used on any Unixy system (including when logged into another server, for example).
I suppose April Fools' is the reason we've gotten this far through the discussion with only scattered mentions of fabric or puppet/chef?
I've certainly done my share of ssh in a for loop, but if you're seriously considering this you almost certainly want to look into puppet/chef/cfengine/etc.
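The "ssh in a for loop" approach parallelizes with nothing more than the standard library. A rough sketch, assuming key-based auth (BatchMode fails fast instead of prompting); the hostnames and `ssh_argv` builder are illustrative:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def ssh_argv(host, command):
    # BatchMode=yes makes ssh fail fast rather than prompt for a password.
    return ["ssh", "-o", "BatchMode=yes", host, command]

def run_everywhere(hosts, command, argv_builder=ssh_argv, workers=32):
    """Run `command` on every host in parallel.

    Returns a list of (host, exit_code, stdout) tuples in host order.
    """
    def one(host):
        proc = subprocess.run(argv_builder(host, command),
                              capture_output=True, text=True)
        return (host, proc.returncode, proc.stdout.strip())
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(one, hosts))
```

The `argv_builder` hook exists so the fan-out logic can be exercised without real servers; in practice you'd just call `run_everywhere(hosts, "uptime")`.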
Now that <a href="http://devstructure.com">blueprint</a> exists, it's radically simple to turn your current environment into a chef/puppet deploy script. That said, distributed shell tools like this are still tremendously useful and should be part of any cluster stack.
C-b % [n times, n panes]
C-b M-5 [tiled layout]
### ssh into your servers in each pane, or whatever
C-b : setw sy <tab> [short for "synchronize-panes"]
### have synced fun!
Things like these always make me uneasy. Yes, you can do stuff on X number of machines all at the same time. That also means you can break things all at the same time.
The only use I can see is for multi-system stats - which should be handled differently anyhow, but that's a different story. Perhaps running htop, bwm-ng, iftop, or other similar tools. It would certainly be scary to do anything other than that...
This seems like something that would be much better if integrated into a tool like Fabric or Capistrano that already knows how to send SSH commands to a bunch of different servers.
Does anything like that exist? I guess the downside would be fixing problems if your command failed in 1 or 2 of 10 or 20 servers… Then you'd have to open up a separate session to those servers to figure out what was going on.
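Something like that does exist: Fabric 2 ships a ThreadingGroup for exactly this kind of parallel run. A hedged sketch - the Fabric calls are shown as comments since they need real servers, the hostnames are hypothetical, and the small triage helper handles the "failed on 1 or 2 of 20" case mentioned above:

```python
# Assumes Fabric 2 is installed (`pip install fabric`); hostnames are hypothetical.
# from fabric import ThreadingGroup
#
# group = ThreadingGroup("web1", "web2", "web3")
# results = group.run("uptime", warn=True)  # warn=True: no exception on nonzero exit
# exit_codes = {conn.host: r.exited for conn, r in results.items()}

def failed_hosts(exit_codes):
    """Return the hosts that need a follow-up session, given
    {hostname: exit code} -- so you reconnect to 2 servers, not 20."""
    return sorted(host for host, code in exit_codes.items() if code != 0)
```

That narrows the "open up a separate session" problem to just the hosts that actually misbehaved.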
This tool looks awesome. For some of us, it's our first time at the running-20-EC2-instances-at-once rodeo, and a tool like this is helpful for testing and developing the parallelization and maintenance tasks that everyone else is being condescending about.
I use csshx, and I highly recommend it. One annoyance is that on password entry screens it occasionally does not show the password prompt. But generally speaking it's a great utility.
res0nat0r|15 years ago
http://taktuk.gforge.inria.fr/kanif/
tomkinstinch|15 years ago
I've been using sshpt and pssh for this sort of thing:
http://code.google.com/p/sshpt/
http://code.google.com/p/parallel-ssh/
th0ma5|15 years ago
Also, for clusters I wrote a Python script using libcloud that connects to multiple terminals and does some bastardized map/reduce-type helper ideas:
http://gist.github.com/281504
dacort|15 years ago
Came in useful from time to time.