This sort of visualization, while really cool (and kudos to the developers), is also really hard to navigate. I can't easily find any of the languages I'm looking for without a lot of visual scanning and zooming in and out.
The names inside the nodes are not readable, and they don't display when you click a node or hover the cursor over it. So I closed the page after 5 or 10 seconds.
There are many nodes and edges missing. It would be great if the underlying graph were on a wiki. Maybe this content could be added to Wikidata (http://www.wikidata.org/).
You may be interested in DBpedia, which gets its data from Wikipedia but presents it in a structured format (for example, the page for Clojure: http://dbpedia.org/describe/?url=http://dbpedia.org/resource...), though in this case it seems the data is already on Wikipedia.
It's interesting that the distinction between functional and imperative languages is not at all clear in the graph. This could be because the visualization technique used simply can't show it, or because there are too many types of edges (or because each edge type gets the same "importance" in the graph-drawing algorithm).
Looking at the source, it looks as though each node is in a fixed position on the graph. Wouldn't it be easier to use something like Graphviz, where you just declare the nodes and connections and it takes care of drawing them properly?
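A minimal sketch of what that declarative approach looks like: you emit an edge list in Graphviz's DOT format and let the `dot` tool compute the layout, instead of hand-placing every node. The edge list here is made up for illustration, not taken from the actual dataset.

```python
# Build a DOT description of an influence graph; Graphviz handles layout.
edges = [
    ("Lisp", "Scheme"),
    ("Scheme", "Clojure"),
    ("C", "C++"),
    ("C++", "Java"),
]

lines = ["digraph influences {"]
for src, dst in edges:
    # One declarative edge statement per influence relation.
    lines.append(f'  "{src}" -> "{dst}";')
lines.append("}")
dot = "\n".join(lines)
print(dot)
```

Save the output to a file and run `dot -Tsvg influences.dot -o influences.svg` to get an automatically laid-out graph.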
This doesn't look very nice graphically: the "influenced by" lines are grey on grey, which is almost invisible. It doesn't make much sense either. C# is placed close to Smalltalk and far away from C and C++, not to mention Java.
A filter for those who are only interested in one aspect of the graph (e.g. language influence) would be very nice. With the current balancing, Haskell is closer to Java than to ML.
phaedryx | 11 years ago:
https://www.dropbox.com/s/4f4s6au5dcshef8/Screenshot%202014-...
nilsjuenemann | 11 years ago:
http://exploringdata.github.io/vis/programming-languages-inf...
kriro | 11 years ago:
There also needs to be better filtering of the raw data (the "many others" entry near Python is kind of a head-scratcher, for example).
Filters for "just languages", or language plus a path length of X, etc. might be cool as well.
A simple NON_AUTHORS = ["others", "et al."] list, used in normalize_data, would be a good start imo.
Edit: Plankalkül also has encoding issues :)