The site ran fine for me without JavaScript, once I deleted the semi-transparent div telling me that the site needs JavaScript. (Apparently the JavaScript is only needed to hide that div automatically.)
Seriously, I sometimes wonder if there would be a market for a lightweight text display app. The server would serve some kind of Markdown text, perhaps with a few hints such as headlines, and the client would render the text based on the local screen settings and the structural hints in the Markdown. That would be great for short articles.
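A toy version of that client can be sketched in a few lines of Python (the function name and the "hints" format here are my own invention, not an existing protocol): take the served Markdown-ish text and re-render it for the local screen, using headline markers as structure hints.

```python
import textwrap

def render(markdown_text: str, width: int = 40) -> str:
    """Render Markdown-ish text for the local screen width.

    Headline blocks ('#' lines) become underlined titles; ordinary
    paragraphs are re-wrapped to whatever width the client prefers.
    """
    out = []
    for block in markdown_text.split("\n\n"):
        block = " ".join(block.split())  # collapse server-side line breaks
        if block.startswith("#"):
            title = block.lstrip("#").strip().upper()
            out.append(title)
            out.append("-" * min(len(title), width))
        elif block:
            out.append(textwrap.fill(block, width=width))
    return "\n".join(out)

print(render("# A Headline\n\nSome short article text, "
             "re-wrapped to whatever width this screen happens to have.",
             width=30))
```

The point is that layout decisions move to the client: the same served text renders differently on a phone, a terminal, or a widescreen monitor.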
That sounds pretty useful. Arguably, that's what text-based browsers such as lynx and links already do, just in another way. They are still what I use when I want to read an article without distractions, ads, and funny layouts and images.
It's taken quite a while, but I'm looking forward to getting the board. I hope there are decently mature Go bindings, because a goroutine coprocessor would be incredible to have.

If not, I've been wanting to take a stab at OpenCL.
Both MPI and OpenMP are fairly mature and easy to use with C or C++ (I haven't used them with FORTRAN, but I believe you can, with a few slight alterations).
OpenCL / CUDA is more mature and ready... and a far better understood architecture. I bet that any GPU will crush this thing in pure compute power.

Where Parallella's advantage comes in is its grid architecture. Whereas a typical GPU today is "very, very wide" (i.e. an AMD 7750 does 512 operations at the same time), the Epiphany is a "grid", and each node of the grid can be doing something different.

I'm going to greatly simplify things with this example (so experts out there... don't shoot me :-p). A GPU can execute one program, but that one program can do 512 operations at once.

On the other hand, the Epiphany-IV truly can run 64 different programs at once... all taking different paths and doing their own things.

The Epiphany-III and Epiphany-IV perform "if statements" more or less the way you'd expect a computer to. Which is the important bit: if statements don't really slow down your program.

In contrast, on the typical wide "wavefront" GPU architecture, an "if statement" basically halves the speed of your GPU. The GPU has to execute one "half" of the branch, and then come back later to execute the other "half". (It's only "really" executing one program, but doing 512 of them at once. See what I mean?)

GPUs are a "wide" architecture... but Epiphany is the first "grid-based" supercomputer for ~$99. GPUs are a very mature technology... Epiphany is still in initial research (as far as the consumer market is concerned).
EDIT: In reality, a modern GPU supports maybe 4 to 8 different wavefronts (8 for the more expensive GPUs, maybe only 4 for the cheaper ~$99 GPUs). But each of those wavefronts can do hundreds of computations at once.
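The branch-divergence point above can be illustrated with a toy simulation in plain Python (a deliberately simplified pass-counting model, not real GPU code): a wavefront's lanes run in lockstep, so a divergent "if" costs one pass per branch side, while independent grid cores would each just take their own path.

```python
def wavefront_passes(lane_inputs):
    """Count execution passes for one lockstep wavefront running
    `if x is odd: A() else: B()` across all lanes at once.

    Lanes whose condition doesn't match the current pass are masked
    off (idle), but the pass still takes time for the whole wavefront.
    """
    cond = [x % 2 == 1 for x in lane_inputs]
    passes = 0
    if any(cond):      # pass 1: run the then-side, even lanes masked
        passes += 1
    if not all(cond):  # pass 2: run the else-side, odd lanes masked
        passes += 1
    return passes

print(wavefront_passes([1, 2, 3, 4]))  # divergent: both sides run -> 2
print(wavefront_passes([2, 4, 6, 8]))  # uniform: one side runs -> 1
```

On an Epiphany-style grid, each core runs its own instruction stream, so the cost would be one pass per core either way.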
The Parallella project sounds interesting, but the Parallella portal is shit.
The site shows a big banner, "Please enable javascript to view this site.", overlaying the content, but disabling CSS also helps - another example of JS failure.

http://en.wikipedia.org/wiki/Adapteva#Parallella_project would be a better candidate for HN. It's readable, and also shows some critical points.
I'm just posting to say that I peeled off that obnoxious javascript-required page blocker with the "Element Hiding Helper" add-on for the Adblock Plus add-on. I don't know what I was missing without javascript, but the site seemed to work pretty well without it.

https://adblockplus.org/en/elemhidehelper
> The site shows a big banner "Please enable javascript to view this site." that is overlaying the content, but disabling CSS also helps - another example of JS failure.

Simple workaround (in most browsers): right-click on the offending div -> Inspect Element -> press the Delete key.
To correct for my earlier mistake... the board consists of an ARM processor (like a phone) and an array of smaller processors. Linux and Python will run on the ARM, but not on the smaller processors.

multiprocessing works by running a separate copy of Python on each core and then managing data transfer, so it won't help you here: you would need Python to be running on each of the smaller processors in the array.

Instead you need to target the array processors in a dedicated language. People are mentioning OpenCL (which is C-like, but has a very strong emphasis on all processors doing the same task); the Wikipedia page describes a gcc-based compiler.

At a stretch, perhaps you could use that gcc-based C compiler to compile Python and get multiprocessing working that way. But I imagine it would be a lot of work and an inefficient way to use the system (the small cores are not very powerful, so you need to keep overhead down, and Python's overhead is high).

If someone can get Erlang working across the array, then that might be your best bet. Erlang is a little bit like Python plus multiprocessing (not terribly similar, but close enough for many things to make sense).
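For comparison, this is the model Python's multiprocessing module gives you on an ordinary multi-core CPU: each worker is a whole separate Python process, which is exactly why it won't map onto the small Epiphany cores.

```python
from multiprocessing import Pool

def square(n: int) -> int:
    # Runs in a separate Python process; arguments and results
    # are pickled and shipped between processes.
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        print(pool.map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```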
There are a number of Linux distributions for ARM; the Raspberry Pi project alone has several. Python, Ruby, Perl, NodeJS and a number of other tools are fully supported.

The multi-processing side, though, is probably not a standard CPU, meaning you can't just throw Python code at it. You'll have to use CUDA- or OpenCL-style techniques.

If you want a small ARM system, the Pi is a good place to start, but the BeagleBoard (http://beagleboard.org/) is a much better deal.
The article is a bit confusing, since, as far as I can tell, the "first model to be shipped" only has 16 (+2 ARM) cores. The 64 (+2) core board is not shipping or reservable yet.
Congrats to the people at Parallella though! I've been excitedly checking their site/twitter about every other day.
I'll believe it when I see it. These guys have been pushing it off and pushing it off; I've all but written off my "contribution".

The fact that they're preselling the 16-core boards before we backers have even received ours, and that those come with storage (we contributors have to supply our own SD cards) at the same price point, leaves me with an unpleasant taste.
Convolving images on the fly would be one; the Zynq architecture allows for some pretty high bandwidth throughput. Xilinx keeps pushing it as a solution for 'smart' cameras (things that know what they are looking at by doing analysis in the background).
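For a sense of what "convolving images on the fly" involves, here is a naive pure-Python 2D convolution (illustration only; a Zynq-class part would push this through FPGA fabric rather than a scalar loop):

```python
def convolve2d(img, kernel):
    """Naive valid-mode 2D convolution over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            # Each output pixel touches kh*kw inputs, which is why
            # memory bandwidth dominates this kind of workload.
            out[y][x] = sum(
                img[y + i][x + j] * kernel[i][j]
                for i in range(kh)
                for j in range(kw)
            )
    return out

# 3x3 box filter over a 4x4 image:
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
box = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(convolve2d(img, box))  # [[54, 63], [90, 99]]
```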
Technically you could just run a standard LAMP stack since it's running an ARM-compatible version of Ubuntu. It just won't take advantage of the multiple processors.
According to this [1], Erlang has been run on machines with a similar number of cores. This [2] looks like the most interesting work done with Erlang on Parallella so far. I haven't looked at Parallella for a long time, but IIRC the hardware architecture very much suits the process model of Erlang.
1. Can I run Python on Linux on this?

2. Would the multiprocessing module work like it does on x86 4-core chips?
2. No. It's very, very different.
The 64-core node is going to cost you quite a bit more (the pledge for it on Kickstarter was $199).
http://shop.adapteva.com/collections/parallella/products/par...
says:
"Unless otherwise specified, the Parallella-16 board ships bare without a 5V power supply or SD card. (These must be purchased separately.)"
Does anyone know if Erlang / Elixir will run on Parallella and take advantage of all the cores?
[1] http://kth.diva-portal.org/smash/get/diva2:392243/FULLTEXT01
[2] http://www.parallella.org/2013/05/25/explorations-in-erlang-...