Darpa Funds Development of New Type of Processor

199 points | farseer | 8 years ago | eetimes.com | reply

97 comments

[+] dreamcompiler|8 years ago|reply
Graph processing machines are not new. The signal-to-noise ratio of this article is so low that I can't tell how this architecture differs from e.g. the Cray Eldorado. Or the Connection Machine for that matter.
[+] dmix|8 years ago|reply
There is a real use-case here that has become so widespread and common in the intelligence and law enforcement community that they saw utility in having a processor optimized for graph analysis. They likely see some uses for it outside of that market as well which is why both Intel and Qualcomm are interested in working on it. Even if it is a variation on 'Threadstorm' it's still optimized for this particular type of data processing.

I read the whole article and I don't see it claiming anywhere that this is a new idea. But there is certainly no other processor like this on the market, so it fits the category of "new type of processor" even if it's for a dedicated use case - just like those ML-optimized processors.

There's a big difference between knowing something is theoretically possible and the value you get from a real-world implementation with real users. That sounds pretty newsworthy to me.

[+] PhantomGremlin|8 years ago|reply
> The signal-to-noise ratio of this article is so low

I started reading the comments before I read the article. And, sadly, once I saw your comment I knew exactly who wrote the article. One click and confirmed!

[+] rayiner|8 years ago|reply
What does the memory interface look like for this? None of the papers I can find indicate how you can get so much parallelism out of the CPU <-> memory interface.

An interesting approach to non-Von Neumann computing is to put ALUs in memory, to take advantage of the fact that DRAMs have far more internal bandwidth than what is exposed in traditional systems: http://researcher.ibm.com/researcher/files/us-leejinho/tvlsi....

[+] kevinnk|8 years ago|reply
Manufacturing logic on the same wafer as DRAM is difficult since the processes are so different. On the other hand, manufacturing on separate dies and connecting with an interposer or TSVs gets you tremendous bandwidth and relatively low latency; this is how some newer generation graphics memory is implemented (see https://en.m.wikipedia.org/wiki/High_Bandwidth_Memory).
[+] MrBuddyCasino|8 years ago|reply
> What does the memory interface look like for this?

Yeah, that's the big black box labeled "magic happens here" in their diagram. Maybe something like HBM2?

[+] convolvatron|8 years ago|reply
There was a great architecture proposal from Thomas Sterling and the old MTA folks that matched the hardware thread context to the DRAM row size.

which effectively extended the MTA thread pool to...infinity

[+] empath75|8 years ago|reply
Is it just me, or does the illustration show totally the wrong kind of 'graphs'? I'm sure they don't mean bar charts.
[+] Sean1708|8 years ago|reply
It didn't even dawn on me that that figure was supposed to represent graphs until I read your comment (and then the caption); I had just assumed it was supposed to represent Big Data.
[+] angstrom|8 years ago|reply
Yeah, they missed including a pie chart and a gantt chart for good measure.
[+] qntty|8 years ago|reply
Yeah, a bit embarrassing for a magazine that covers technology - HIVE definitely works on graph-theory graphs. Even weirder is the (Source: DARPA) attribution.
[+] mycall|8 years ago|reply
It's hard to display 2^n dimensions in a graph.
[+] youdontknowtho|8 years ago|reply
The way they describe sparse graph processing in memory sounds like the kind of pointer chasing that makes the memory access patterns of run-time object-oriented programming slow.

I wonder if that was an artifact of translation into PR copy, or if there might be something here to accelerate the Java or .NET memory access patterns that we all use.

[+] cvoss|8 years ago|reply
There is a very direct sense in which pointer chasing _is_ the fundamental operation of sparse graph processing.

On top of what access patterns the developers tend to use, there's always the JVM garbage collector (the bane of efficiency) which runs a basic graph algorithm over the entire program's network of pointers. Although, I suspect in many applications the graph in question is small (by comparison to big-data-scale graphs) and throwing heavy machinery like this at it would be overkill.

Then again, maybe I'm not dreaming big enough, and this kind of processor will make the need for cache line locality optimizations, careful instruction scheduling around memory I/O, and half-second freezes for GC a thing of the past?
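To make the connection concrete, here's a minimal sketch (my own illustration, not from the article) of why sparse-graph traversal and a GC mark phase are the same "pointer chasing" workload: every neighbor is reached through a reference, so the load addresses are data-dependent and hard for caches to prefetch.

```python
def reachable(graph, start):
    """Mark every node reachable from `start` - the same traversal a
    GC mark phase performs over a program's network of pointers.

    `graph` maps node -> list of neighbor nodes; following each
    neighbor reference is one pointer chase."""
    marked = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node in marked:
            continue
        marked.add(node)
        stack.extend(graph[node])   # data-dependent loads: no locality
    return marked

# Toy sparse graph: 0 -> 1 -> 3, 0 -> 2; node 4 is unreachable "garbage".
g = {0: [1, 2], 1: [3], 2: [], 3: [], 4: [0]}
print(sorted(reachable(g, 0)))  # [0, 1, 2, 3]
```

The address of each load depends on the value of the previous one, which is exactly the access pattern a latency-tolerant, many-threaded design (MTA-style or HIVE-style) is built to hide.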

[+] Frenchgeek|8 years ago|reply
"Darpa Funds Development of New Type of Processor: Worlds First Non-Von-Neumann "

https://en.wikipedia.org/wiki/Harvard_architecture ?

[+] antoinealb|8 years ago|reply
That was my thought as well: "What? AVRs are Harvard architecture and I can buy them for $0.10." But after looking, apparently it's more about new parallelism paradigms:

> "This non-von-Neumann approach allows one big map that can be accessed by many processors at the same time, each using its own local scratch-pad memory while simultaneously performing scatter-and-gather operations across global memory."
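The scatter/gather pattern in that quote can be sketched in ordinary code (my own illustration; the data and indices are made up): "gather" pulls scattered words from global memory into a dense local scratch-pad, the compute runs locally, and "scatter" writes results back to arbitrary global addresses.

```python
# Sketch of the scatter/gather pattern described in the quote.
global_mem = list(range(100))    # stand-in for shared global memory
indices = [3, 41, 7, 88]         # sparse, data-dependent addresses

# Gather: global memory -> dense local scratch-pad.
scratch = [global_mem[i] for i in indices]

# Compute locally on the scratch-pad (here, just doubling).
scratch = [x * 2 for x in scratch]

# Scatter: local results -> arbitrary global addresses.
for i, x in zip(indices, scratch):
    global_mem[i] = x

print(global_mem[3], global_mem[41])  # 6 82
```

The claimed novelty is letting many processors do this against one big shared map concurrently, rather than the gather/scatter itself, which GPUs and vector machines have long supported.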

[+] Cyph0n|8 years ago|reply
I'm happy to see that a group from Georgia Tech is working on this. Actually, my group just sent in a proposal for the DARPA SSITH[1] program. The high-level goal of SSITH is to design low-level (firmware or hardware) protection techniques that can guard against common software vulnerabilities that lead to hardware exploitation.

[1]: http://www.darpa.mil/news-events/2017-04-10

[+] atonse|8 years ago|reply
My guess at what's driving this need: intelligence agencies wanting to make sense of and traverse the large, complicated graphs used to map real-world relationships.

As they collect more data related to this, they'll need better ways to traverse these graphs.

[+] cardiffspaceman|8 years ago|reply
I can't tell if the processing in TFA is the kind of graph processing that machines like TIGRE or SKIM were proposed to do in the '80s. Or perhaps the graph nodes are less specialized than in those machines?
[+] Symmetry|8 years ago|reply
Sounds a lot like the Cell in concept.

https://en.wikipedia.org/wiki/Cell_(microprocessor)

[+] vonmoltke|8 years ago|reply
The SPUs on the Cell were closer in concept to DSPs than they are to what the article describes. The Cell chip itself was essentially a general-purpose CPU joined in silicon to several DSPs.

The SPUs are still designed for sequential processing of memory, just smaller, discrete blocks. The whole chip is orchestrated by a standard von Neumann processor anyway, so that acts as a bottleneck to keeping the SPUs busy.

[+] wyldfire|8 years ago|reply
What's the "community detection" benchmark referenced in the article?
[+] AriaMinaei|8 years ago|reply
From a layman: Would this necessitate a different programming paradigm?
[+] grondilu|8 years ago|reply
Yes, it's mentioned on page 2 of the article (you may have missed that it has two pages).
[+] nnfy|8 years ago|reply
The civilian application proposed in the article, mapping the many to many relationships between amazon purchasers and items purchased, is unsettling.
[+] jacquesm|8 years ago|reply
That doesn't need anything special in terms of hardware, so it's a bad example; current hardware is perfectly capable of making those connections.
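To support that point with a sketch (toy data, my own illustration): a many-to-many purchaser/item graph fits in ordinary hash maps, and "customers who bought X also bought Y" is two rounds of dictionary lookups - nothing that needs special hardware until the graph far outgrows memory.

```python
from collections import defaultdict

# Toy bipartite purchaser/item graph.
purchases = [("alice", "book"), ("alice", "lamp"),
             ("bob", "book"), ("bob", "pen")]

items_by_user = defaultdict(set)
users_by_item = defaultdict(set)
for user, item in purchases:
    items_by_user[user].add(item)
    users_by_item[item].add(user)

def also_bought(item):
    """Items purchased by anyone who bought `item` (excluding it)."""
    out = set()
    for user in users_by_item[item]:    # hop 1: item -> buyers
        out |= items_by_user[user]      # hop 2: buyer -> their items
    out.discard(item)
    return out

print(sorted(also_bought("book")))  # ['lamp', 'pen']
```

At Amazon scale the sets get large, but the two-hop structure stays the same, which is why commodity clusters already handle this workload.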
[+] bantunes|8 years ago|reply
Can't wait for this publicly funded research to make some private corporation billions!
[+] losteric|8 years ago|reply
Versus, what? No public funding? No private utilization of public research? Public research should fall in the public domain, free of IP and open for any use.

If anything, we should be doing more of this: create dedicated academic R&D funding streams by taxing established or dying industries in order to publicly explore new fields, and use X Prize-style programs or NASA's COTS/CRS programs to incentivize private commercialization. America needs jobs; what's better than creating new industries?

Plus, public investment in R&D has a good track record. This study from 1980 [1] indicates a $17 return over 18 years for every $1 invested in NASA - 1700% ROI. Returns depend on programs and the administration in office, but similar numbers can be obtained from other programs (NOAA and agricultural, military and medical research, etc). R&D is speculative, but most programs do much better than breaking even.

[1] https://er.jsc.nasa.gov/seh/economics.html

[+] jacquesm|8 years ago|reply
You mean like the internet? Or self driving cars?
[+] angstrom|8 years ago|reply
Not sure if sarcastic or sincere. You can damn near predict what will be the future tech 10 years down the line based on what DARPA is doing to make killing more efficient.
[+] hinkley|8 years ago|reply
Like web browsers?

Marc Andreessen started off working on an NSF grant at NCSA - one of those things Al Gore encouraged (when people misquote him as having invented the internet).

Edit: also why web browsers are free. Netscape had to compete with a free browser.

[+] davidiach|8 years ago|reply
You mean that private corporations can somehow just sit there and collect rent from this publicly funded research? Amazing!
[+] exelius|8 years ago|reply
No way; this is the kind of thing that can become a state-secret level advantage.

Seriously, it's a graph processor. Graph analysis is basically the entire job of a modern intelligence agency.

There's a cold-war level arms race going on in cybersecurity. Russia embarrassed us with their sophisticated cyberwarfare capability in the last election -- they were able to infiltrate both campaigns AND the FBI (they used false information to manipulate Comey into making a statement -- which required them to know how conflicted he was over interference in the Clinton e-mail thing), and undoubtedly were behind the news cycles in the months before the election.

Better/more condensed graph analysis capability, at the scale that the three-letter-agencies use it? That's a strategic advantage. You can bet the Russians are working on something similar. If they haven't already -- throughout the cold war they tended to push the frontier of technology faster than the US, but had trouble mobilizing that advantage because communism was so damn inefficient.

[+] IncRnd|8 years ago|reply
This is DARPA, not NSF. It's meant to make someone rich, so that they'll go on to supply what's being researched.
[+] vvdcect|8 years ago|reply
Maybe the money taxed from said corporation could be channeled into paying for universal income.
[+] Aron|8 years ago|reply
Someone stem the tide against these anti-capitalists! They are everywhere!
[+] dropthebase|8 years ago|reply
If it annoys you that much, stop using internet, throw away your laptop and pretty much everything else in your life. No need to support these "evil private corporations". Yeah, really. I'm sure you can do it.

What? Not such a good idea anymore? It's funny how ironic and shortsighted these HN commies are.

[+] andreasgonewild|8 years ago|reply
World's first, my ass. Journalism is dead - has been for a long, long time; we're living in the end-times of the walking dead.