
Stanford engineers build computer using carbon nanotube technology

68 points | jonbaer | 12 years ago | phys.org | reply

18 comments

[+] ColinWright|12 years ago|reply
Each report has its own take on the topic - here are a few other HN submissions:

https://news.ycombinator.com/item?id=6447714

  First computer made of nanotubes unveiled
  (bbc.co.uk)
https://news.ycombinator.com/item?id=6447669

  Processor made from carbon nanotubes runs multitasking OS
  (arstechnica.com)
https://news.ycombinator.com/item?id=6447583

  First computer made from carbon nanotubes debuts
  (ieee.org)
https://news.ycombinator.com/item?id=6447227

  Researchers Build a Working Carbon Nanotube Computer
  (nytimes.com)
https://news.ycombinator.com/item?id=6446731

  World's first carbon nanochip computer, comparable to 4004
  (technologyreview.com)
https://news.ycombinator.com/item?id=6446258

  Breakthrough in Carbon Nantotube Computing Could 'Save' Moore's Law
  (pcmag.com)
https://news.ycombinator.com/item?id=6446008

  First Computer Made From Carbon Nanotubes Debuts
  (ieee.org)
A few currently have comments; a few have an upvote or two.
[+] igravious|12 years ago|reply
Surely those (ieee.org) articles are dupes.

For once we have an announcement that isn't about some advance that might lead to something 5 years down the line. They built an actual computer. Wow.

[+] moocowduckquack|12 years ago|reply
The method for getting rid of the metal ones is stunningly simple. Presumably you could use the same trick to blow stuff like encryption keys or specific finite state machines into the actual chip.
[+] joe_the_user|12 years ago|reply
Hmm,

Over the last forty years, a Moore's Law of processor speed, transistors per chip, information storage, and (I assume) other things has operated [1].

Moore's Law of processor speed has definitely broken down in the last ten years, and I assume this research is attempting to address that fundamental limitation. I believe Moore's Law of transistors per chip still holds, but without increasing speed that tendency is nowhere near as useful (we don't want a whole lot of slow cores; we want a few fast cores).

Moore's Law of data storage still holds, but it's also not as useful without a similar exponential increase in system data throughput [2].

[1] http://en.wikipedia.org/wiki/Moore%27s_law

[2] http://en.wikipedia.org/wiki/Throughput

[+] breckinloggins|12 years ago|reply
I have a prediction about where this will go:

First, things will continue about the way they are now: cores will stay at the same speed or get slower but there will be more of them.

The industry will stay this way for at least the next 5 to 10 years, and during that time we are going to get better and better at figuring out how to take advantage of the increasing number of cores.

Finally, at some point in the future (I predict between 10 and 15 years), we are going to see an explosion in materials science that allows us to suddenly jump from 3 - 5GHz / core to 400 GHz - 5 THz per core (with even more cores than we have now and dramatically lower power consumption).

At this point we will be nicely prepared to exploit both the parallelism and the return of raw speed gains. We'll also see completely different architectures like Chuck Moore's GreenArrays chips, quantum co-processors, "memory/compute fabrics" like memristor crossbar arrays, and further advancements in the GPU/APU area. These changes will continue to challenge our assumptions about what the "proper" paradigms, languages, and architectures for computer programs are.

We've made great progress already. I can think of myself as an example: in 2005 I, like most of the programmers here, read Herb Sutter's essay The Free Lunch Is Over[1] and realized that he was correct: the era of easy speedups was giving way to the era of concurrency. And programmers were going to have to figure out how to take advantage of that concurrency or an app in 2013 was going to run at about the same speed as an app in 2005.

But I had no idea how we were going to do that. I'd used threads and mutexes and locks in my professional work so I knew that correct concurrency was hard (and correctness almost never survived past a few rounds of maintenance). But I didn't have the first clue how we were going to get out of this mess without all becoming experts in Category Theory.

Fast forward to today and I find myself using things like "lambda expressions", "functional closures", "higher order functions", "map/reduce", "filters", and even "monads" and, you know what, they now seem as natural to me as the for-loop.

These days, I'm much more likely to write:

    stuff.select {|l| l > 23}.map(&:increment_magically).sum
than I am to write the equivalent imperative loop. I'm no genius, so I consider it remarkable that I've been able to digest these concepts as fast as I did.
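To make the comparison concrete, here is the chain above next to the equivalent imperative loop, as a runnable Ruby sketch. `increment_magically` in the original snippet is the commenter's hypothetical method; it is replaced here with a plain `+ 1` so the example is self-contained.

```ruby
stuff = [10, 25, 30, 5]

# Functional chain: filter, transform, and aggregate in one expression,
# with no mutable accumulator in sight.
functional = stuff.select { |l| l > 23 }.map { |l| l + 1 }.sum

# The equivalent imperative loop, with explicit mutable state.
imperative = 0
stuff.each do |l|
  imperative += l + 1 if l > 23
end

functional == imperative  # => true; both evaluate to 57
```

The two versions compute the same result; the functional form just states *what* is kept and combined rather than *how* the accumulator is threaded through the loop.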

And I am willing to bet that many other programmers here feel the same way. Not only that, but we are beginning to really understand things like:

- Identity vs value

- The benefits and dangers of mutable state

- Impure vs pure functions

- Software Transactional Memory

- MVCC / persistent data structures

- Even Category Theory :)

None of this stuff was even remotely on my radar 8 years ago, and now it's part of our professional lives. We may not get to use it all yet, but we can use more and more of it on every project.
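Two of the ideas in that list, mutable state and pure vs impure functions, fit in a tiny Ruby sketch (the method names here are illustrative, not from any library):

```ruby
# Impure: mutates its argument in place, so every other holder of the
# same array sees the change. The trailing "!" is the Ruby convention
# for a mutating method.
def double_all!(list)
  list.map! { |x| x * 2 }
end

# Pure: returns a new array and leaves the input untouched, so the
# result depends only on the argument's value.
def double_all(list)
  list.map { |x| x * 2 }
end

a = [1, 2, 3]
double_all(a)   # => [2, 4, 6], and a is still [1, 2, 3]
double_all!(a)  # now a itself is [2, 4, 6]
```

The pure version is trivially safe to call from concurrent code; the impure one is exactly the kind of shared mutable state that makes threads-and-locks concurrency hard to keep correct through maintenance.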

And we are seeing the benefits.

Now fast forward to a time when we not only have hundreds or thousands of cores, but they suddenly jump up to 500 GHz. I guarantee you none of these skills will have been for nought.

[1] http://www.gotw.ca/publications/concurrency-ddj.htm

[+] kenster07|12 years ago|reply
There is nothing wrong with a 'whole lot of slow cores.' In fact, it is essentially how the human brain works.
[+] p1esk|12 years ago|reply
High performance computing these days depends on GPUs to do heavy lifting. I assume by 'speed' you mean flops, and the flops count in GPUs is progressing nicely.

And storage throughput has jumped how much with the advent of SSD?

[+] eliasmacpherson|12 years ago|reply
Pedant here.

Moore's Law was originally only about components per chip. Other people have applied it to the other items you mention, but I don't think it's accurate to say "Moore's Law of data storage" because there isn't one.

[+] Nanomedicine|12 years ago|reply
They just added a certain number of CNT transistors together; there's nothing here worthy of Nature... I expect a graphene-transistor "computer" paper will be published in Nature soon...
[+] username42|12 years ago|reply
"computer with 178 transistors". What is the minimum number of transistors to be called a computer ?