
Kurzweil's rebuttal to Paul Allen

102 points | ca98am79 | 14 years ago | technologyreview.com | reply

121 comments

[+] hugh3|14 years ago|reply
I grow tired of Kurzweil's vague arguments against people who disagree with his vague predictions.

What I think Kurzweil doesn't understand is that in any argument about what's going to happen in the future, the onus of proof inevitably lies with the guy saying "This is what's going to happen", not the guy saying "Ehh, maybe not".

I don't know what's going to happen in the future, and I don't pretend to know what's going to happen in the future, but whatever happens either (a) I'll find out eventually or (b) it'll happen after I'm dead anyway. But john_b's point about Kurzweil's lack of a null hypothesis is a good one.

So my question for Kurzweil is this: what will the world look like if you're wrong? What possibilities are your predictions excluding? If I'm still alive in 2060, and I look around at the world around me, under precisely what conditions am I entitled to say "Well whaddya know, looks like Kurzweil was wrong about that Singularity thing after all"?

[+] khafra|14 years ago|reply
I agree with your conclusions, but

> the onus of proof inevitably lies with the guy saying "This is what's going to happen", not the guy saying "Ehh, maybe not".

I'd say that the onus lies on the one making conjunctions instead of disjunctions. Often, negative predictions are disjunctions, but this isn't always the case: Compare "in 2100, North America will be inhabited by humans" with "Ehh, maybe not."
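A toy way to see why the burden falls on conjunctive predictions (the probabilities below are invented for illustration, and independence is assumed):

```python
# A conjunctive prediction must get every sub-claim right, so its probability
# shrinks multiplicatively; a disjunctive one only needs one claim to hold.

def p_conjunction(ps):
    """Probability that ALL independent claims hold."""
    result = 1.0
    for p in ps:
        result *= p
    return result

def p_disjunction(ps):
    """Probability that AT LEAST ONE independent claim holds."""
    none_hold = 1.0
    for p in ps:
        none_hold *= (1.0 - p)
    return 1.0 - none_hold

claims = [0.9, 0.9, 0.9, 0.9, 0.9]  # five individually likely sub-claims
print(p_conjunction(claims))  # ~0.59: the detailed scenario is already a coin flip
print(p_disjunction(claims))  # ~0.99999: "ehh, maybe one of these" is near certain
```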

[+] GrantS|14 years ago|reply
>but whatever happens either (a) I'll find out eventually or (b) it'll happen after I'm dead anyway

It's not your main point, but I think you're leaving out an important option: (c) you can choose to be intimately involved in making the future turn out a certain way.

The best reason to think and write about the future is so that we can decide what future we want to create for ourselves. Then, as someone with the ability to write code, you can go out and create those very things.

[+] api|14 years ago|reply
IMHO, the future cannot be predicted. Period. I'm not talking impossible-as-in-hard. I am talking impossible-as-in-perpetual-motion.

Nice well-behaved linear Newtonian systems can be modeled and predicted. There are systems that are chaotic but that can be modeled in the aggregate very well too, like thermodynamic systems and certain kinds of fluid flow.

Life isn't like any of that. Life is complex, chaotic, computationally irreducible, and full of feedback loops on top of feedback loops. Even worse: predictions often create economic incentives to prove them wrong. Take a position on the stock market and you have created an incentive for your prediction to not come true.
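A minimal sketch of that kind of sensitivity, using the logistic map (a standard toy model of chaos; nothing here is specific to history or economics):

```python
# The rule x_{n+1} = 4 * x_n * (1 - x_n) is fully deterministic, yet two
# starting points differing by one part in a billion end up decorrelated.

def orbit(x0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a, b = orbit(0.2), orbit(0.2 + 1e-9)
early = abs(a[10] - b[10])                              # still microscopic
late = max(abs(x - y) for x, y in zip(a[40:], b[40:]))  # comparable to the signal itself
print(early, late)
```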

People have always wanted to deny the fundamental unpredictability of history, and have always clung to woo-woo prophecy superstitions toward this end. The ancients had Tarot cards and pig entrails. We have graphs and computer models.

[+] losvedir|14 years ago|reply
Ugh, not this again.

> That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome.

I believe it was on HN that this discussion came up before, but that's a short-sighted way of looking at it. Basically, it doesn't take into account all the interactions of the environment required to turn that "source code" into a person. Sure, the DNA would be sufficient if you were able to accurately simulate cellular actions, protein folding, and physics in general, but we just can't do that yet, and it doesn't look like we'll be able to any time soon.

[+] fl3tch|14 years ago|reply
Exactly. Distributed computing grids still have trouble folding single peptides with reasonable accuracy.

We have the source code, but we don't have the compiler.

[+] bermanoid|14 years ago|reply
> Basically, it doesn't take into account all the interactions of the environment required to turn that "source code" into a person.

This is an extremely common, and completely incorrect, criticism as applied to AGI complexity estimates.

Here's the way to think of it: the entirety of the information content required to move from non-intelligence to intelligence has to have been figured out some time between single-celled organisms and humans, because the substrate on which our intelligence is implemented literally didn't exist at that point. Which means that any part of the biological machinery that was in place when single-celled organisms ruled the planet does not count towards the complexity of the "intelligence algorithm" itself - it's irrelevant, accidental complexity, not information content that is required to get from "working computer" to "working intelligent computer".

You'll find that almost all of the cellular actions, protein folding, and physics were already working just fine when the single-celled ickies were evolving, so it's all complexity that we can safely ignore, which means we can start the complexity count with DNA. Apart from (IMO) extremely minor epigenetic contributions, the pure-DNA information estimates should provide extremely hard upper bounds on the difficulty of the problem, estimates that we'll probably blow through quite easily once we know what we're doing - evolution rarely finds the optimal solutions to problems, I see no reason to assume that it stumbled across one in this case...
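For concreteness, the raw numbers behind this upper-bound argument (genome size and synapse count are rough textbook figures, not exact):

```python
# Back-of-envelope: the genome as an information upper bound vs. a
# per-connection estimate of brain complexity.
BASE_PAIRS = 3.2e9          # approximate human genome size
bits_raw = BASE_PAIRS * 2   # 4 possible bases -> 2 bits per base pair
bytes_raw = bits_raw / 8
print(f"raw genome: ~{bytes_raw / 1e6:.0f} MB")  # ~800 MB, before any compression

# "Hundreds of trillions of bytes" comes from counting each connection:
connections = 1e14          # rough synapse count often quoted
print(f"per-connection estimate is ~{connections / bytes_raw:.0f}x larger")
```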

[+] Bishop6|14 years ago|reply
Kurzweil makes some good points here. He doesn't address each and every criticism of his view of AI progress, but he does a good job of calling out Paul Allen for not doing the homework.
[+] bh42222|14 years ago|reply
Neither Kurzweil nor Allen have done their homework. This is a disappointingly informal argument if you're looking for hard scientific facts.
[+] DanielStraight|14 years ago|reply
Calling out someone for not doing their homework seems somewhere in the DH1-2 range:

http://www.paulgraham.com/disagree.html

There are some good points in Kurzweil's response, but the ones about Paul Allen are definitely not among them.

I think his best point was about extrapolating function from individual cells or structures, without needing to understand every single cell or structure individually.

[+] kenjackson|14 years ago|reply
What are the big advances in linear programming that happened since 1988?

As api mentions in the comments here on HN, there are areas of work where progress stopped. For example, passenger jet speed was once expected to keep increasing rapidly, such that LA-to-Europe flights would take a few hours. Skyscraper height was expected to keep growing, with advances in various technologies and engineering methods making it desirable. Both hit realities that significantly slowed their progress.

I tend to side with Allen on this. While we're bright people, I don't know if I see us able to keep increasing computing power while keeping actual power consumption reasonably low.

[+] api|14 years ago|reply
What passenger jet speed and skyscrapers really hit is economics: demand limits to growth.

Most super-tall skyscrapers are economic disasters. There seems to be a maximum economically rational height to a skyscraper, and it's already been reached. You can build higher, but if you do you're wasting your money.

A human-level or beyond AI would probably be like the Burj Khalifa: an economic disaster. Why build it when screwing and popping out babies is far cheaper and already works? If you want to exceed human intelligence, it would be a lot cheaper to augment human brains with external digital assistants (like what you're using now) or implants than to re-engineer an entirely new embodiment.

[+] ramanujan|14 years ago|reply
Hmmm? Generally agree with the plane flight example, but linear programming seems like a surprisingly bad example to pick. Karmarkar's algorithm has allowed the essential insight of linear programming to be generalized into a programme for optimizing any convex function, subject to convex constraints, over a convex set (see Stanford's EE364).

I assume you are familiar with this as 1988 = date of Fulkerson prize for Karmarkar's work? I guess the point narrowly holds if you're thinking of pure LP rather than general CP. But general CP is really quite a big deal.
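For readers who haven't seen it, here's the flavor of the LP problem under discussion, solved by naive vertex enumeration (a made-up two-variable instance; real solvers use simplex or interior-point methods, Karmarkar's line of work, precisely because this brute force explodes combinatorially):

```python
# Toy LP: maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
# An optimum of an LP always sits at a vertex of the feasible polytope.
from itertools import combinations

# Constraints in the form a1*x + a2*y <= b (non-negativity included).
A = [(1, 1), (1, 3), (-1, 0), (0, -1)]
b = [4, 6, 0, 0]
c = (3, 2)

def intersect(i, j):
    """Solve the 2x2 system where constraints i and j hold with equality."""
    (a1, a2), (a3, a4) = A[i], A[j]
    det = a1 * a4 - a2 * a3
    if abs(det) < 1e-12:
        return None  # parallel constraints, no vertex
    x = (b[i] * a4 - a2 * b[j]) / det
    y = (a1 * b[j] - b[i] * a3) / det
    return (x, y)

def feasible(p):
    return all(a1 * p[0] + a2 * p[1] <= bi + 1e-9 for (a1, a2), bi in zip(A, b))

vertices = [p for i, j in combinations(range(len(A)), 2)
            if (p := intersect(i, j)) is not None and feasible(p)]
best = max(vertices, key=lambda p: c[0] * p[0] + c[1] * p[1])
print(best)  # (4.0, 0.0), objective value 12
```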

[+] cpeterso|14 years ago|reply
Here is a rough transcript of a Long Now Foundation talk by SF author Vernor Vinge (and coiner of the term "singularity") entitled "What If the Singularity Does NOT Happen?". He sees:

  * Scenario 1: A Return to MADness (nuclear war)
  * Scenario 2: The Golden Age (peace and prosperity)
  * Scenario 3: The Wheel of Time (catastrophic natural disaster)
http://www-rohan.sdsu.edu/faculty/vinge/longnow/
[+] politician|14 years ago|reply
The Singularity concept strikes me as a sort of wishful thinking. Technology advancing so fast that we no longer can control or understand it? Yeah, that already happened to my parents' generation with AOL, yet here I am texting this on my iPhone. New generations understand intuitively what the previous generation understood theoretically.

Even so, I fully expect memristors to deliver strong AI.

[+] loup-vaillant|14 years ago|reply
Remember there are three main schools of thought regarding the "Singularity" concept, which are more or less compatible.

Accelerating change. Basically Kurzweil's view. Exponential improvement of machines, which will eventually reach then exceed human intelligence in every single domain, or something like that.

Event horizon. If we ever build something that achieves greater-than-human intelligence, we cannot predict what it will do to the world, because we're just not as smart as that thing.

Intelligence explosion. If we ever build an AI (or something similar) that is more effective at doing AI research than we are, then that AI would build something even more effective… and foom, you have something that would leave Skynet in the dust, so it'd better be our friend[1]. Note that the first iteration of that thing may not need to be smarter than us: it just has to be able to build something smarter than itself[2] (and of course, the self-improvement cycle must not hit a ceiling too soon).

[1]: https://en.wikipedia.org/wiki/Friendly_AI [2]: https://en.wikipedia.org/wiki/Seed_AI
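The "intelligence explosion" school amounts to a claim about recursion, which a toy model makes concrete (the growth factor and ceiling below are invented purely for illustration):

```python
# Each generation designs a successor whose capability is multiplied by some
# improvement factor; a hard ceiling models the cycle stalling out.

def self_improve(cap, factor, ceiling=None, generations=20):
    caps = [cap]
    for _ in range(generations):
        cap = cap * factor
        if ceiling is not None:
            cap = min(cap, ceiling)
        caps.append(cap)
    return caps

foom = self_improve(1.0, 1.5)                # unchecked: 1.5^20, over 3000x
capped = self_improve(1.0, 1.5, ceiling=10)  # hits the ceiling and stalls at 10x
print(foom[-1], capped[-1])
```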

[+] 0x12|14 years ago|reply
> Even so, I fully expect memristors to deliver strong AI.

Why would they?

Is strong AI a function of storage capacity or speed?

An AI running at 1/100th of what a future AI may be capable of is still an AI and I can't see how a mere improvement of a couple of orders of magnitude would do what decades of Moore's law have failed to do so far.

If strong AI were just a matter of speed, then we could in theory take any of the large clusters available today and run the algorithm at an appreciable fraction of its eventual speed, which would at a minimum validate that what had been created was indeed a strong AI.

The barrier seems to be more that we don't know how to go about building one from a software perspective than that we wouldn't have the capability to design the hardware.

So how would an advance in hardware suddenly fix that?

[+] danilocampos|14 years ago|reply
> Technology advancing so fast that we no longer can control or understand it? ... Yeah, that already happened to my parent's generation with AOL

That's not the singularity, else we'd have had several singularities across several generations as certain groups of people fail to grasp the utility or value of printing presses, steam engines or atomic bombs.

The notion of the singularity suggests a point past which it is impossible to predict the future.

Global networking, in its basic form, was entirely predictable to certain technologists for decades before its existence. In fact, the original Spaceship Earth ride at Epcot nailed many pieces of later commonplace technologies – even multiplayer gaming. Hell, I read of a Baha'i priest who predicted the world wide web in the '30s.

Meanwhile, a singularity posits a confluence of technologies, connectedness and social change that renders all events past its arrival entirely impossible to predict.

[+] super_mario|14 years ago|reply
Strong AI is not a hardware problem. It's not a matter of lack of computational power. It is a software and modeling problem. If you had an AI algorithm and a model, you could still run it on any Turing machine. It would just take a lot longer (perhaps years or decades or more) to compute a single thought on current hardware instead of real time or faster than real time on some super fast future hardware.

There are people (like Roger Penrose) who argued that intelligence and consciousness are not computational in nature (and hence no algorithm can be conscious). Penrose goes all the way down to quantum mechanical effects in the brain. I have not really followed developments on this and where Penrose's argument currently stands.

[+] spot|14 years ago|reply
Nobody takes him seriously. He is just a Christian apologist who wraps it up in quantum hoo-ha.
[+] api|14 years ago|reply
What about energy?

It's true that if you look at most areas of technology they are advancing rapidly. Except energy. Energy has stagnated since the 1950s.

I'm on the fence on this issue, but there are many very intelligent and knowledgeable people who are predicting a kind of anti-singularity: in the 21st century, fossil fuel depletion will send us way back, perhaps even de-industrialize most societies.

Is our civilization simply a machine that is transferring the order (low entropy state) in fossil fuels into order within itself (technology and economic complexity), and when those fossil fuels run out will this ordering process cease?

The lack of major breakthroughs in energy in the past 50 years is pretty dramatic. Nuclear looked like an energy panacea once, but it's turned out to be clunky and hard to scale. Solar panels and wind turbines are interesting, but the problem with those is that we basically can't store energy. Energy storage is either super-expensive per kilowatt-hour and not scalable (e.g. Li-Ion batteries) or very inefficient (e.g. water electrolysis to hydrogen).

Without a breakthrough on the order of cheap ultra-capacitors or fusion, I'm afraid we'll be seeing peak everything pretty soon, including technological complexity.

The thing is: all the technologies of the "singularity" are energy consumers. Where are the producers? What is going to power the singularity?

Then there's another area that makes me horribly pessimistic: politics. Most of our societies are degenerating to banana republic levels of corruption. Even if the energy problem is technically solvable, it seems to me that our political systems may be set up to do the absolute worst possible thing in this area: ride the fossil fuel crash into the ground in an orgy of war and despotism.

[+] lupatus|14 years ago|reply
Note: I made a similar comment a few days ago.

It seems that you might be slightly misinformed about energy. Since the 1950s, there have been a number of advancements in energy production. Some examples are efficient shale oil extraction, tar sands oil extraction, horizontal drilling, deep water drilling, and high Arctic drilling.

As The Futurist discusses in depth[1], annual world oil consumption has been hovering around 32 billion barrels since about 1982. That means oil consumption, at $100/barrel, is $3.2 trillion, or 5% of nominal world GDP.

My takeaway from that is that technology has made the oil supply a non-issue. And it will continue not being a problem for the foreseeable future. It is not a hard resource limit.
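A quick sanity check of the parent's arithmetic (all inputs are the commenter's figures, taken at face value):

```python
# Does 32 billion barrels/year at $100/barrel really come to ~5% of world GDP?
barrels_per_year = 32e9
price_per_barrel = 100.0
spend = barrels_per_year * price_per_barrel
print(f"annual oil spend: ${spend / 1e12:.1f} trillion")  # $3.2 trillion

world_gdp_share = 0.05
implied_gdp = spend / world_gdp_share
print(f"implied world GDP: ${implied_gdp / 1e12:.0f} trillion")  # ~$64 trillion
```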

Also, check out the the research going into liquid fluoride thorium fusion. It seems to have potential as a way to do cheap, safe, "easy" nuclear fusion.

Personally, I think that the problem is human willpower and chicken-little attitudes. To increase my technology optimism, I read nextbigfuture.com. Nearly daily, I see an article posted there that makes me go, "Holy crap! We can do that now??!"

[1] http://www.singularity2050.com/2011/07/the-end-of-petrotyran...

[+] lukifer|14 years ago|reply
Kurzweil's formula is based on recursive intelligence: getting smarter lets you improve the rate at which you get smarter, ad infinitum. Though this can sometimes improve energy technology, intelligence ultimately depends on energy, of which there is a finite usable amount. Though I'm optimistic about new nuclear technologies, I'm forced to agree that absent an unforeseen breakthrough in fusion, "zero-point", etc., Kurzweil's model is agnostic when it comes to extracting useful energy, and therefore could have an expiration date.

It's worth noting, however, that we'll never completely run out of energy, so long as Sol keeps burning. Can we sustain anything remotely resembling civilization based only on solar energy and its indirect organic by-products? Well, that's obviously debatable.

> Even if the energy problem is technically solvable, it seems to me that our political systems may be set up to do the absolute worst possible thing

Never doubt the self-interest of the powerful. Some may be short-sighted enough to eat their seed corn and burn resources in resource wars, but I think we're more likely to devolve into feudalism, where the masses eke out their existence serving the needs of those who control the remaining resources. On the other hand, if any kind of information infrastructure remains when the oil runs out, we also have the possibility of new emergent, de-centralized social structures constructed out of necessity. (Imagine, for instance, a mesh network of hand-crank-powered cell phones which virally alert clusters of villages to band together to defend against marauders.)

Either way, I can't help but feel that our lifetimes are the perfect fulfillment of the old Chinese curse: "May you live in interesting times."

[+] mahyarm|14 years ago|reply
China is starting a $1+ billion project to create a liquid-thorium fission reactor (not fusion). There is more thorium in the world than copper, tin, and some other metals, while usable uranium is about as abundant as platinum. A working thorium reactor was built in the 1960s, and once China makes a few and shows the world it's a repeatable process, investors and banks worldwide will be willing to invest in these reactors.

http://thoriumremix.com/2011/

[+] temphn|14 years ago|reply
Overall good line of thought, but a few other things are worth considering. It is vastly underappreciated just how much the EPA has retarded energy development in the US. The retroactive revocation of permits for the mountaintop coal project, the Keystone XL imbroglio, and the hasty ban on offshore drilling are just a few examples. (Not many know that Deepwater Horizon was out that far because of regulations forbidding drilling closer to the shore).

These areas are inherently dirty and dangerous. It's always easy to do Monday-morning quarterbacking, but there's little recognition of the fact that infrequent domestic oil spills may be preferable to endless foreign oil wars.

Resource extraction in general is demonized by the EPA just as much as the pharmaceutical industry is by the FDA. It is hard to regulate an industry which is respected by the public, because they will get some benefit of the doubt. But if they can be turned into polluters and poisoners, it is easy to justify ever greater state power over the sector.

Computers and the internet are in the exact opposite space: highly respected and almost entirely unregulated. So it's hard for many here to see what heavy fedgov oversight means (it makes App Store approval look like a walk in the park).

But once you're on the inside of those sectors, you start to realize that what is holding back oil in the West (and nuclear power, and drugs, and all non-CS sectors) are human factors, not physical ones. That is simultaneously harder and easier to deal with than a genuine scarcity of energy.

[+] schiffern|14 years ago|reply
I brought up the exact same objection when Kurzweil spoke at my college. He pointed out that the installed cost-per-watt and total installed capacity of PV exhibits an exponential trend over the ~30 years of available data. He extrapolated that within 20 years, 100% of the world's energy could be provided by photovoltaic panels.
[+] cavedave|14 years ago|reply
"Averaged over 30 years, the trend is for an annual 7 percent reduction in the dollars per watt of solar photovoltaic cells. While in the earlier part of this decade prices flattened for a few years, the sharp decline" http://blogs.scientificamerican.com/guest-blog/2011/03/16/sm...

'The cost of solar, in the average location in the U.S., will cross the current average retail electricity price of 12 cents per kilowatt hour in around 2020, or 9 years from now. In fact, given that retail electricity prices are currently rising by a few percent per year, prices will probably cross earlier, around 2018 for the country as a whole, and as early as 2015 for the sunniest parts of America.'

If solar is at a 7% annual decline is that not a very rapid advancement?
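What a steady 7% annual decline implies, as pure compounding (no claim here that the trend continues):

```python
# Halving time and decade-scale effect of a constant 7%/year cost decline.
import math

annual_decline = 0.07
halving_years = math.log(2) / -math.log(1 - annual_decline)
print(f"cost halves every ~{halving_years:.1f} years")  # ~9.6 years

decade_factor = (1 - annual_decline) ** 10
print(f"after 10 years, cost is ~{decade_factor:.0%} of today's")  # ~48%
```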

[+] pasbesoin|14 years ago|reply
Solve the problem of raw collection capacity (speaking of "next generation" energy technologies), and some leakage in storage and delivery becomes acceptable. It remains a problem that you can peck at over time.

For the U.S., I see a/the primary crisis that is addressable as being energy. And solving it would, directly and indirectly, re-invigorate our economy and provide an entire sector not just of jobs but of careers / career growth. And education.

That the current Administration didn't jump into this aggressively has been an enormous disappointment to me.

That would be some leadership. Chart the course (course corrections allowed), and advocate -- including with that bully pulpit -- to make it so.

Given that we don't have such leadership (which would be unusual, admittedly), it will have to be bottom up. I just hope that's enough.

[+] bfrs|14 years ago|reply
The energy problem was solved by Freeman Dyson et al. in the 50s at Los Alamos [1]. Here are his paraphrased proposals:

Kardashev Level I (http://en.wikipedia.org/wiki/Kardashev_scale)

Dyson fusion engines.

He's not talking about unviable Tokamak designs (http://en.wikipedia.org/wiki/Tokamak), but something much simpler. Basically use H-bombs (which are the only tried and tested fusion technology) to drive a large internal combustion engine. The energy released by each explosion is stored by using it to lift water back into hydraulic dams.
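A rough scale check on the bomb-driven pumped-hydro idea (yield, head height, and capture efficiency below are all assumed numbers for illustration, not figures from Dyson's proposal):

```python
# How much water would one 1-megaton shot lift, and what average power is that?
MEGATON_J = 4.184e15      # energy of 1 megaton of TNT
g, head_m, efficiency = 9.81, 100.0, 0.10  # 100 m dam, assumed 10% capture

captured = MEGATON_J * efficiency
water_kg = captured / (g * head_m)          # E = m*g*h solved for m
print(f"water lifted: ~{water_kg / 1e9:.0f} billion kg")  # ~427 billion kg

# As average electric power, if one such shot were fired per day:
print(f"~{captured / 86_400 / 1e9:.1f} GW continuous")  # ~4.8 GW
```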

Kardashev Level II

Dyson rings and shells (http://en.wikipedia.org/wiki/Dyson_shell).

Kardashev Level III

"Lather, rinse and repeat" with Dyson shells.

[1] Just as all problems in computer science were solved (in principle) at Xerox PARC in the 70s, all energy problems were solved (in principle) at Los Alamos in the 50s.

[+] Troll_Whisperer|14 years ago|reply
>What about energy?

>It's true that if you look at most areas of technology they are advancing rapidly. Except energy. Energy has stagnated since the 1950s.

This is demonstrably false. Solar power generation, for example, has been enjoying the same kind of Moore's Law exponential price/power improvement over the past 15 years that computer processing power has. This is hardly surprising, given that silicon-wafer solar panels often use the same semiconductor suppliers that computer hardware manufacturers do. Newer thin-film solar panels represent a paradigm jump that promises even greater price performance.

I could have brought up similar points about the progress of wind-power, bio-fuel, or a number of other fields. Energy has anything but stagnated since the 1950s.

tldr: Stay off the peak oil scaremongering sites. They'll blind you.

[+] brfox|14 years ago|reply
I don't think energy production needs to change very much. One dimension of computing improvement over the past N years is that CPUs are getting faster per unit of energy. So it takes less and less energy to power more and more powerful computers.

Remember that our brains only need a couple thousand kcals per day.
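The brain's power budget, as back-of-envelope arithmetic (the 2000 kcal/day and 20%-to-the-brain figures are common approximations, treated here as assumptions):

```python
# Convert a daily calorie budget into continuous watts.
KCAL_TO_J = 4184
body_watts = 2000 * KCAL_TO_J / 86_400   # joules per day / seconds per day
brain_watts = body_watts * 0.20          # brain's share of resting metabolism
print(f"body: ~{body_watts:.0f} W, brain: ~{brain_watts:.0f} W")  # ~97 W, ~19 W
```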

[+] bh42222|14 years ago|reply
> Allen writes that "the Law of Accelerating Returns (LOAR). . . is not a physical law." I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk.

Oh, that's a terrible point! Thermodynamic laws are nothing like predictions about the future. I would have thought linguistic sleight of hand like this was beneath Kurzweil.

> Allen's statement that every structure and neural circuit is unique is simply impossible. That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome.

The design of the human brain is not entirely contained in the genome!

As soon as we mapped the human genome, we were faced with a paradox. How come the complexity difference between us and mice, for example, is not proportional to the difference in our genomes?

Here's an article from 2002, "Just 2.5% of DNA turns mice into men": http://www.newscientist.com/article/dn2352-just-25-of-dna-tu...

In other words, if you look at just how the genomes differ, then humans and mice ought to be a lot more similar than we are.

We have since come to find out just what a huge role the feedback-interactions of DNA and its products, like proteins and all kinds of RNA, play in the development of life.

This staggeringly complex feedback mechanism is why, despite the mapping of the human genome, medical progress still remains excruciatingly slow. Much, much faster than before! But not nearly as fast as we had hoped when the human genome was first mapped.

> Note that epigenetic information (such as the peptides controlling gene expression) do not appreciably add to the amount of information in the genome.

This is true in that they don't add much to the genome. But it is profoundly wrong in that they do add hugely to the actual resulting phenotype.

Kurzweil continues in this same vein for a while. I don't know if he has just never bothered to look into the latest research or if his understandably strong desire to not die has resulted in a huge confirmation bias.

When Kurzweil talks about the general trend of scientific progress I tend to agree with him. But neither Paul Allen nor anyone else disagrees with the notion that we will reach the singularity at some point in the future.

The argument is about the timing. And timing the future, is like timing the stock market, something I don't care to try to do.

But when Kurzweil attempts to convince the reader that the singularity is near by using specific examples, that's when I start to disagree with him. Because once he starts being specific, it becomes easy for me to see where he is wrong, factually, objectively wrong.

[+] endtime|14 years ago|reply
>Oh that's a terrible point! Thermodynamics laws are nothing like predictions about the future. I would have thought linguistic slight of hand like this is beneath Kurzweil.

Could you elaborate on this? I'm not a huge Kurzweil fan, but as far as I can tell he's saying something reasonable here - that when he talks about LOAR, he's describing a phenomenon rather than a physical process, and that this is an accepted usage of the word "law". I don't think he's playing semantic tricks so much as responding to a semantic complaint.

[+] wnewman|14 years ago|reply
It seems to me that Kurzweil is on rather strong grounds when he argues in effect that 25Mbytes is a safe conservative upper bound on the information needed to specify a human infant brain. The relevant information content of the epigenetic stuff is unlikely to be tens of megabytes, and extremely unlikely to be hundreds of megabytes. Otherwise, it's hard to see how we could've overlooked such a high proportion of non-DNA design information being passed around in all the work being done on genetics. It's also hard to see how so much extra information would stay stable against mutation-ish pressures unless its copy error rate was much lower than DNA, and hard to see how we'd've overlooked all the machinery that would accomplish that.

Moreover, I think 25M bytes is probably a very conservative upper bound, so that the relevant uncompressable complexity of what computer scientists need to design for general AI is likely no more than 1M bytes. A lot of actual brain stuff is likely to be description of the physical layer that silicon engineers won't care about, because they do the physical layer in a completely different way (silicon and masks and resists and hardest of hard UV and two low digital voltages, not wet tendrils groping toward each other in the dark and washing each other with neurotransmitters). A significant amount of actual brain stuff is likely to be application layer stuff that we don't need (e.g., the Bearnaise sauce effect, and fear of heights and snakes) and optimizations that we don't strictly need (all sorts of shortcuts for visual processing and language grammar and so forth, when more general-purpose mechanisms would still suffice to pass a Turing test). A lot of brain stuff is likely to be stuff in common with a fish, much of which we already know how to implement from scratch. And all brain stuff seems pretty likely to be encoded rather inefficiently: lots of twisty little protein substructures and nucleic acid binding sites are unlikely to be nearly as concise as the kind of mathematical or programming language notation that describes what's going on.

When Kurzweil writes "do not appreciably add" I understand him to be willing to stand by roughly the quantitative information theoretical claims I made at first (25Mbytes, tens of Mbytes, hundreds of Mbytes). When you write "profoundly wrong ... add hugely" I am unable to tell what you are claiming. How many uncompressable bits of design information you are talking about? Perhaps you believe that natural selection pounded out and mitosis reliably propagates 200M uncompressable bytes of brain design information? or 1G bytes? As above, I think that is probably false. Or perhaps I should read "hugely" as "vitally" and understand that you merely mean that the epigenetic information might be less than a million uncompressable bytes but still if you corrupt it badly you have a dead or hopelessly moronic infant. If that's what you mean, I think you are factually correct, but also don't think that that fact contradicts Kurzweil's argument.

[+] api|14 years ago|reply
Another response:

Kurzweil also ignores economics. The advance of technology is driven in part by economic forces. Computing power may stagnate not because we have reached physical limits but because present-day computers are good enough for what 98% of the market wants.

I see this trend developing. If anything, the trend in consumer computing is toward less powerful but lower-power and more portable computing devices. My current laptop -- a Macbook Air -- is actually slower than my previous laptop. But it is more portable and uses less energy. And it does everything I want. I don't need more power right now.

The only areas driving the performance end are gaming, high performance computing, and high-capacity data centers. How long will those go until they too are basically satiated?

We've seen this in other areas. The envelope for aviation maxed out in the 1970s with things like the U2. Space flight seems to just now be emerging from a long coma with things like SpaceX, but on closer examination SpaceX is just reviving 1960s ideas and doing them at a lower cost with modern control systems and materials technology.

My other reply about energy deals with supply-side limits to growth. This response deals with demand limits to growth.

[+] hexagonc|14 years ago|reply
I don't think Kurzweil ignores economics at all. In fact, many of his arguments are based largely on economics, such as the cost to obtain a given amount of computation. It is undeniable that this cost has gone down steadily with time. Computation is so cheap that cloud computing (e.g., EC2, Azure) gives individual developers access to more computing power than most know what to do with.

Your example of the MacBook Air is invalid because the MacBook Air of today isn't priced principally on performance -- it is priced mostly by build quality and performance/watt/kilogram. Never before have computers had a better performance/watt/kilogram profile. You have to consider the whole package.

The argument that there is less need for performance is also flawed. It is not that humans have less need for computation, it's that the distribution of computation is changing. As mentioned before, a lot of computation is moving to the cloud. In many cases, this is simply the most efficient place for it to be; instead of having an abundance of computation sitting mostly unused on a laptop or desktop, computation is becoming a service that is used on demand. Think of the computation required for cloud services like speech-to-text processing or Google Goggles. Most of the computation is farmed out to servers in the cloud, and only a minimal amount happens on the device.

I'm not even sure I agree that the decrease in local computational needs is a long term trend. For one thing, migration to the cloud can only continue as long as network bandwidth keeps up with cloud data demands. If network bandwidth is not able to keep up then I predict we'll see local computational needs spike again.

Even with some computation siphoned off to the cloud, AI will create a whole new class of applications for local computation as well, because it is so data intensive. When the next crop of AI applications emerges and becomes more common, the early algorithms will probably be inefficient, which will itself cause a spike in computational needs. The applications aren't here yet, but there are many of us (including myself) working hard to change that.

[+] lupatus|14 years ago|reply
What the market wants is every map, book, song, movie, game, and poem ever made to be instantly accessible, searchable, and reviewable. They want their work to be autosyncing, autobackedup, and to follow them from device to device. They want to be able to securely talk with friends and family at any time, to publicly talk with friends and family at any time, to discover new friends and family, and to be able to completely disconnect from friends and family at will. They want intelligent tools that keep them from making dumb decisions, tools to help them make even better decisions, and tools that won't get in the way of them making dumb decisions.

I think that the market for computing power has a looong way to go before demand taps out.

[+] cpeterso|14 years ago|reply
Moore's Law gives us more for our money, where "more" can be more speed or more (cheaper) devices at the same cost.
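The "more for the money" cadence is easy to quantify. A minimal sketch, assuming the textbook two-year doubling period (an idealization, not a measured figure):

```python
# Compound growth in transistors (or operations) per dollar under an
# assumed Moore's Law cadence. The 2x-per-2-years figure is the
# textbook idealization, used here purely for illustration.
def transistors_per_dollar(years, start=1.0, doubling_period=2):
    return start * 2 ** (years / doubling_period)

# After a decade, a fixed budget buys ~32x the transistors --
# spendable as more speed, or as more (cheaper) devices.
print(transistors_per_dollar(10))  # 32.0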
[+] Florin_Andrei|14 years ago|reply
Gaming alone will keep driving computer growth for a long long time.
[+] shin_lao|14 years ago|reply
Scientific progress is not just a question of intelligence.

Therefore, even if computers become more intelligent than humans, it is doubtful a "singularity" will occur.

[+] loup-vaillant|14 years ago|reply
If not, then what?

I agree with Alan Kay when he says that IQ << knowledge << outlook. But all three happen in the brain regardless.

Imagine we manage to build a machine that produces more insights than Newton did. That particular form of intelligence would be quite likely to trigger a singularity, don't you think?

[+] suivix|14 years ago|reply
It's startling to me that everything I do comes from 50 megabytes of source code.
[+] bh42222|14 years ago|reply
> It's startling to me that everything I do comes from 50 megabytes of source code.

This fact was a result of the mapping of the human genome, and has since been proven wrong.

But like many such facts it has a certain "stickiness" to the human brain. I expect to be hearing it for many years.

I tried finding the paper which found that the complexity of the mRNA or tRNA (or some other kind, I forget) produced in the brain matches the complexity difference between mice and humans almost exactly, unlike the difference between the two genomes.

It also turns out that that type of RNA is very fragile and very easily mutated compared to DNA. And evolution does like to follow the path of least resistance. But my google-fu fails me.

tl;dr: It's not 50 megabytes, shit's complicated.
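For reference, here is the back-of-the-envelope arithmetic behind figures like the "50 megabytes" being debated. The ~3.2 billion base-pair count is the commonly cited size of the haploid human genome; the compression ratio is purely an assumed, illustrative value:

```python
# Back-of-the-envelope for the "genome as source code" figure.
# The compression ratio below is an assumption for illustration,
# not a measured value.
base_pairs = 3_200_000_000
bits_per_base = 2                       # A, C, G, T -> 2 bits each
raw_bytes = base_pairs * bits_per_base // 8
print(raw_bytes / 1e6)                  # 800.0 -- ~800 MB uncompressed

# Estimates like Kurzweil's assume heavy redundancy (repeats,
# duplicated genes), giving compressed figures in the tens of MB.
assumed_ratio = 16                      # illustrative only
print(raw_bytes // assumed_ratio / 1e6) # 50.0 -- the disputed "50 MB"
```

Which is exactly why the debate above matters: the raw figure is uncontroversial, but the compressed figure depends entirely on how much of the genome (plus whatever RNA machinery sits on top of it) counts as redundant.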

[+] doctoboggan|14 years ago|reply
Don't forget about the massive amount of information stored in society (customs, language, etc). You have to download all of this info through the course of your life.
[+] civilian|14 years ago|reply
"The genetic code does not, and cannot, specify the nature and position of every capillary in the body or every neuron in the brain. What it can do is describe the underlying fractal pattern which creates them." --Academician Prokhor Zakharov, "Nonlinear Genetics"

Whatever the exact number is, the brain's connections are much more numerous than the data in my DNA. All information flows from DNA. Chemistry between RNA and proteins and everything else does change the end product, but even that chemistry is largely controlled by DNA. (The DNA only produces proteins which do the chemistry it wants done. Or there is chemistry already happening in the environment, which the development strategy accounts for, but which doesn't add much, if any, complexity to the brain.)

And my quote, by the way, is from a fictional person.

[+] rbanffy|14 years ago|reply
It hasn't been so since before you were born. Your experiences in your mother's womb have shaped your brain. All your DNA contains is the basic rules for assembling a generic human brain. What happens to it afterwards has far greater consequence for what you become.

I like to joke that I was born an engineer. While it's true I have always been curious about all things technological (I was born during the height of the Apollo program and, as a kid, wanted to be an astronaut), it was parental support (lego-like kits with transmissions, gears, and motors, good books, and a good school) that landed me in one of the most prestigious engineering schools in Brazil, where I was further "perfected" by some good teachers (and some awful ones; you have to learn to avoid them, after all).

[+] natch|14 years ago|reply
Well of course that's just the substrate. A lot of what you do comes from what you've learned and continue to learn from your environment.
[+] nickpinkston|14 years ago|reply
It also doesn't count the crazy amount of epigenetics that occurs, or the specialization that emerges from interaction with the environment of other cells and the outer world.

Still, 50 MB is crazy small for what you'd think the genome would be.

[+] sp332|14 years ago|reply
Don't forget the runtime environment though :)
[+] archgoon|14 years ago|reply
Running on a very complicated instruction set.