
Simon321 | 1 year ago

He coined the concept of the 'singularity' in the sense of machines becoming smarter than humans. What a time for him to die, with all the advancements we're seeing in artificial intelligence. I wonder what he thought about it all.

>The concept and the term "singularity" were popularized by Vernor Vinge first in 1983 in an article that claimed that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole",[8] and later in his 1993 essay The Coming Technological Singularity,[4][7] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.

Looks like he was spot on.


gumby|1 year ago

Just to clarify, the “singularity” conjectures a slightly different and more interesting phenomenon, one driven by technological advances, true, but its definition was not those advances.

It was more the second derivative of future shock: technologies and culture that enabled and encouraged faster and faster change until the curve bent essentially vertical… asymptoting to a mathematical singularity.

An example he spoke of was that, close to the singularity, someone might found a corporation, develop a technology, make a profit from it, and then have it be obsolete by noon.

And because you can’t see the shape of the curve on the other side of such a singularity, people living on the other side of it would be incomprehensible to people on this side.

Ray Lafferty’s 1965 story “Slow Tuesday Night” explored this phenomenon years before Toffler wrote “Future Shock”.

PaulHoule|1 year ago

Note that the "Singularity" turns up in the novel

https://en.wikipedia.org/wiki/Marooned_in_Realtime

where people can use a "Bobble" to freeze themselves in a stasis field and travel in time... forward. The singularity is some mysterious event that causes all of unbobbled humanity to disappear, leaving the survivors wondering, even tens of millions of years later, what happened. As such it is one of the best premises ever in sci-fi. (I am left wondering, though, if the best cultural comparison is "The Rapture" some Christians believe in, making this more of a religiously motivated concept than sound futurism.)

I've long been fascinated by this differential equation

  dx
  -- = x^2
  dt
which has solutions that look like

  x = 1/(t₀-t)
which notably blows up at time t₀. It's a model of an "intelligence explosion" where improving technology speeds up the rate of technological progress, but the very low growth when t ≪ t₀ could also be a model for why it is hard to bootstrap a two-sided market, why some settlements fail, etc. About 20 years ago I was very interested in ecological accounting, wondering if we could outrace resource depletion and related problems. I did a literature search for people developing models like this further and was pretty disappointed not to find much, though it did appear as a footnote in the ecology literature here and there. Even papers like

https://agi-conf.org/2010/wp-content/uploads/2009/06/agi10si...

seem to miss it. (Surprised the lesswrong folks haven't picked it up, but they don't seem too mathematically inclined.)
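The blow-up is easy to see numerically. A minimal sketch (not from the comment; the function name, step size, and cutoff are my choices): integrating dx/dt = x² with a plain Euler step from x(0) = 1, the solution should escape toward infinity near the exact blow-up time t₀ = 1/x₀ = 1.

```python
# Integrate dx/dt = x^2 with a naive Euler step and report the time at
# which x exceeds a large cap -- a stand-in for the finite-time blow-up
# of the exact solution x(t) = 1/(t0 - t).
def euler_blowup(x0=1.0, dt=1e-4, cap=1e6):
    """Step dx/dt = x^2 forward until x exceeds `cap`; return that time."""
    t, x = 0.0, x0
    while x < cap:
        x += dt * x * x   # Euler update: x' = x^2
        t += dt
    return t

t_blow = euler_blowup()
print(t_blow)  # close to the exact blow-up time t0 = 1/x0 = 1.0
```

Note how long the solution dawdles near x ≈ 1 before the runaway phase: that slow early stretch is the "hard to bootstrap" regime the comment mentions.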

---

Note I don't believe in the intelligence explosion, because what we've seen with "Moore's law" recently is that each generation of chips is getting much more difficult and expensive to develop, while the benefits of shrinks are themselves shrinking; in fact we might be rudely surprised if the state-of-the-art chips of the near future (and possibly 2024) burn up pretty quickly. It's not so clear that chipmakers would have continued to invest in a new generation if governments weren't piling huge money into a "great powers" competition... That is, we might already be past the point of economic returns.

gcr|1 year ago

With respect, we don't know if he was spot on. Companies shoehorning language models into their products is a far cry from the transformative societal change he described. Nothing like a singularity has yet happened at the scale he describes, and it might not happen without more fundamental shifts/breakthroughs in AI research.

CuriouslyC|1 year ago

What we're seeing right now with LLMs is like music in the late '30s after the invention of the electric guitar. At that point people still had no idea how to use it, so they were treating it like an amplified acoustic guitar. It took almost 40 years for people to come up with the idea of harnessing feedback and distortion to use the guitar to create otherworldly soundscapes, and another 30 beyond that before people even approached the limit of the guitar's range with pedals and such.

LLMs are a game changer that is going to enable a new programming paradigm as models get faster and better at producing structured output. There are entire classes of app that couldn't exist before because there was a non-trivial "fuzzy" language problem in the loop. Furthermore, I don't think people have a conception of how good these models are going to get within 5-10 years.
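To make the "structured output in the loop" point concrete, here is a hypothetical sketch (entirely my illustration; `fake_llm_reply` stands in for a real model call, which the comment does not specify): an app can only wire a language model into its logic if the model's reply parses into a shape the rest of the program can depend on.

```python
import json

# Stand-in for a model's reply to a prompt like
# "Classify this review's sentiment; answer as JSON." (hypothetical)
fake_llm_reply = '{"sentiment": "positive", "confidence": 0.93}'

def parse_structured(reply: str) -> dict:
    """Validate that the model emitted the JSON shape the app depends on."""
    data = json.loads(reply)           # raises ValueError on non-JSON chatter
    if set(data) != {"sentiment", "confidence"}:
        raise ValueError("schema mismatch")
    return data

result = parse_structured(fake_llm_reply)
print(result["sentiment"])  # positive
```

The harder the model makes this validation step fail, the less of the surrounding app you can build; reliable structured output is what lets the "fuzzy" step sit inside ordinary program control flow.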

throw1234651234|1 year ago

Singularity doesn't necessarily rely on LLMs by any means. It's just that communication is improving and the number of people doing research is increasing. Weak AI is icing on top, let alone LLMs, which are being shoe-horned into everything now. VV clearly adds these two other paths:

            o Computer/human interfaces may become so intimate that users
              may reasonably be considered superhumanly intelligent.
            o Biological science may find ways to improve upon the natural
              human intellect.
https://edoras.sdsu.edu/~vinge/misc/singularity.html

jart|1 year ago

> Within thirty years, we will have the technological means to create superhuman intelligence.

Blackwell.

> o Develop human/computer symbiosis in art: Combine the graphic generation capability of modern machines and the esthetic sensibility of humans. Of course, there has been an enormous amount of research in designing computer aids for artists, as labor saving tools. I'm suggesting that we explicitly aim for a greater merging of competence, that we explicitly recognize the cooperative approach that is possible. Karl Sims [22] has done wonderful work in this direction.

Stable Diffusion.

> o Develop interfaces that allow computer and network access without requiring the human to be tied to one spot, sitting in front of a computer. (This is an aspect of IA that fits so well with known economic advantages that lots of effort is already being spent on it.)

iPhone and Android.

> o Develop more symmetrical decision support systems. A popular research/product area in recent years has been decision support systems. This is a form of IA, but may be too focussed on systems that are oracular. As much as the program giving the user information, there must be the idea of the user giving the program guidance.

Cicero.

> Another symptom of progress toward the Singularity: ideas themselves should spread ever faster, and even the most radical will quickly become commonplace.

Trump.

> o Use local area nets to make human teams that really work (ie, are more effective than their component members). This is generally the area of "groupware", already a very popular commercial pursuit. The change in viewpoint here would be to regard the group activity as a combination organism. In one sense, this suggestion might be regarded as the goal of inventing a "Rules of Order" for such combination operations. For instance, group focus might be more easily maintained than in classical meetings. Expertise of individual human members could be isolated from ego issues such that the contribution of different members is focussed on the team project. And of course shared data bases could be used much more conveniently than in conventional committee operations. (Note that this suggestion is aimed at team operations rather than political meetings. In a political setting, the automation described above would simply enforce the power of the persons making the rules!)

Ingress.

> o Exploit the worldwide Internet as a combination human/machine tool. Of all the items on the list, progress in this is proceeding the fastest and may run us into the Singularity before anything else. The power and influence of even the present-day Internet is vastly underestimated. For instance, I think our contemporary computer systems would break under the weight of their own complexity if it weren't for the edge that the USENET "group mind" gives the system administration and support people!) The very anarchy of the worldwide net development is evidence of its potential. As connectivity and bandwidth and archive size and computer speed all increase, we are seeing something like Lynn Margulis' [14] vision of the biosphere as data processor recapitulated, but at a million times greater speed and with millions of humanly intelligent agents (ourselves).

Twitter.

> o Limb prosthetics is a topic of direct commercial applicability. Nerve to silicon transducers can be made [13]. This is an exciting, near-term step toward direct communication.

Atom Limbs.

> o Similar direct links into brains may be feasible, if the bit rate is low: given human learning flexibility, the actual brain neuron targets might not have to be precisely selected. Even 100 bits per second would be of great use to stroke victims who would otherwise be confined to menu-driven interfaces.

Neuralink.

---

https://justine.lol/dox/singularity.txt

beambot|1 year ago

Probably just a question of the time constant / zoom on your time axis. When zoomed in up close, an exponential looks a lot like a bunch of piecewise linear components, where big breakthroughs are just discontinuous changes in slope...
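That zoom argument can be checked directly. A small sketch (my own, with an arbitrary time constant): over a window short compared to the time constant τ, the relative gap between e^(t/τ) and its tangent line at t = 0 stays tiny, so the curve is locally indistinguishable from a straight segment.

```python
import math

def max_linearization_error(tau=10.0, window=0.1, steps=100):
    """Worst relative error of the tangent at t=0 over [0, window]."""
    worst = 0.0
    for i in range(steps + 1):
        t = window * i / steps
        exact = math.exp(t / tau)
        linear = 1.0 + t / tau          # tangent line at t = 0
        worst = max(worst, abs(exact - linear) / exact)
    return worst

print(max_linearization_error())  # on the order of (window/tau)^2 / 2
```

With window/τ = 0.01 the worst relative error is about 5e-5, which is the sense in which "zoomed in, an exponential is piecewise linear."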

jimbokun|1 year ago

Still has 6 years to be proven correct.

mnsc|1 year ago

Imagine the first LLM to suggest an improvement to itself that no human has considered. Then imagine what happens next.

angiosperm|1 year ago

It has, anyway, already had a profound effect on the IT job market.

trenchgun|1 year ago

He popularized and advanced the concept, but it originally came from von Neumann.

nabla9|1 year ago

The concept predates von Neumann.

The first known person to present the idea was the mathematician and philosopher Nicolas de Condorcet, in the late 1700s. Not surprising, because he also laid out most of the ideals and values of modern liberal democracy as they are now. Amazing philosopher.

He basically invented the idea of ensemble learning (known as boosting in machine learning).

Nicolas de Condorcet and the First Intelligence Explosion Hypothesis https://onlinelibrary.wiley.com/doi/10.1609/aimag.v40i1.2855
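The ensemble-learning connection is Condorcet's jury theorem, and it is easy to simulate (a sketch of my own, not from the linked paper; parameters are arbitrary): if each voter is independently correct with probability p > 1/2, the accuracy of the majority vote rises toward 1 as the jury grows, which is the same logic that makes a boosted ensemble of weak learners strong.

```python
import random

def majority_accuracy(n_voters, p=0.6, trials=20000, seed=0):
    """Monte Carlo estimate of how often a majority of n voters,
    each independently correct with probability p, gets it right."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        votes = sum(rng.random() < p for _ in range(n_voters))
        if votes > n_voters / 2:       # strict majority is correct
            correct += 1
    return correct / trials

# A jury of 51 weak (60%-accurate) voters beats any single voter.
print(majority_accuracy(1), majority_accuracy(51))
```

With p = 0.6 a single voter is right ~60% of the time, while 51 of them voting together are right over 90% of the time.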

0xdeadbeefbabe|1 year ago

> He wrote that he would be surprised if it occurred before 2005 or after 2030.

Being surprised is also an exciting outcome. Was he thinking about that too?

holoduke|1 year ago

"We should destroy all AI out there before it's taking over"