top | item 36464924


goldenshale|2 years ago

As if anyone is good at predicting the future. Please can we stop acting like expertise equates to fortune telling capabilities?! Nobody has any clue what a 1000x sized GPT model could do, and anybody who makes strong claims is a charlatan. In this age of paranoid AI risk cultists we need to cultivate humility and calm, a willingness to follow data rather than beliefs and predictions.


p-e-w|2 years ago

> paranoid AI risk cultists

There is broad consensus among experts that a hypothetical strong AI would be a threat, and potentially an existential threat, to humanity. While not everyone agrees on details like timeline and alignment issues, the idea that AI is dangerous is not a cult, it's the mainstream view.

Climate scientists cannot "predict the future" with certainty either. That doesn't mean their warnings are hot air, and neither are the warnings from AI safety experts. It seems like the educated masses are currently in denial about AI in much the same way as the uneducated masses have been in denial about climate change for a while.

Risk assessment doesn't require understanding. I don't have to understand how a venomous snake senses prey in order to know that the snake is a potential threat to me. In fact, the less I know about the snake, the higher the assessed risk should be, since the uncertainty is higher as well.

goldenshale|2 years ago

What nonsense. I've spent over a decade 100% focused on AI, and the broad consensus among everyone I've worked with is not to be that concerned at all. The only "consensus" comes from a small group of self-proclaimed experts who make a lot of noise, because they get lots of press coverage if they scream and shout while making predictions based on zero scientific evidence.

We can understand the physics of greenhouse gases and take measurements of earth systems to build evidence for models and theories. (Many of which are nonetheless very inaccurate beyond short time horizons.) Show me any evidence for AI risk today beyond people's theories and beliefs.

The best predictor of the future is the past, not people's wild ideas about what the future could be. I'm not about to sit here feeling scared because there is more uncertainty that our matrix multiplies are about to go rogue. There are no AGI experts or AI risk experts, because we don't have any of these systems to study and analyze. What we have is people forming beliefs about their own predictions about systems which are unknowable.

civilized|2 years ago

"Expertise" in a speculative concept like AI risk is not remotely comparable to expertise in a scientific field like climate change.

There are two definitions of expertise:

1. Knowing more than most people about a topic. This is the type of expertise that wins the Quiz Bowl.

2. Actual mastery of a field, such that predictions and analyses generated by a person possessing such mastery are reliable. This is the type of expertise that fixes your home or car.

The first definition is easily verifiable, and due to the availability heuristic, it is often presented as a legitimate proxy for the second. But it isn't really, not in general.

If I know more about horoscopes than most people, I am a horoscope expert. But it doesn't mean I can be relied on to predict any of the things horoscopes supposedly predict. It's the same with AI risk. Expertise in AI risk is not a basis for credibility because AI risk is not a real scientific field.

Climate change is a real field of science. AI risk is Nostradamic prognostication by people who know more than you.

antifa|2 years ago

> a hypothetical strong AI would be a threat, and potentially an existential threat, to humanity.

I wish a cool scifi robot woke up one day and violently optimized all of humanity into paperclips, instead I live in the real world where the jobs are going to evaporate like water in a newly installed desert and the "let them eat cake" will get increasingly louder and blue-check-markier.

ilaksh|2 years ago

Warning people about potential extreme risks from advanced AI does not make you a cultist. It makes you a realist.

I love GPT and my whole life and plans are based on AI tools like it. But that doesn't mean that if you make it, say, 50% smarter and 50 times faster, it can't cause problems for people. All it takes is a system with superior reasoning capability being given an overly broad goal.

In less than five years, these models may be thinking dozens of times faster than any human. Human input or activities will appear to be mostly frozen to them. The only way to keep up will be deploying your own models.

So to effectively lose control you don't need the models to "wake up" and become living simulations of people or anything. You just need them to get somewhat smarter and much faster.

We have to expect them to get much, much faster. The models, software, and hardware for this specific application all have room for improvement. And there will be new paradigms/approaches that are even more efficient for this application.

For hyperspeed AI to not come about would be a total break from computing history.

goldenshale|2 years ago

A realist is someone who accepts reality as it is, not as they anxiously envision it could be. Life is too short and attention too precious to fill the meme space with every dreamer's deepest concerns. None of these dramatic X-risk claims is based on anything but beliefs and conjecture.

"Thinking dozens of times faster?" What do you even mean? These are models executing matrix multiplies billions of times faster than our brains propagate information, and they represent knowledge in a manner which is unique and different from human brains. They have no goals, no will, and no inner experience of us being frozen or fast or anything else. We are so prone to anthropomorphize willy-nilly. We evolved in a paradigm of resource competition, so we have drives and impulses to protect, defend, devour, etc., of which AI models have zero.

Anyone who has investigated reinforcement learning knows that we are currently far away from understanding, let alone implementing, systems which can effectively deconstruct abstract goals into concrete sub-tasks, yet people are soooo sure that these models are somehow going to all of a sudden be an enormous risk. Why don't we wait until there is even the slightest glimmer of evidence before listening to these prophets of doom?

This pseudo-intellectual belief structure is very cult-like. It's an end-of-the-world scenario that only an elite few can really understand, and they, our saviors, our band of reluctant nerd heroes, are screaming from the pulpit to warn us of utter destruction. The actual end of days. These "black box" (er, I mean, we engineered them that way after decades of research, but no, nobody really understands them, right?) shoggoths will be so incredibly brilliant that they will be able to dominate all of humanity. They will understand humans so well as to manipulate us out of existence, yet they will be so utterly stupid as to pursue paper clips at all cost.

Maybe instead these models will just be really useful software tools to compress knowledge and make it available to humanity in myriad forms to develop a next level of civilization on top of? People will become more educated and wise, the cost of goods and services will drop dramatically, thereby enriching all of humanity, and life will go on. There are straighter paths from where we are today to this set of predictions than there are to many of the doomsday scenarios, yet it has become hip among the intelligentsia to be concerned about everything. Being optimistic is somehow not real (although the progress of civilization serves as great evidence that optimism is indeed rational), while being a loud-mouthed scaremonger, or a quiet, very serious and concerned intellectual, is seen as respectable. Forget that. All the doomers can go rot in their depressive caves while the rest of us build a badass future for all of humanity. Once Hale-Bopp has passed over I hope everyone feels welcome to come back to the party.

ignoramous|2 years ago

> As if anyone is good at predicting the future.

Whoever predicts the right direction (and when the time is right), and puts money where their mouth is, stands a shot at unseating... the alt man.

  I think the way to use these big ideas is not to try to identify a precise point in the future and then ask yourself how to get from here to there, like the popular image of a visionary. You'll be better off if you operate like Columbus and just head in a general westerly direction. Don't try to construct the future like a building, because your current blueprint is almost certainly mistaken. Start with something you know works, and when you expand, expand westward.
  
  The popular image of the visionary is someone with a clear view of the future, but empirically it may be better to have a blurry one.
paulgraham.com/ambitious.html

JimtheCoder|2 years ago

Sir, this is a discussion forum...