It's a nice article. In a way though it kind of bypasses what I see as the main takeaways.
It's not about AI development, it's about something mentioned earlier in the article: "make as much money as I can". The problems that we see with AI have little to do with AI "development", they have to do with AI marketing and promulgation. If the author had gone ahead and dammed the creek with a shovel, or blown off his hand, that would have been bad, but not that bad. Those kinds of mistakes are self-limiting because if you're doing something for the enjoyment or challenge of it, you won't do it at a scale that creates more enjoyment than you personally can experience. In the parable of the CEO and the fisherman, the fisherman stops at what he can tangibly appreciate.
If everyone working on and using AI were approaching it like damming a creek for fun, we would have no problems. The AI models we had might be powerful, but they would be funky and disjointed because people would be more interested in tinkering with them than making money from them. We see tons of posts on HN every day about remarkable things people do for the gusto. We'd see a bunch of posts about new AI models and people would talk about how cool they are and go on not using them in any load-bearing way.
As soon as people start trying to use anything, AI or not, to make as much money as possible, we have a problem.
The second missed takeaway is at the end. He says Anthropic is noticing the coquinas as if that means they're going to somehow self-regulate. But in most of the examples he gives, he wasn't stopped by his own realization, but by an external authority (like parents) telling him to stop. Most people are not as self-reflective as this author and won't care about "winning zero sum games against people who don't necessarily deserve to lose", let alone about coquinas. They need a parent to step in and take the shovel away.
As long as we keep treating "making as much money as you can" as some kind of exception to the principle of "you can't keep doing stuff until you break something", we'll have these problems, AI or not.
> As soon as people start trying to use anything, AI or not, to make as much money as possible, we have a problem.
I noticed that, around the turn of the century, when "The Web" was suddenly all about the Benjamins.
It's sort of gone downhill, since.
For myself, I've retired, and putter around in my "software garden." I do make use of AI, to help me solve problems, and generate code starts, but I am into it for personal satisfaction.
> it's about something mentioned earlier in the article: "make as much money as I can".
I think it's a little deeper than that. It's the democratization of capability.
If few people have the tools, the craftsman is extremely valuable. He can make a lot of money without deep knowledge or real skill, because most people don't have the tools and skills to catch up to where he is. He is wealthy on frontloaded effort alone.
If everyone has the same tools, the craftsman still has value, because of the knowledge and skillset developed over time. He makes more money because his skills are valuable and remain scarce; he's incentivized to further this skillset to stay above the pack, continue to be in demand, and make more money.
If the tools do the job for you, the craftsman has limited value. He's an artifact. No matter how much he furthers his expertise, most people will just turn the tool on and get good enough product.
We're in between phase 2 and 3 at the moment. We still test for things like algorithm design and ask questions in interviews about the complexity of approaches. A lot of us still haven't moved on to the "ok but now what?" part of the transition.
The value now lies less in knowing how the automation works and improving our knowledge of the underlying design, and more in knowing how to use the tools in ways that produce more value than the average Joe can. It's a hard transition for people who grew up thinking the former was all you needed to get a comfortable or even lucrative life.
I'm past the SDE-interview phase of life now, and in seeking engineers I'm looking less for people who know how to build a version of the tool and more for people who operate in the present, have accepted the change, and want to use what they have access to, adding human utility to make the sum of the whole greater than the parts.
To me the best part of building software was the creativity. That part hasn't changed. If anything it's more important than ever.
Ultimately we're building things to be consumed by consumers. That hasn't changed. The creek started flowing in a different direction, and your job in this space is not to keep putting rocks where the water used to go, but to accept that things are different and adapt.
This is such a well-written response. There's something intentionally soothing about this post that slowly turns into a jarring form of self-congratulation as it goes along. Congratulations for knowing there's a limit to wrecking your parents' property. Congratulations for being able to appreciate the sand on the beach, in some no doubt instagrammable moment of existential simplicity. Congratulations for being so smart that you could have blown up your hand. And for "Leetcoding", whatever the fuck that means. And for claiming you quit a shady job because you got bored (but possibly also grew a conscience). And then topped off by the final turn: "This is, of course, about artificial intelligence development".

I'd only add one thing to your analysis: We've got a demo right here of a psyche that would prefer love to money (but mostly both), and it's still determined to foist bad things onto the world in a load-bearing way, as a bid for either, or whatever it can get. My parents used to call that "a kid that doesn't care if he gets good or bad attention, as long as he gets attention."

I think that's the root driver for almost all the tech billionaires of the past 20 years, and the one thing that unites Bezos, Zuck, Jobs, Dorsey, Musk... it's: "Look dad, I didn't just take your money. I'm so smart I could'a blown off my hand with all those fireworks you bought me, but see? Two hands! Look how much money I made from your money! Why aren't you proud of me?! Where can I find love? Maybe if I tell people what a leetcoder I am and how I could be making BAD AI but I'm just making GOOD AI, then everyone will love me."
Don't get me wrong, I'm not immune to these feelings either. I want to do good work and I want people to love what I do. But there's something so... so fucking nakedly exhibitionist and narcissistic about these kinds of posts. Like, so, GO FUCKING LAY WITH CLAMS, write a novel, the world is waiting for it if you're really a genius. Have the courage to say you have a conscience if you actually do. Leave the rest of us alone and stop polluting a world you don't understand with your childish greed and self-obsession.
This is an excellent essay, and I feel similar to the author but couldn't express it as nicely.
However, if we are counting on AI researchers to take the advice and slow down, then I wouldn't hold my breath. The author indicated they stepped away from a high-paying finance job for moral reasons, which is admirable. But Wall Street continues on and does not lack for people willing to play the "make as much money as you can" game.
A finance job is a zero-sum game. Most tech jobs are negative sum, in that they make the world worse. You have the wrong takeaway here. Companies like Amazon and Google and OpenAI and the like are not-so-slowly destroying our planet and companies like Citadel just move money around.
> However, if we are counting on AI researchers to take the advice and slow down, then I wouldn't hold my breath. The author indicated they stepped away from a high-paying finance job for moral reasons, which is admirable. But Wall Street continues on and does not lack for people willing to play the "make as much money as you can" game.
I doubt OP is counting on it; it's more that they're expressing what an optimal world would look like, so people can work towards it if they feel like it, or just to put the idea out there.
The story of playing at damming the creek, or in the sand at the seaside, is wholesome and brought a smile to my face. Cracking the "puzzle" is almost the bad ending of the game, if you don't get any fun out of playing it anymore.
People should spend more of their time doing things because they're fun, not because they want to get better at it.
Maybe the apocalypse will happen in our lifetime, maybe not. I intend to have fun as much as I can in my life either way.
“(talking about when he tells his wife he’s going out to buy an envelope) Oh, she says well, you’re not a poor man. You know, why don’t you go online and buy a hundred envelopes and put them in the closet? And so I pretend not to hear her. And go out to get an envelope because I’m going to have a hell of a good time in the process of buying one envelope. I meet a lot of people. And, see some great looking babes. And a fire engine goes by. And I give them the thumbs up. And, and ask a woman what kind of dog that is. And, and I don’t know. The moral of the story is, is we’re here on Earth to fart around. And, of course, the computers will do us out of that. And, what the computer people don’t realize, or they don’t care, is we’re dancing animals.”
― Kurt Vonnegut
The author seems concerned about AI risk -- as in, "they're going to kill us all" -- and that's a common LW trope.
Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.
As Dwarkesh once asked:
> One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.
> Shouldn’t we be expecting that kind of stuff?
I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.
Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans. And this hasn't changed at all over the past five years. Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.
More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.
Aren’t semiautonomous drones already killing soldiers in Ukraine? Can you not imagine a future with more conflict and automated killing? Maybe that’s not seen as AI risk per se?
The thing about LLMs is that they're trained exclusively on text, and so they don't have much insight into these sorts of problems. But I don't know if anyone has tried making a multimodal LLM that is trained on x-ray tomography of parts under varying loads tagged with descriptions of what the parts are for - I suspect that such a multimodal model would be able to give you a good answer to that question.
No, the LLMs aren't going to kill us all. Neither are they going to help a determined mass murderer to get us all.
They are, however, going to enable credulous idiots to drive humanity completely off a cliff. (And yes, we're seeing that in action right now). They don't need to be independent agents. They just need to seem smart.
A perfect AI isn't a threat: you can just tell it to come up with a set of rules whose consequences would never be things that we today would object to.
A useless AI isn't a threat: nobody will use it.
LLMs, as they exist today, are between these two. They're competent enough to get used, but will still give incorrect (and sometimes dangerous) answers that the users are not equipped to notice.
Like designing US trade policy.
> Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.
What does the latter have to do with the former?
> Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans.
Why would the destruction of humanity need to use a novel mechanism, rather than a well-known one?
> And this hasn't changed at all over the past five years.
They're definitely different now than they were 5 years ago. I played with the DaVinci models back in the day; nobody cared because that really was just very good autocomplete. Even if there was a way to get the early models to combine knowledge from different domains, it wasn't obvious how to actually make them do that, whereas today it's "just ask".
> Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.
And write code. Not great code, but "it'll do" code. And use APIs.
> More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.
While I'd agree they lack the competence to do so, I don't see how this matters. Humans are lazy and just tell the machine to do the work for them, give themselves a martini and a pay rise, then wonder why "The Machine Stops": https://en.wikipedia.org/wiki/The_Machine_Stops
The human half of this equation has been shown many times in the course of history. Our leaders treat other humans as machines or as animals, give themselves pay rises, then wonder why the strikes, uprisings, rebellions, and wars of independence happened.
Ironically, the lack of imagination of LLMs, the very fact that they're mimicking us, may well result in this kind of AI doing exactly that kind of thing even with the lowest interpretation of their nature and intelligence — the mimicry of human history is sufficient.
--
That said, I agree with you about the limitations of using them for research. Where you say this:
> I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.
I had similar with NotebookLM, where I put in one of my own blog posts and it missed half the content and re-interpreted half the rest in a way that had nothing much in common with my point. (Conversely, this makes me wonder: how many humans misunderstand my writing?)
> It was only once I got it that I realized I no longer could play the game "make as much money as I can."
Funny, that is what my father taught me when I was 12, because we had compassion. What is it with glorifying all these logic-loving, Spock-like people? Don't you know Captain Kirk was the real hero of Star Trek? Because he had compassion?
If there's money to be made, there will always be someone with a shovel or a truckload of sparklers who is willing to take the risk (especially if the risk can be externalized to the public) and reap the reward.
My understanding is that the author is this superior being trying to accomplish a massive task (damming a beach) while knowing that it could cause problems for these clams. In the real world, Anthropic is trying to accomplish a massive task (building AGI) and they're finally starting to notice the potential impacts this has on people.
Coquinas are clams that bury themselves in the sand very close to the surface [1]. The author worries that while they are playing with the sand, they might accidentally bury coquina clams too deep and kill them because they can no longer reach the surface.
Anthropic apparently is starting to notice the possible danger to others of their work. I'm not sure what they are referring to.
I think that they're saying a little bit of playing around with replacing thinking and composing with automated tools is recoverable, but at an industrial or societal scale the damage is significant. Like the difference between shoveling away some sand with your hands to bury the small creatures temporarily and actually destroying their habitat by "lobbying city council members to put in a groin or seawall, and seriously move that beach sand."
That was a well-written essay with a non-sequitur AI safety bit tacked onto the end. His real-world examples were concrete, and the reason to stop escalating was easy to understand ("don't flood the neighbourhood by building a real dam").
The AI angle is not even hypothetical: there is no attempt to describe or reason about a concrete "x leading to y", just "see, the same principle probably extrapolates".
There is no argument there that is sounder than "the high velocities of steam locomotives might kill you" that people made 200 years ago.
> the high velocities of steam locomotives might kill you
This obviously seems silly in hindsight. Warnings about radium watches or asbestos sound less silly, or even wise. But neither had any solid scientific studies showing clear hazard and risk. Just people being good Bayesian agents, trying to ride the middle of the exploration vs. exploitation curve.
Maybe it makes sense to spend some percentage of AI development resources on trying to understand how they work, and how they can fail.
The progress-care trade-off is a difficult one to navigate, and is clearly more important with AI. I've seen people draw analogies to companies, which have often caused harm in pursuit of greater profits, both purposefully and simply as byproducts: oil spills, overmedication, pollution, ecological damage, bad labor conditions, hazardous materials, mass lead poisoning. Of course, the profit-seeking company is one of the best inventions humans have ever made, but that doesn't mean we shouldn't take "corp safety" seriously. We pass laws on how corps can operate and what they can and cannot do, to limit harms and _align_ them with the goals of society.
So it is with AI. Except corps are made of people, who work at people speed, have vague morals, and are tied to society in ways AI might not be. AI might also be able to operate faster and with less error. So extra care is required.
That AI is dangerous, and that the closer we get to the danger zone, the better it would be if the companies developing these technologies understood that it might be better to slow down and make sure it's safe rather than pushing ahead at maximum speed.
So many articles and comments claim AI will destroy critical thinking in our youth. Is there any evidence that this conviction, which many people share, is even remotely true?
To me it just seems like the same old knee-jerk Luddite response people have had to any powerful new technology that challenges the status quo since the dawn of time. The calculator did not erase math wizards, the television did not replace books, and so on. It just made us better, faster, more productive.
Sometimes there is an adjustment period (we still haven't figured out how to deal with short dopamine hits from certain types of entertainment and social media), but things will balance themselves out eventually.
Some people may go full-on Wall-E, but I for one will never stop tinkering, and many of my friends won't either.
The things I could have done if I had had an LLM as a kid... I think I've learned more in the past two years than ever before.
The major difference is that in order to use a calculator, you need to know and understand the math you're doing. It's a tool you can work with. I always had a calculator for my math exams and I always had bad grades :)
You don't have to know how to program to ask ChatGPT to build yet another app for you. It's a substitute for your brain. My university students get good grades on their take-home exams, but can't spot an off-by-one error in a three-line Go for loop during an in-person exam.
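To make the comparison concrete, here is a made-up example of the kind of loop being described (not the actual exam code); the classic off-by-one is writing `<=` in the loop bound, which walks one index past the end of the slice:

```go
package main

import "fmt"

// sumAll adds up the elements of a slice.
// The classic off-by-one bug would be `i <= len(nums)`,
// which reads one element past the end and panics with
// "index out of range"; the correct bound is `i < len(nums)`.
func sumAll(nums []int) int {
	sum := 0
	for i := 0; i < len(nums); i++ {
		sum += nums[i]
	}
	return sum
}

func main() {
	fmt.Println(sumAll([]int{10, 20, 30})) // prints 60
}
```

A student leaning on ChatGPT can produce this function without ever learning why one bound panics and the other doesn't.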
I would expect people today to be quite a lot worse at mental arithmetic than we used to be before calculators.
And worse at memorizing stuff than before writing.
We have tools to help us with that, and maybe it isn't a big loss? And they also bring new arenas and abilities.
And maybe in the future we will be worse at critical thinking (https://news.ycombinator.com/item?id=43484224), and maybe it isn't a big loss? It is hard to imagine what new abilities and arenas will emerge. Though I think that critical thinking is a worse loss than memory and mental arithmetic. Though, also, we are probably a lot less good at it than we think we are, generally.
But it did. Quick, what's 67 * 49? A math wiz would furrow their brow for a second and be able to spit out an answer, while the rest of us have to pull out a calculator. When you're doing business in person and have to move numbers around, having to stop and use a calculator slows you down. If you don't have a role where that's useful, then it's not a needed skill and you don't notice it's missing, like riding a horse; but that doesn't mean the skill itself wouldn't be useful to have.
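For what it's worth, the trick that wiz would be using is round-and-correct (a standard mental-arithmetic shortcut, not something from the thread):

```latex
67 \times 49 = 67 \times (50 - 1) = 3350 - 67 = 3283
```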
I don't think that's the argument the article was making. It was, to my understanding, a more nuanced question about whether we want to destroy or severely disturb systems at equilibrium by letting AI systems infiltrate our society.
> Sometimes there is an adjustment period (we still haven't figured out how to deal with short dopamine hits from certain types of entertainment and social media), but things will balance themselves out eventually.
One can zoom out a little bit. The issue didn't start with social media, nor with AI. "Star Wars: A New Hope" is, to my understanding, an incredibly good film. It came out in 1977 and it's a great story made to be appreciated by the masses. And in trying to achieve that goal, it really wasn't intellectually challenging. We have continued downhill from there, and now we are at 16-second stingers on TikTok and YouTube. So, the way I see it, things are not balancing out. Worse, people in the USA elected D.J. Trump because somehow they couldn't understand how this real-world Emperor Palpatine was the bad guy.
I don't think you got the point of the article? It is saying that we as wise humans know (sometimes) when to stop optimizing for a goal, due to the negative side effects. AIs (and as some other people have pointed out corporations) do not naturally have this line in their head, and we must draw such lines carefully and with purpose for these superhuman beings.
One recent HN comment [0] comparing corporations and institutions to AI really stuck with me - those are already superhuman intelligences.
[0] https://news.ycombinator.com/item?id=43580681
A_D_E_P_T|10 months ago
Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.
As Dwarkesh once asked:
> One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.
> Shouldn’t we be expecting that kind of stuff?
I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketingslop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.
Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans. And this hasn't changed at all over the past five years. Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.
More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.
groby_b|10 months ago
They are, however, going to enable credulous idiots to drive humanity completely off a cliff. (And yes, we're seeing that in action right now). They don't need to be independent agents. They just need to seem smart.
ben_w|10 months ago
A useless AI isn't a threat: nobody will use it.
LLMs, as they exist today, are between these two. They're competent enough to get used, but will still give incorrect (and sometimes dangerous) answers that the users are not equipped to notice.
Like designing US trade policy.
> Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.
What does the latter have to do with the former?
> Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans.
Why would the destruction of humanity need to use a novel mechanism, rather than a well-known one?
> And this hasn't changed at all over the past five years.
They're definitely different now than 5 years ago. I played with the DaVinci models back in the day, nobody cared because that really was just very good autocomplete. Even if there's a way to get the early models to combine knowledge from different domains, it wasn't obvious how to actually make them do that, whereas today it's "just ask".
> Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.
And write code. Not great code, but "it'll do" code. And use APIs.
> More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.
While I'd agree they lack the competence to do so, I don't see how this matters. Humans are lazy and just tell the machine to do the work for them, give themselves a martini and a pay rise, then wonder why "The Machine Stops": https://en.wikipedia.org/wiki/The_Machine_Stops
The human half of this equation has been shown many times in the course of history. Our leaders treat other humans as machines or as animals, give themselves pay rises, then wonder why the strikes, uprisings, rebellions, and wars of independence happened.
Ironically, the lack of imagination of LLMs, the very fact that they're mimicking us, may well result in this kind of AI doing exactly that kind of thing even with the lowest interpretation of their nature and intelligence — the mimicry of human history is sufficient.
--
That said, I agree with you about the limitations of using them for research. Where you say this:
> I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.
I had a similar experience with NotebookLM, where I put in one of my own blog posts and it missed half the content and re-interpreted half the rest in a way that had nothing much in common with my point. (Conversely, this makes me wonder: how many humans misunderstand my writing?)
FollowingTheDao|10 months ago
Funny, that is what my father taught me when I was 12 because we had compassion. What is it with glorifying all these logic loving Spock like people? Don't you know Captain Kirk was the real hero of Star Trek? Because he had compassion?
It is no wonder the Zizians were birthed from LW.
khazhoux|10 months ago
I swear that’s what lesswrong posters see every day in the mirror.
jjcob|10 months ago
Anthropic is apparently starting to notice that their work may pose a danger to others. I'm not sure what they are referring to.
[1]: https://www.youtube.com/watch?v=KZUlf7quu3o
cubefox|10 months ago
https://www.anthropic.com/news/anthropic-education-report-ho...
Isamu|10 months ago
Guide to Bow Tillering:
https://straightgrainedboard.com/beginners-guide-on-bow-till...
MrBuddyCasino|10 months ago
The AI angle is not even hypothetical: there is no attempt to describe or reason about a concrete "x leading to y", just "see, the same principle probably extrapolates".
There is no argument there that is sounder than "the high velocities of steam locomotives might kill you" that people made 200 years ago.
luc4sdreyer|10 months ago
This obviously seems silly in hindsight. Warnings about radium watches or asbestos sound less silly, or even wise. But neither had any solid scientific studies showing clear hazard and risk. Just people being good Bayesian agents, trying to ride the middle of the exploration vs. exploitation curve.
Maybe it makes sense to spend some percentage of AI development resources on trying to understand how they work, and how they can fail.
iNic|10 months ago
So it is with AI. Except corps are made of people, who work at people speeds, have vague morals, and are tied to society in ways AI might not be. AI might also be able to operate faster and with less error. So extra care is required.
DrSiemer|10 months ago
To me it just seems like the same old knee-jerk luddite response people have had to any powerful new technology that challenges the status quo since the dawn of time. The calculator did not erase math wizards, the television did not replace books, and so on. It just made us better, faster, more productive.
Sometimes there is an adjustment period (we still haven't figured out how to deal with short dopamine hits from certain types of entertainment and social media), but things will balance themselves out eventually.
Some people may go full-on Wall-E, but I for one will never stop tinkering, and many of my friends won't either.
The things I could have done if I had had an LLM as a kid... I think I've learned more in the past two years than ever before.
hacb|10 months ago
The major difference is that in order to use a calculator, you need to know and understand the math you're doing. It's a tool you can work with. I always had a calculator for my math exams and I always had bad grades :)
You don't have to know how to program to ask ChatGPT to build yet another app for you. It's a substitute for your brain. My university students get good grades on their take-home exams, but can't spot an off-by-one error in a three-line Go for loop during an in-person exam.
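For the sake of illustration, here is the kind of bug being described. This is a hypothetical example (the comment above doesn't show the actual exam code): a loop that means to sum the first three elements but reads a fourth.

```go
package main

import "fmt"

func main() {
	// Intent: sum the first 3 elements of nums.
	// Bug: `i <= 3` iterates 4 times (indices 0..3), an off-by-one.
	nums := []int{2, 4, 6, 8}
	sum := 0
	for i := 0; i <= 3; i++ { // should be i < 3
		sum += nums[i]
	}
	fmt.Println(sum) // prints 20, not the intended 12
}
```

With a slice of exactly length 4 the bug silently produces a wrong sum; with a shorter slice it would panic with an index-out-of-range error, which is the only reason many people ever notice it.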
Tistron|10 months ago
We have tools to help us with that, and maybe it isn't a big loss? And they also bring new arenas and abilities.
And maybe in the future we will be worse at critical thinking (https://news.ycombinator.com/item?id=43484224), and maybe it isn't a big loss? It is hard to imagine what new abilities and arenas will emerge. Though I think that critical thinking is a worse loss than memory and mental arithmetic. Though, also, we are probably a lot less good at it than we think we are, generally.
fragmede|10 months ago
But it did. Quick, what's 67 * 49? A math wiz would furrow their brow for a second and be able to spit out an answer, while the rest of us have to pull out a calculator. When you're doing business in person and have to move numbers around, having to stop and use a calculator slows you down. If you don't have a role where that's useful, then it's not a needed skill and you don't notice it's missing, like riding a horse, but that doesn't mean the skill itself wouldn't be useful to have.
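As an aside, the trick a math wiz would likely use here is rounding 49 up to 50 and correcting afterwards, which turns the problem into one easy multiplication and one subtraction:

```go
package main

import "fmt"

func main() {
	// 49 = 50 - 1, so 67*49 = 67*50 - 67 = 3350 - 67.
	fmt.Println(67*50 - 67) // 3283
	fmt.Println(67 * 49)    // 3283, same answer the direct way
}
```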
dsign|10 months ago
I don't think that's the argument the article was making. It was, to my understanding, a more nuanced question about if we want to destroy or severely disturb systems at equilibrium by letting AI systems infiltrate our society.
> Sometimes there is an adjustment period (we still haven't figured out how to deal with short dopamine hits from certain types of entertainment and social media), but things will balance themselves out eventually.
One can zoom out a little bit. The issue didn't start with social media, nor AI. "Star Wars: A New Hope" is, to my understanding, an incredibly good film. It came out in 1977 and it's a great story made to be appreciated by the masses. And in trying to achieve that goal, it really wasn't intellectually challenging. We have continued down that hill for a while, and now we are at 16-second stingers on TikTok and YouTube. So, the way I see it, things are not balancing out. Worse, people in the USA elected D.J. Trump because somehow they couldn't understand how this real-world Emperor Palpatine was the bad guy.