I’ll do the Minority Report here: I loved the article. Its point is that rich people hyping AI for their own enrichment have somewhat shut down rational cost-vs-benefit arguments, the costs being: energy use, environmental impact of using environmentally unfriendly energy sources out of desperation, water pollution from byproducts of electronics production and recycling and from water use in data centers, diverting money from infrastructure and social programs, putting more debt stress on society, etc. I have been a paid AI practitioner since 1982, so I appreciate the benefits of AI - it is just that I hate the almost religious tech belief that real AI will emerge from exponential cost increases in LLM training and inference for essentially linear gains.
I get that some lazy-ass people have turned vibe coding and development into what I consider an activity sort of like mindlessly scrolling social media.
ughitsaaron|6 days ago
andai|5 days ago
addled|5 days ago
I naturally have a hard time stopping when almost done with something, but with AI everything feels "close" to a big breakthrough.
Just one more turn... Until suddenly it's way later than I thought and I hardly have time to interact with my family.
boxedemp|6 days ago
Where are they?
Are we sure that's not a misunderstanding of the terminology? Artificial diamonds, such as cubic zirconia, are not diamonds, and nobody thinks they are. 'Artificial' means it's not the real thing. When will conscious, actual intelligence be called 'synthetic intelligence' instead of 'artificial'?
Incidentally, this comment was written by AI.
grogers|6 days ago
When computers have super-human level intelligence, we might be making similar distinctions. Intelligence IS intelligence, whether it's from a machine or an organism. LLMs might not get us there, but some machine eventually will.
andai|5 days ago
Synthetic sounds more neutral, aside from bringing microplastics to my mind.
I guess the field of artificial life has the same issue.
As another comment pointed out, you don't necessarily need consciousness for intelligence. And you don't need either of those for goal oriented behavior.
My favorite example is the humble refrigerator. (The old one, without the microchips!) It has a goal (target temperature), it senses its environment (current temperature), and takes action based on that (turn cooling on or off).
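The fridge's control loop is essentially a thermostat with hysteresis; a minimal sketch of the idea (target temperature, band width, and heat-exchange rates are made-up values, not anything from a real appliance):

```python
# Goal-directed behavior with no consciousness required: a thermostat
# with hysteresis. Values below are illustrative, not real fridge specs.
def thermostat_step(current_temp, cooling_on, target=4.0, band=1.0):
    """Decide whether the compressor should run.

    Hysteresis: start cooling above target + band, stop below
    target - band, and otherwise keep doing whatever we were doing.
    """
    if current_temp > target + band:
        return True           # too warm: start cooling
    if current_temp < target - band:
        return False          # cold enough: stop cooling
    return cooling_on         # inside the band: no change

# Crude environment: the compressor lowers the temperature,
# ambient heat raises it.
temp, cooling = 10.0, False
history = []
for _ in range(20):
    cooling = thermostat_step(temp, cooling)
    temp += -0.8 if cooling else 0.4
    history.append(round(temp, 1))
```

It "senses" (reads `current_temp`), it has a "goal" (`target`), and it "acts" (toggles the compressor), which is all the comment's definition asks for.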
A cuter example is the dandelion seed. It "wants" to fly. Obviously! So you can display goal-directed behavior simply as the result of natural forces moving through you. (Arguably electricity and glucose also fall in that category, but... yeah...)
LLMs, conscious or not, moved into that category this year, in a big way. (e.g. Opus and Codex routinely bypassing security restrictions in the pursuit of the goal.)
Does it really have goals, or does it merely appear to act as though it has them? Does it appear to act as though it has consciousness?
(I forget who said it: it won't really disrupt the global economic system, it will merely appear to do so ;)
Also, here I am! :)
palmotea|5 days ago
I haven't met him, but a famous (pre-ChatGPT) counterexample is Blake Lemoine:
> In June 2022, LaMDA gained widespread attention when Google engineer Blake Lemoine made claims that the chatbot had become sentient. (https://en.wikipedia.org/wiki/LaMDA).
It's also not uncommon here to see someone respond to a comment questioning the consciousness or sentience of LLMs with a question along the lines of "how do you know anyone is conscious/sentient?" They're not being direct about their beliefs (a kind of motte-and-bailey tactic, I believe), but the implication is that they think LLMs are sentient and bristle when someone suggests otherwise.
sshine|5 days ago
One can bypass the whole sentience discussion and say that AI stands for Automated Inference.
If actual, conscious intelligence were to manifest synthetically, as in silicon-based rather than carbon-based, it is a losing battle to convince people because of the philosophical “problem of other minds.”
If there is functional equivalence between meatspace intelligence and synthetic intelligence, it will surely have enough value to reinforce itself, philosophical debates aside.
tim333|5 days ago
It depends a bit on what you mean by conscious, but assuming it's human-like, then it incorporates a lot of feelings, vision, sound, thoughts and the like, things that are not really language. We do it with neurons and some chemicals, and I imagine you could do something like that with artificial neural networks and some computer version of the chemistry, but not with language alone.
mullingitover|5 days ago
I've always doubted it, but then again I've also been skeptical about claims that humans have these capabilities.
rickydroll|5 days ago
On terminology, I would argue for non-biological intelligence. People can be awfully bioist (biological racist).
jamesfinlayson|6 days ago
I saw someone on the news claiming this recently, but he ran an AI consultancy firm so I suspect he was trying to drum up business.
melagonster|6 days ago
People who declare that AGI is coming.
mattclarkdotnet|6 days ago
And nobody working in the space, whether as an ML/AI practitioner, a philosopher, or a cognitive scientist, thinks we know what consciousness is, or what is required to create it. So there would be no way to tell if an AI is conscious, because we haven't yet managed to reliably tell whether humans, or dogs, or chimpanzees or whales are conscious.
The claim that is often made is that more work on the current generation of AI tech will lead to AGI at a human or better level. I agree with Yann Lecun that this is unlikely.
pllbnk|5 days ago
georgeecollins|5 days ago
Robert Solow, Nobel Prize-winning economist, 1987.
kamaal|5 days ago
1)
What he means to say is: suppose you needed to get something done. You could ask AI to write you a Python script that does the job. Next time around you could use the same Python script. But that's not how people are using AI; they basically treat the prompt as the only source of input, and the output of the prompt as the job they want to get done.
So instead of reusing the Python script, they re-prompt the same problem again and again.
While this gives an initial productivity boost, you now arrive at a new plateau.
2)
The second problem is that ideally you would be using the Python script written once and improving it over time. An ever-improving script should eventually do most of your day job.
That's not happening. Instead, since re-prompting is common, people are now executing a list of prompts to get complex work done, and then making that a workflow.
So ideally there should be a never-ending productivity increase, but when you sell a prompt as a product, people use it as a black box to get things done.
A lot of this has to do with lack of automation/programming mindset to begin with.
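The "automation mindset" described above amounts to capturing the one-off answer as a reusable helper instead of re-prompting. A hypothetical sketch (the task, the column name "amount", and the sample data are all made up for illustration):

```python
"""Hypothetical sketch of the reuse-over-re-prompting idea: instead of
re-prompting "sum the amounts in this CSV" every time, keep the
generated helper and call it again."""
import csv
import io

def total_column(rows, column):
    """Sum a numeric column across CSV rows given as dicts,
    skipping blank cells."""
    return sum(float(row[column]) for row in rows if row[column])

# Once this exists as a script or module, "running the job again"
# is a function call, not a fresh prompt:
sample = io.StringIO("amount,who\n1.5,a\n2.5,b\n,c\n")
result = total_column(csv.DictReader(sample), "amount")
print(result)  # 4.0
```

The point is not this particular helper but the habit: each prompt's output becomes a durable artifact you can rerun and improve, rather than a black box you re-invoke from scratch.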
palmotea|5 days ago
> Robert Solow, Nobel Prize-winning economist, 1987.
Some skeptic was wrong in the past, therefore we should disbelieve every skeptic, forever.
That's the argument, right?
rubslopes|5 days ago
I just had a meeting yesterday where someone from the customer support team vibe-coded a solution in a few hours. The boss said, "Let's just give this away as a gift; this product is not our focus, and I want to show them how AI makes us work fast."
unknown|6 days ago
[deleted]
agnishom|5 days ago
Junior developers will find it harder to be hired and trained. The case for lesser-known artists and musicians is much worse. The scientific literature will be flooded with low-quality AI slop of questionable veracity. Drafts of good debut novels will be harder to find. When someone writes a love song, their romantic partner(s) will have to question whether it was LLM-generated. Nobody will be able to trust video footage of any kind, and everyone will have a much harder time telling what the truth is.
I don't think standard economic indicators are tuned to detect these externalities in the short to medium term.
palmotea|5 days ago
This. I think generative AI will mostly generate destruction. Not in the nuking cities sense, but in hollowing out institutions and social bonds, especially the complicated and large-scale kind that have enabled advanced civilization. In many ways, things will revert to a more primitive state: only really knowing people in your local vicinity (no making friends online, because it'll be mostly dead-internet bots out there), only really knowing the news you see yourself, more reliance on rumor and hearsay, removal of the ability for the little guy to challenge and disprove institutional propaganda (e.g. can't start a blog and put up some photos and have people believe your story about what happened), etc.
yunwal|5 days ago
I think most people will retreat into smaller spaces where they can rely on people to not deceive them. Everyone is moving to discord/group chats now for any sort of trustworthy information. This might be a good thing honestly. It was probably never good that we all got our information from the same place.
slopinthebag|5 days ago
[deleted]
lich_king|5 days ago
Hey, that's a legitimate engineering activity.