Ask HN: Is the AI Apocalypse Imminent?
26 points | vbi8iBEX | 3 years ago
Most of this chatter I have seen on Twitter, but there are artist communities battling AI communities on Reddit as well.
I personally think that everyone will just need to adapt as it has always been, but I am curious what others on HN think.
Are professionals in for a rude awakening? Are artists and software engineers and writers really going to be replaced with AI?
Will software engineering involve product managers talking to ChatGPT instead of Engineers, and if we're still in the mix, will our salaries be substantially reduced?
Obviously the technology will have SOME impact, even if there is no "apocalypse", so how should professionals be viewing this?
What are the best ways to prepare for the inevitable shift? And what should the message to the scared / confused public be?
Rzor | 3 years ago
Maybe I'm not seeing the leopard that will eventually eat my face, but in the worst case scenario, I don't think it's happening quite that fast, and if it is, it's probably more boring than we are imagining, unseen consequences and all. It's just that hard to predict the future.
steve_adams_86 | 3 years ago
For sure. I asked ChatGPT to generate code for a certain embedded device to interface with a specific pH sensor, and the result was totally bananas.
There is no reason to believe that niche and bespoke requirements will be swallowed up in an efficient, reliable, repeatable package in the very near future. It can be taught, and will be, but I don't think it'll be too snappy.
> immensely pump their productivity, more with less and all
The more I use ChatGPT, the more I think this is the case. There are too many loose ends to tie, too many semantics to clarify and realign, and so on. It's immensely powerful today, but perhaps we sometimes forget how much we've learned and how far we've come.
I felt a little doomed on my first go with ChatGPT, but now the shock has worn off. I'm optimistic that it'll be a useful tool. The more I play with it and the better I get at prompting it, the more I think: alright, I can actually see this being useful one day. It might actually be a useful way to rubber-duck problems, generate stubs, review rudimentary implementations for improvements, etc.
Given its current limitations and how deeply they are entrenched in its design, I suspect it will take a significant breakthrough to overcome them and truly replace people.
w4ffl35 | 3 years ago
This is where my mind has been as well - I feel we're heading towards a future that will benefit small businesses - but I don't want to be overly optimistic.
Haga | 3 years ago
[deleted]
theGnuMe | 3 years ago
So take video game art. It looks like you could train an AI to generate all of it, and if it can't yet, it will be able to soon. That will probably empower current digital artists and give them more capacity. It will also allow smaller shops to produce higher-quality art, perhaps with a creative director running prompts through the AI model instead of hiring digital artists. However, at some point the whole thing becomes quite complex to manage, so you may have artists anyway.
At some point we will probably get prompts to movies as well.
Prompt-to-SQL will probably happen, as will prompt-to-code (it already has). At first this will be code that a dev refines. It can be dangerous because of subtle implications, but that will eventually work itself out. So expect the same pattern for dev work as with digital artists. However, at some point the whole thing becomes quite complex to manage, so you may have devs anyway.
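The "code that a dev will refine" step can be sketched: treat whatever SQL the model emits as a draft and verify it against a throwaway fixture database before trusting it. The query string below is a stand-in for hypothetical model output, not anything a real model produced:

```python
import sqlite3

# A query as it might come back from a prompt-to-SQL model (hypothetical output).
candidate_sql = (
    "SELECT name, SUM(amount) AS total FROM orders "
    "GROUP BY name ORDER BY total DESC"
)

def verify(sql: str) -> list:
    """Run candidate SQL against an in-memory fixture DB.
    An exception or wrong rows here is the 'dev refines it' moment."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (name TEXT, amount REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?)",
                    [("alice", 10.0), ("bob", 5.0), ("alice", 2.5)])
    return con.execute(sql).fetchall()

rows = verify(candidate_sql)
# → [('alice', 12.5), ('bob', 5.0)]
```

The point is less the query than the workflow: the human keeps the verification loop, exactly the pattern the comment predicts.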
There will be prompt based no code solutions for business analysts as well. Will this replace the business analyst? Probably not, will it allow you to do more with less, probably. Will it scale, maybe not, you still might need a bunch of analysts to wrangle all of the systems.
In any case scale and growth will probably mean you need more people unless you can design the overall system well.
So in some sense we all become managers with little AI bots doing the IC work.
f0e4c2f7 | 3 years ago
It actually took a while for companies to adapt to earlier technologies and make use of them.
Marc Andreessen talks about the idea that new technologies follow a cycle: first they're ignored, then people fight them, then people settle on calling those who use them names.
You can look back on the industrial revolution, or more recently the internet for an idea of what that pattern looks like. Some companies might adapt fairly fast but I suspect that will be the rarity.
Instead what you'll have are small groups of individuals, highly leveraged by AI come in and make new products that wholesale replace non-ai companies. Some old companies will acquire new ones in time and survive, many won't.
One early example might be Lensa[0]. They run Stable Diffusion on their backend behind a paid iPhone app. Pretty simple stack, not even training any models themselves. And yet they're now doing $1M/day in revenue.
We're going to see a lot more of these.
Big companies will "try" too, but they'll mostly just have meetings and power grabs about trying. The 2020s are the decade of the startup.
[0] https://apps.apple.com/us/app/lensa-ai-photo-video-editor/id...
randomNumber7 | 3 years ago
So I think as a programmer you don't have to worry, but the other implications will probably be huge.
drooby | 3 years ago
What is the abstraction above writing domain-specific code? Writing ACs (acceptance criteria) - aka “prompt” engineering.
I have always wanted more engineers to have this role. If product doesn’t know what they’re doing, going back and forth on faulty ACs is a huge cause of friction and costs the company way too much money.
Writing language that is accurate, informative, logically correct, and encapsulates all of our desires is damn hard, and AI will never be able to speak and feel for us.
FlyingSnake | 3 years ago
AI would be a great addition to tools like VCS, RDBMS, CI/CD, and testing, and should help developers write better, more robust systems.
jasfi | 3 years ago
Tech tends to replace some jobs and create new ones. It'll be the same with AI. We'll work differently.
pydry | 3 years ago
What was truly terrifying about the industrial revolution was the way it upgraded the horrors of warfare to an entirely new level and precipitated a world war brought about by the shifts in the relative power of dominant empires at the time.
I don't think AI will put us out of a job. I do think it could trigger terrifying new kinds of warfare and oppression.
I reckon there will be one or more Ottomans - dominant world powers who do not adjust to the new technological realities and get crushed as a result.
rapjr9 | 3 years ago
thunfischtoast | 3 years ago
Example: AI can now generate great individual pieces of concept art, but in my opinion it will still take some time until it can do so coherently for a full project, where everything needs to fit together. In the same way, a developer needs to write code that fits into existing systems. Both can of course already profit from AI today, but neither is as easily replaced.
The far bigger threat lies in all the social aspects of the internet. It's already hard to weed out the crap when I want to find something specific, e.g. on YouTube. I imagine it will be even harder when I have to filter through low-quality generated content uploaded just for the numbers. I also see non-curated online discussion platforms and comment sections dying: how am I supposed to have a proper discussion when, every time I take a stance, ten bots instantly scream back at me?
082349872349872 | 3 years ago
In the best case, people with good ideas but poor articulation could use AI (as a far far cheaper substitute for wetware legal counsel) to put their arguments in succinct and lucid forms.
Madmallard | 3 years ago
gaurangt | 3 years ago
Software engineers' or artists' jobs aren't going to "vanish" instantaneously because of AI; instead, it will make our lives easier.
Menial, entry-level tasks like writing basic, repetitive code or doing simple design work will vanish or slowly be phased out. Higher-level work that requires a lot of creativity and critical thinking won't be replaced by AI, at least for a VERY long time.
As it is currently, ChatGPT behaves more like a programmer who is just learning how to code. Just like Photoshop or Figma is a tool for designers, Software Engineers will soon start using ChatGPT to automate certain mundane tasks.
We already do this on sites like StackOverflow, where we find regexes or stuff like that.
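In that spirit, a regex pulled from StackOverflow or suggested by ChatGPT is only trustworthy after it's been tested. A minimal sketch, where the pattern itself is an illustrative stand-in for whatever the tool suggests:

```python
import re

# A pattern as ChatGPT or StackOverflow might suggest it (illustrative):
# match simple identifiers like variable names.
suggested = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

# Treat the suggestion like any other untested code: pin its behavior with cases.
cases = {"snake_case": True, "_private": True, "2fast": False, "": False, "ok2": True}
results = {s: bool(suggested.match(s)) for s in cases}
assert results == cases  # only then does it go into the codebase
```

The tool supplies the draft; the mundane-but-essential work of checking edge cases stays with the engineer.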
fsloth | 3 years ago
The future is not about everyone becoming unemployed. The future is one where everyone has their personal army of secretaries.
I bring up the analogy of the Renaissance master painter, who often had a studio of apprentices. When preparing a huge painting, instead of doing it all themselves, they let their apprentices paint the easy bits, then did the hard parts (if needed) and signed the work.
The downside is of course that the need for apprentices shrinks - but then again, everyone can have their own art studio (where previously only a few superstars could afford one).
wnkrshm | 3 years ago
That's the real nightmare, not which part of the implementation goes where.
chriskanan | 3 years ago
PaulHoule | 3 years ago
It was no problem finding an unprotected home directory with a solution in it, but we found the program didn't work. In the end we had to not only modify the program enough to not get caught, but also fix the bugs in it.
Little did I know what good preparation this would be for my career in software development!
Devastated by the impact of a $100 million project failure, I took an underpaid job at a small but proud web development shop based at a Superfund site, where in about 9 months I completed roughly 20 projects that other programmers had started. It was the most acute example of something I'd experienced a lot in my career, both before and after: somebody, anybody from a complete fresher to a certified genius to somebody getting a masters in A.I. because they really needed the intelligence, built something they couldn't finish and left behind a product that looked promising but needed serious rework to get it in front of customers. (... Then we ran out of projects, I cracked, and two days later got a job at the other web development shop that was landing all the new contracts we were failing to get.)
I see GPT-3 as that fresher programmer who can make things that look promising to management but in the end turn out to need a huge amount of rework to put in front of customers. For a time I was greatly resentful that somebody would seem to do the "20% of the work that gets 80%" of the results while it seemed I'd do the "80% of the work that gets 20% of the results" and have people complain I took too long to do things, even during my annus mirabilis at Spider Graphics or the many other times I'd saved a project that had been circling the drain for years.
GPT-3 has a hypnotic ability to get away with making mistakes which I think is a product of it being trained to produce the token with the highest probability. Like Andy Warhol, it is actively anti-creative.
Fixing the hard-to-find mistakes that it makes will be a maddening job and people will always be looking for ways to push the bubble out from under the rug and not realize the machine they are trying to build is impossible for fundamental logical reasons. I think of the dialogs of Achilles and the Tortoise from
https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach
where they are trying to build impossible machines and repeatedly failing because they have no idea that what they're trying to do is impossible. I've had people say GEB is a critique of the old symbolic AI but neural networks don't repeal the fundamental results of mathematical logic and computer science.
Sure, you can escape Gödel's theorem by building a system that doesn't get the right answer but then you have a system that doesn't get the right answer.
sinuhe69 | 3 years ago
theGnuMe | 3 years ago
w4ffl35 | 3 years ago
ineedausername | 3 years ago
theGnuMe | 3 years ago
licebmi__at__ | 3 years ago
tluyben2 | 3 years ago
[0] https://twitter.com/luyben/status/1600663169353015297
seydor | 3 years ago
GPTs are an explosive multiplier for productivity. They will become the new baseline and people will be asked to do more with them. Those that can't keep up will lose jobs etc, but not the majority. We are in for a huge jump in productivity.
The best we can do is educate the public about the existence of these tools. It's crazy that people have been trained to dismiss them because of bad press over the past 10 years. We can't really stop technology, so we'd better join the ride.
lm28469 | 3 years ago
I see more and more people saying things like that on HN, and I feel like we've collectively given up on the idea of regulation &c.
It's our collective job to decide what's good or not and what we want or not. Saying "eh, it's technology, we _have_ to go with it" seems really dumb and is a fairly new attitude.
By that logic, weaponised autonomous robots are just the logical next step of "progress", for example. Or mass surveillance is inevitable, and its inevitability would even make it desirable.
TheOtherHobbes | 3 years ago
More crudely, GPTs make it easier to produce disposable crap automatically by applying statistical data compression to large training sets.
They're purely mechanical and have no concept of semantics or subjectivity.
So if you ask GPT to write code, you may get something that mostly works, or you may get garbage. And if you're lucky enough to get something that mostly works, you'll still need to refine and test it manually.
Human coding has the same perpetually-almost-there issue. But humans can understand a specification and also understand unstated requirements. So the almost-there is more likely to be fit for purpose.
Humans can also take the initiative with new classes of problems.
The real danger is that AI-ifying everything will cripple that imaginative ability, because we'll all be settling for AI worse-is-better instead of aiming higher.
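The "refine and test it manually" step above can be made concrete: wrap whatever the model emits in tests before trusting it. The function below is a hypothetical stand-in for model output, chosen to show the kind of edge case a human reviewer would check:

```python
# A function as a GPT might emit it (hypothetical output). It "mostly works",
# but a reviewer still has to confirm edge cases like the empty list.
def moving_average(xs, window):
    if window <= 0:
        raise ValueError("window must be positive")
    # For each valid start index, average the slice of length `window`.
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window + 1)]

# The manual-testing step is where the human catches what the model glossed over.
assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
assert moving_average([5], 1) == [5.0]
assert moving_average([], 3) == []  # range(-2) is empty, so this holds
```

Whether the generated draft is fit for purpose only becomes known after exactly this kind of check, which is the comment's point about humans understanding unstated requirements.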
sinuhe69 | 3 years ago
GoblinSlayer | 3 years ago