top | item 33917171

Ask HN: Is the AI Apocalypse Imminent?

26 points| vbi8iBEX | 3 years ago

I feel there is a lot of FUD surrounding AI, especially on Twitter, where people don't understand the technology. I've seen posts about software engineer jobs vanishing in 5 years thanks to ChatGPT. Artists are terrified that they will lose their livelihood thanks to Stable Diffusion, etc.

Most of this chatter I have seen on Twitter, but there are artist communities battling AI communities on Reddit as well.

I personally think that everyone will just need to adapt as it has always been, but I am curious what others on HN think.

Are professionals in for a rude awakening? Are artists and software engineers and writers really going to be replaced with AI?

Will software engineering involve product managers talking to ChatGPT instead of Engineers, and if we're still in the mix, will our salaries be substantially reduced?

Obviously the technology will have SOME impact, even if there is no "apocalypse", so how should professionals be viewing this?

What are the best ways to prepare for the inevitable shift? And what should the message to the scared / confused public be?

42 comments

[+] Rzor|3 years ago|reply
I mean, I'm still waiting for the impact of Tabnine/Copilot that people talked about last year. I'm in the camp where a charitable view is needed: AI will actually give us more power. You still need to know what it is doing and be wary of any "overconfidence/hallucination". Let's say that it gets really good and a senior developer could now do the job of 5-10 other devs, God knows how; that could also mean that a lot of small to medium companies would be able to immensely pump their productivity, more with less and all, and perhaps more growth and openings.

Maybe I'm not seeing the leopard that will eventually eat my face, but in the worst case scenario, I don't think it's happening quite that fast, and if it is, it's probably more boring than we are imagining, unseen consequences and all. It's just that hard to predict the future.

[+] steve_adams_86|3 years ago|reply
> "overconfidence/hallucination"

For sure. I asked ChatGPT to generate code for a certain embedded device to interface with a specific pH sensor, and the result was totally bananas.
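To give a flavor of where this kind of request goes wrong: the bus read is the easy, generic part; the sensor-specific calibration is what a model tends to guess at. A minimal sketch, with made-up calibration voltages (not from any real part's datasheet):

```python
# Hypothetical two-point calibration for an analog pH probe. The voltage
# values below are invented for illustration; a real sensor's datasheet
# (the niche, bespoke part) would supply the actual numbers.

NEUTRAL_VOLTAGE = 1.50   # assumed volts at pH 7.0
ACID_VOLTAGE = 2.03      # assumed volts at pH 4.0

def voltage_to_ph(voltage: float) -> float:
    """Linear interpolation between the two calibration points."""
    slope = (4.0 - 7.0) / (ACID_VOLTAGE - NEUTRAL_VOLTAGE)
    return 7.0 + slope * (voltage - NEUTRAL_VOLTAGE)

print(round(voltage_to_ph(1.50), 2))  # 7.0 (neutral calibration point)
print(round(voltage_to_ph(2.03), 2))  # 4.0 (acid calibration point)
```

The math is trivial; the point is that the constants encode device knowledge the model simply doesn't have, which is exactly where "totally bananas" output comes from.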

There is no reason to believe that niche and bespoke requirements will be swallowed up in an efficient, reliable, repeatable package in the very near future. It can be taught, and will be, but I don't think it'll be too snappy.

> immensely pump their productivity, more with less and all

The more I use ChatGPT, the more I think this is the case. There are too many loose ends to tie, too many semantics to clarify and realign, and so on. It's immensely powerful today, but perhaps we sometimes forget how much we've learned and how far we've come.

I felt a little doomed on my first go with ChatGPT, but now the shock has worn off. I'm optimistic that it'll be a useful tool. The more I play with it and the better I get at prompting it, the more I think, alright, I can actually see this being useful one day. It might actually be a useful way to rubber-duck problems, generate stubs, review rudimentary implementations for improvements, etc.

Given its current limitations and how deeply they are entrenched in its design, I suspect it will require a significant breakthrough to overcome them and truly replace people.

[+] w4ffl35|3 years ago|reply
> could also mean that a lot of small to medium companies would be able to immensely pump their productivity, more with less and all, and perhaps more growth and openings.

This is where my mind has been as well - I feel we're heading towards a future that will benefit small businesses - but I don't want to be overly optimistic.

[+] theGnuMe|3 years ago|reply
I see these things right now as power tools. But that view might change. They also let people be creative without necessarily needing an army of others.

So take video game art. It looks like you could train an AI to generate all of that, and if you can't yet, it will happen soon. That will probably empower current digital artists and give them more capacity. It will also allow smaller shops to produce higher-quality art, perhaps with a creative director running prompts through the AI model instead of hiring digital artists. However, at some point the whole thing becomes quite complex to manage, so you may have artists anyway.

At some point we will probably get prompts to movies as well.

Prompt-to-SQL will probably happen, as will prompt-to-code (it already has). This will first be code that a dev refines. It can be dangerous because of subtle implications, but that will eventually work itself out. So expect the same pattern for dev work as with digital artists. However, at some point the whole thing becomes quite complex to manage, so you may have devs anyway.
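A toy illustration of the "subtle implications" that make dev refinement necessary, sketched with a hypothetical orders table (all names invented): a plausible-looking generated query silently drops NULLs.

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "open"), (2, "cancelled"), (3, None)])

# What a prompt like "all orders that aren't cancelled" might plausibly yield:
naive = conn.execute(
    "SELECT id FROM orders WHERE status != 'cancelled' ORDER BY id").fetchall()

# The refinement a dev still makes: NULL != 'cancelled' is not true in SQL,
# so rows with no status vanish from the naive query.
fixed = conn.execute(
    "SELECT id FROM orders WHERE status IS NULL OR status != 'cancelled' "
    "ORDER BY id").fetchall()

print(naive)  # [(1,)] -- order 3 is silently missing
print(fixed)  # [(1,), (3,)]
```

The naive query "works" on every demo row with a status, which is exactly why the bug survives until someone who understands three-valued logic reviews it.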

There will be prompt-based no-code solutions for business analysts as well. Will this replace the business analyst? Probably not. Will it allow you to do more with less? Probably. Will it scale? Maybe not; you still might need a bunch of analysts to wrangle all of the systems.

In any case scale and growth will probably mean you need more people unless you can design the overall system well.

So in some sense we all become managers with little AI bots doing the IC work.

[+] f0e4c2f7|3 years ago|reply
In the industrial revolution you suddenly had all this technology that could replace manual labor.

But it actually took a while for companies to adapt and make use of that.

Marc Andreessen talks about the idea that new technologies follow a cycle: first they're ignored, then people fight them, then they settle on calling people names for using them.

You can look back on the industrial revolution, or more recently the internet for an idea of what that pattern looks like. Some companies might adapt fairly fast but I suspect that will be the rarity.

Instead, what you'll have are small groups of individuals, highly leveraged by AI, coming in and making new products that wholesale replace non-AI companies. Some old companies will acquire new ones in time and survive; many won't.

One early example might be Lensa[0]. They use Stable Diffusion on their backend behind a paid iPhone app. Pretty simple stack there; they're not even training any models themselves. And yet, they're now doing $1M/day in revenue.

We're going to see a lot more of these.

Big companies will "try" too, but they'll mostly just have meetings and power grabs about trying. The 2020s are the decade of the startup.

[0] https://apps.apple.com/us/app/lensa-ai-photo-video-editor/id...

[+] randomNumber7|3 years ago|reply
Talking with a chatbot so that it produces code could still be considered programming, IMO. So even if it works perfectly, you still need programmers; the job will just be different.

So I think as a programmer you don't have to worry, but the other implications will probably be huge.

[+] drooby|3 years ago|reply
That’s how I see it.

What is the abstraction above writing domain-specific code? Writing ACs (acceptance criteria), aka "prompt" engineering.

I have always wanted more engineers to have this role. If product doesn’t know what they’re doing, going back and forth on faulty ACs is a huge cause of friction and costs the company way too much money.

Writing accurate and informative language that is also logically correct and encapsulates all of our desires... is damn hard, and AI will never be able to speak and feel for us.

[+] FlyingSnake|3 years ago|reply
AI/GPT will most likely end up being a great tool in our toolkit like the previous innovations. Using it as an adversary to reduce developer headcount will end up in disaster. Just look at 4GLs, RUP, NoCode and other failed paradigms from recent history.

AI would be a great addition to tools like VCS, RDBMS, CI/CD paradigms, and testing, and should help developers write better, more robust systems.

[+] jasfi|3 years ago|reply
I agree with you. There is more progress to be made in our lifetimes than we can ever hope to complete. AI will help tremendously, but I suspect we'll find that it's still not enough.

Tech tends to replace some jobs and create new ones. It'll be the same with AI. We'll work differently.

[+] pydry|3 years ago|reply
The industrial revolution didn't create 90% unemployment. It changed the nature of 90% of jobs but it didn't cause unemployment in and of itself. This doesn't terrify me.

What was truly terrifying about the industrial revolution was the way it upgraded the horrors of warfare to an entirely new level and precipitated a world war brought about by the shifts in the relative power of dominant empires at the time.

I don't think AI will put us out of a job. I do think it could trigger terrifying new kinds of warfare and oppression.

I reckon there will be one or more Ottomans - dominant world powers who do not adjust to the new technological realities and get crushed as a result.

[+] rapjr9|3 years ago|reply
One problem with the current rash of AIs in the news is that they are only backward-looking. They only "know" what was in the training data up to the point they were trained. So to incorporate new things you have to retrain them, which could be an enormous expense to do continually. Also, if people start to rely on such AIs, then the training data disappears and is replaced by AI output, so the whole thing is likely to get locked in a loop and never change, and biases will get locked in too. If you want to generate a dance video, say, and the AI was trained before Michael Jackson was around, you'll probably never be able to generate a Michael Jackson-style video.

However, there is also human input and guidance that is the source from which the AI generates output, so the quality and usefulness of the output will depend on how precisely the input can be specified. I agree with others (on other forums) who suggest that this very quickly turns into something like a programming language, where specificity becomes important for generating good content, which requires... human skill. Still, that seems useful: if not a replacement for people, an enhancer of people.
[+] thunfischtoast|3 years ago|reply
I don't see the imminent main threat for professionals, to be honest. Even today, the worth of a professional's work (be it an artist's or a developer's) is not the ability to look up how to do something specific or to churn out good-looking but random results, but to create results that are coherent over the long term and fit the customer's needs.

Example: AI can now generate great single pieces of concept art, but in my opinion it will still take some time until it can do so coherently for a full project where everything needs to fit together. In the same manner, a developer needs to write code that fits into existing systems. Both can of course profit from AI already today, but they are not so easily replaced.

The way bigger threat lies in all the social aspects of the internet. It's already hard to weed out all the crap when I want to find something specific, e.g. on YouTube. I imagine it will be even harder when I need to filter through the low-quality generated content that will be uploaded just for the numbers. I also see non-curated online discussion platforms and comment sections dying: how am I supposed to have a proper discussion when every time I take a stance there are instantly 10 bots screaming back at me?

[+] 082349872349872|3 years ago|reply
Compared to the early days of the net (4 decades ago), there's already an amazing amount of low-quality generated content that is being uploaded just for the numbers. Will in-silico generation make that worse than wetware generation? At the moment I'm somewhat optimistic that as the machine generated content now has a better grasp of spelling and grammar, it may eventually prove more coherent than any of the least-coherent wetware sources.

In the best case, people with good ideas but poor articulation could use AI (as a far far cheaper substitute for wetware legal counsel) to put their arguments in succinct and lucid forms.

[+] Madmallard|3 years ago|reply
To me it seems like ChatGPT will amplify the power of existing programmers. Think how much more productive you could be if you could basically speedrun large portions of the necessarily tedious, long, simple, or boilerplate sections of code bases. Likewise with ultra-tedious things like connecting to AWS or other third-party middleware.
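The boilerplate in question is usually glue like this: retry-with-backoff around a flaky third-party call. A generic sketch (nothing here is AWS-specific; the flaky function is a stand-in for any middleware call):

```python
import time
from functools import wraps

# The kind of tedious, well-trodden glue code a model can draft in seconds:
# a retry decorator with exponential backoff for transient failures.

def retry(attempts=3, base_delay=0.0):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except ConnectionError:
                    if i == attempts - 1:
                        raise  # out of attempts: surface the error
                    time.sleep(base_delay * 2 ** i)
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3)
def flaky_fetch():
    """Stand-in for a third-party API call that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = flaky_fetch()
print(result, calls["n"])  # ok 3
```

None of this is intellectually hard, which is exactly why having it autocompleted is a pure time win: the programmer's judgment goes into attempt counts and which exceptions are actually transient.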
[+] gaurangt|3 years ago|reply
I see systems like ChatGPT and Stable Diffusion as "tools" that would aid us in our jobs.

Software Engineers or artists' jobs aren't going to "vanish" instantaneously because of AI; instead, it would make our lives easier.

Low-level, menial, entry-level tasks like writing basic, repetitive code or basic design work will vanish or slowly be phased out. Higher-level functions which require a lot of creativity and critical thinking won't be replaced by AI, at least for a VERY long time.

As it is currently, ChatGPT behaves more like a programmer who is just learning how to code. Just like Photoshop or Figma is a tool for designers, Software Engineers will soon start using ChatGPT to automate certain mundane tasks.

We are already doing that on sites like StackOverflow, where we go to find regexes and the like.
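That StackOverflow workflow in miniature: borrow a pattern, then verify it against your own edge cases before trusting it. A sketch using a common community-style version-string regex (not an authoritative spec):

```python
import re

# A borrowed-looking pattern for "major.minor.patch" version strings.
# Whether its strictness is right for your inputs is the part you, not
# the source, have to verify.
VERSION = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

def parse_version(s):
    m = VERSION.match(s)
    return tuple(map(int, m.groups())) if m else None

print(parse_version("1.24.0"))  # (1, 24, 0)
print(parse_version("1.24"))    # None -- an edge case the pattern rejects
print(parse_version("v1.2.3"))  # None -- leading 'v' also rejected
```

Whether the found snippet comes from StackOverflow or ChatGPT, the job is the same: the snippet is a starting point, and the edge-case checks are the actual engineering.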

[+] fsloth|3 years ago|reply
Generally, automation so far has not replaced labour but increased productivity; by removing rote work it changes the nature of work rather than removing the need for human work.

The future is not about everyone becoming unemployed. The future is one where everyone has their personal army of secretaries.

I bring up the analogue of the Renaissance master painter, who often had a studio of apprentices. When preparing a huge painting, instead of doing it all themselves, they let their apprentices paint the easy bits, then did the hard parts (if needed) and signed the work.

The downside, of course, is that the need for apprentices shrinks; but then again, everyone can have their own art studio (where previously only a few superstars could afford one).

[+] wnkrshm|3 years ago|reply
Imagine you're at an SME and your CEO, who doesn't know how difficult things are to implement, asks ChatGPT about certain topics and permanently, secretly believes parts of what it says.

That's the real nightmare, not which part of the implementation goes where.

[+] chriskanan|3 years ago|reply
I think we are going to move toward a world where many jobs have a co-pilot and some abilities will be democratized in that less personal investment will be needed to become highly proficient from an output perspective. In the slightly longer term, I think we will move toward interacting with machines similar to how people interacted with "the computer" in Star Trek TNG, which really seems like just a really advanced co-pilot.
[+] PaulHoule|3 years ago|reply
I was a bit of a hacker in high school and when I was in college I got enlisted by a friend to "steal" somebody else's CS 101 homework.

It was no problem finding an unprotected home directory with a solution in it, but we found the program didn't work. In the end we had to not only modify the program enough not to get caught, but also fix the bugs in it.

Little did I know what good preparation this would be for my career in software development!

Devastated by the impact of a $100 million project failure, I took an underpaid job at a small but proud web development shop based at a Superfund site, where I completed roughly 20 projects that other programmers had started in about 9 months. It was the most acute example of something I'd experienced a lot in my career, both before and after: somebody, anybody from a complete fresher to a certified genius to somebody getting a master's in A.I. because they really needed the intelligence, built something they couldn't finish and left behind a product that looked promising but needed serious rework to get it in front of customers. (... Then we ran out of projects, I cracked, and two days later got a job at the other web development shop that was landing all the new contracts we were failing to get.)

I see GPT-3 as that fresher programmer who can make things that look promising to management but in the end turn out to need a huge amount of rework to put in front of customers. For a time I was greatly resentful that somebody would seem to do the "20% of the work that gets 80%" of the results while I'd do the "80% of the work that gets 20% of the results" and have people complain I took too long to do things, even during my annus mirabilis at Spider Graphics or the many other times I'd saved a project that had been circling the drain for years.

GPT-3 has a hypnotic ability to get away with making mistakes which I think is a product of it being trained to produce the token with the highest probability. Like Andy Warhol, it is actively anti-creative.

Fixing the hard-to-find mistakes that it makes will be a maddening job and people will always be looking for ways to push the bubble out from under the rug and not realize the machine they are trying to build is impossible for fundamental logical reasons. I think of the dialogs of Achilles and the Tortoise from

https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach

where they are trying to build impossible machines and repeatedly failing because they have no idea that what they're trying to do is impossible. I've had people say GEB is a critique of the old symbolic AI but neural networks don't repeal the fundamental results of mathematical logic and computer science.

Sure, you can escape Gödel's theorem by building a system that doesn't get the right answer but then you have a system that doesn't get the right answer.

[+] sinuhe69|3 years ago|reply
That’s exactly what I am going to tell my son. One of the fundamental qualities every human engineer needs is the ability to communicate effectively and to read and pick up the work of others, then fix or improve it. It’s the process of interactive development/engineering that bears fruit in valuable products, not automatic machine translation.
[+] theGnuMe|3 years ago|reply
This would be a great teaching trick. Keep broken solutions in your home directory (as the prof or TA) for enterprising students to find and fix.
[+] w4ffl35|3 years ago|reply
Excellent points, thanks for sharing these thoughts.
[+] ineedausername|3 years ago|reply
You don't need to prepare; you just adapt, and this happens subconsciously. We all become the machine and it becomes all of us, which is inevitable.
[+] theGnuMe|3 years ago|reply
That's a neat idea: you see Google's GPT and ChatGPT becoming knowledge OSes in general, which we interact with and build on. Does your app work on Google's GPT, and how do you port it to ChatGPT, etc.?
[+] licebmi__at__|3 years ago|reply
It's not FSC (full self-coding), but I definitely see Copilot increasing my output by helping me write the boring parts. I'm thinking that maybe we will see an impact kind of like the industrial revolution: knowledge workers won't be obsolete, but the value of their work might be greatly devalued.
[+] seydor|3 years ago|reply
Each time it's the same FUD, though this time the fear factor is bigger than ever before; not because things are awful, but because people have been conditioned to yell every time there is something new.

GPTs are an explosive multiplier for productivity. They will become the new baseline and people will be asked to do more with them. Those that can't keep up will lose jobs etc, but not the majority. We are in for a huge jump in productivity.

The best we can do is educate the public about the existence of these tools. It's crazy that people have been trained to dismiss them because of bad press over the past 10 years. We can't really stop technology, so we had better join the ride.

[+] lm28469|3 years ago|reply
> We can't really stop technology so we better join the ride

I see more and more people saying stuff like that on HN, and I feel like we've collectively given up on the idea of regulation &c.

It's our collective job to decide what's good or not and what we want or not. Saying "eh, it's technology, we _have_ to go with it" seems really dumb and a fairly new thing.

Thinking like that means weaponised autonomous robots are just the logical next step of "progress", for example. Or that mass surveillance is inevitable, and being so would even make it desirable, by that logic.

[+] TheOtherHobbes|3 years ago|reply
I don't think GPTs will be an explosive multiplier because quantity is not the same as quality.

More crudely, GPTs make it easier to produce disposable crap automatically by applying statistical data compression to large training sets.

They're purely mechanical and have no concept of semantics or subjectivity.

So if you ask GPT to write code, you may get something that mostly works, or you may get garbage. And if you're lucky enough to get something that mostly works you'll still need to refine and test it manually.
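The "mostly works" failure mode in miniature: a plausible generated helper with a classic boundary bug, and the manual check that catches it. Both functions are invented for illustration.

```python
# Hypothetical "generated" pagination helper: reads fine at a glance,
# but integer division silently drops the final partial page.
def paginate_generated(items, page_size):
    return [items[i * page_size:(i + 1) * page_size]
            for i in range(len(items) // page_size)]

# The manual refinement: step through by page_size so the remainder survives.
def paginate_fixed(items, page_size):
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

data = list(range(7))
print(paginate_generated(data, 3))  # [[0, 1, 2], [3, 4, 5]] -- item 6 is lost
print(paginate_fixed(data, 3))      # [[0, 1, 2], [3, 4, 5], [6]]
```

The buggy version passes any quick eyeball test on evenly divisible input, which is exactly the "mostly works" trap: the testing effort doesn't go away, it just moves downstream of generation.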

Human coding has the same perpetually-almost-there issue. But humans can understand a specification and also understand unstated requirements. So the almost-there is more likely to be fit for purpose.

Humans can also take the initiative with new classes of problems.

The real danger is that AI-ifying everything will cripple that imaginative ability, because we'll all be settling for AI worse-is-better instead of aiming higher.

[+] sinuhe69|3 years ago|reply
I’m still in the process of working out how to keep a character/face consistent across a series of SD generations. It’s one of the most basic tasks artists have to handle every day.
[+] GoblinSlayer|3 years ago|reply
If webdevs are replaced with AI, maybe it will bother to write interoperable pages.