dual_dingo|3 years ago
The sad truth is that ChatGPT is about as good an AI as ELIZA was in 1966; it's just better (granted: much better) at hiding its total lack of actual human understanding. It's nothing more than an expensive parlor trick, IMHO.
GitHub Copilot? Great, now I have to perform the most mentally taxing part of developing software, namely understanding other people's code (or my own from 6 months ago...), while writing new code. I'm beyond thrilled ...
So, no, I don't have AI fatigue, because we have absolutely no AI anywhere. But I have a massive bullshit and hype fatigue that is getting worse all the time.
auctoritas|3 years ago
I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.
addcommitpush|3 years ago
> These arguments take the form, "I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X."
> [...]
> The criticisms that we are considering here are often disguised forms of the argument from consciousness. Usually if one maintains that a machine can do one of these things, and describes the kind of method that the machine could use, one will not make much of an impression.
Every time "learning machines" are able to do a new thing, there's a "wait, that's just mechanical, _real_ intelligence is the goalpost".
[0] https://www.espace-turing.fr/IMG/pdf/Computing_Machinery_and...
RivieraKid|3 years ago
It's important to note that this is your assumption which I believe to be wrong (for most people here).
latexr|3 years ago
Respectfully, that reads as needlessly combative within the context. It sounds like the blockchain proponents who say that the only people who are against cryptocurrencies are the ones who are “bitter for having missed the boat”.¹
It is possible and perfectly reasonable to identify problems in ChatGPT and similar technologies without feeling threatened. Simple example: someone who is retired and monetarily well off, whose way of living and sense of self worth are in no way affected by developments in AI, can still be critical and express valid concerns when these models tell you that it’s safe to boil a baby² or give other confident but absurdly wrong answers to important questions.
¹ I’m not saying that’s your intention, but consider that that type of rhetoric may be counterproductive if you’re trying to make another person understand your point of view.
² I passed by that specific example on Mastodon but I’m not finding it now.
rsynnott|3 years ago
For _what purpose_, tho? It's a good party trick, but its tendency to be confidently wrong makes using it for anything important a bit fraught.
liveoneggs|3 years ago
moron4hire|3 years ago
rcme|3 years ago
jvanderbot|3 years ago
In terms of closing the gap between AI hype and useful general purpose AI tools, no one can reasonably deny that it's an absolute quantum leap.
It's just not a daily driver for technical experts yet.
brookst|3 years ago
williamcotton|3 years ago
ChatGPT, when provided with a synthetic prompt, is reliably a synthesizer, or, to use the loaded term, a bullshitter.
When provided with an analytic prompt, it is reliably a translator.
Terms, etc: https://www.williamcotton.com/articles/chatgpt-and-the-analy...
jug|3 years ago
But yes indeed, there are many, many AI products launched during this era of rapid progress. Even kind of shoddy products can be monetized if they provide value over what we had before. I think the crowded market and all the bullshit and all the awesome, all at once, is a sign of very rapid progress in this space. It will probably not always be like this and who knows what we are approaching.
nunodonato|3 years ago
capableweb|3 years ago
If writing boilerplate becomes effortless, then you'll write more of it, instead of feeling the pain of writing it and trying to reduce it because you don't want to spend time on it.
And since Copilot was accepted as a way to help the developers on the teams, the increase in boilerplate has been immense.
I'm borderline pissed, but mostly at our own development processes, not at Copilot per se. But damn if I didn't wish it existed somehow, although it was inevitable it would at one point.
dual_dingo|3 years ago
What I have seen of it ranged from things that could nearly as well be handled by your $EDITOR's snippet functionality to things where my argument kicked in - I have to verify that this generated code does what I want, ergo I have to read and understand something not written by me. Paired with the at least somewhat legally and ethically questionable source of the training data, this is not for me.
naillo|3 years ago
sbilstein|3 years ago
morelisp|3 years ago
postexitus|3 years ago
traceroute66|3 years ago
I agree.
And the worst thing is that the bullshit hype comes round every decade or so, and people run around like headless chickens insisting that "this time it's different", and "this time it's the REAL THING".
As you say, first(ish) there was ELIZA. Then this, that, and everything else. Then Autonomy and all that dot-com era jazz. Now with compute becoming more powerful and more compact, any man and his dog can stuff some AI bullshit where it doesn't belong.
I have seen comments below on this thread where people talk about "well, it's closing the gap". The thing you have to understand is that the gap will always exist. Ultimately you will always be asking a computer to do something. And computers are dumb. They are and will always be beholden to the humans that program them and the information that you feed them. The human will always have the upper hand at any task that requires actual intelligence (i.e. thoughtful reasoning, adapting to rapidly changing events, etc.).
lordfrito|3 years ago
This. To answer the OPs question, this is what I'm fatigued about.
I'm glad we're making progress. It's a hell of a parlor trick. But the hype around it is astounding considering how often its answers are completely wrong. People think computers are magic boxes, and so we must be just a few lever pulls away from making it correct all the time.
Or maybe my problem is that I've overestimated the average human's intelligence. If you can't tell ChatGPT apart from a good con-man, can we consider the Turing test passed? It's likely time for a redefinition of the Turing test.
Instead of AI making machines smarter, it seems that computers are making humans dumber. Perhaps the AI revolution is about dropping the level of average human intelligence to match the level of a computer. A mental race to the bottom?
I'm reminded of the old Rod Serling quote: We're developing a new citizenry. One that will be very selective about cereals and automobiles, but won't be able to think.
joyeuse6701|3 years ago
pixl97|3 years ago
sfpotter|3 years ago
dual_dingo|3 years ago
But defining "intelligence" is a philosophical question that doesn't necessarily have one answer for everything and everyone.
danaris|3 years ago
It is often frustrating that English has words with such different (but clearly related) definitions, as it can make it far too easy to end up talking past each other.
[0] https://en.wiktionary.org/wiki/artificial
Diggsey|3 years ago
version_five|3 years ago
HarHarVeryFunny|3 years ago
dual_dingo|3 years ago
BulgarianIdiot|3 years ago
NobleLie|3 years ago
I don't think there is any, because there is no functional model for what organic intelligence is or how it operates. There is a plethora of fascinating attempts/models, but only a subset propose that it is solely "statistical". And even if it were statistical, the implementation in the wet system is absolutely not like a gigantic list of vectorized (stripped of their essence) tokens.
brookst|3 years ago
thejammahimself|3 years ago
I think there's an argument to be made that AI is being used here to help you tackle the more trivial tasks so you have more time to focus on the more important and challenging ones. Though I recognise GitHub Copilot is legally questionable.
But yes, I agree with your overall point that AI has still not been able to 'think' like a human but rather can only still pretend to think like a human, and history has shown that users are often fooled by this.
exegete|3 years ago
davidkuennen|3 years ago
Hype or not, it's incredibly useful and has increased my productivity by at least 20%. Worth every penny.
matwood|3 years ago
Co-pilot has been semi-useful. It's faster than search SO, but like you said, I still have to review all the code and it's often wrong in subtle ways.
bombcar|3 years ago
It will turn out to be a useful tool for those who know what they’re asking about so they can check the answer quickly; but it will be USED by tons of people who don’t have a way of verifying the answers given.
Al-Khwarizmi|3 years ago
Lack of actual human understanding? Of course, by definition a machine will always lack human understanding. Why does that matter so much if it's a helpful tool?
For what it's worth, I do agree that there is a lot of hype. But contrary to blockchain, NFTs, web3, etc., this is actually useful for many people in many everyday use cases.
I see it as more similar to the dot com hype - buying a domain and creating a silly generic website didn't really multiply the value of your company as some people thought in that era, but that doesn't mean that websites weren't a useful technology with staying power, as time has shown.
unknown|3 years ago
[deleted]
sharemywin|3 years ago
If you ask it to go through and comment code, it does a pretty good job of that.
Some things it does better than others (it's not that great at CSS).
Need a basic definition of something? Got it.
Tell it to write a function and it's not bad.
As a BA, just tell it what you're trying to do and what questions it should ask users. It will get you some good ideas.
Want it to be a PM? Have it create a loop asking every 10 minutes if you're done yet.
Is it a senior engineer? No. Can it pass a senior engineering interview? Quite possibly.
Debugging code? Hit or miss.
I think the big thing is that it's not that great at front-end code. It can't see, so that probably makes sense. A fine-tuned version of CLIP that interacted with a browser would probably be pretty scary.
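For what it's worth, the "comment my code" workflow above is just prompt construction; a minimal sketch, assuming the official openai Python package's chat API (the helper function and model name here are hypothetical, not a tested integration):

```python
# Hypothetical sketch: wrapping a "comment this code" request for a chat model.
# build_comment_prompt and the model choice are assumptions for illustration.

def build_comment_prompt(source: str) -> list:
    """Build chat messages asking the model to add explanatory comments."""
    return [
        {"role": "system",
         "content": "You are a code reviewer. Add concise comments; do not change behavior."},
        {"role": "user",
         "content": "Add explanatory comments to this code:\n\n" + source},
    ]

snippet = "def add(a, b):\n    return a + b"
messages = build_comment_prompt(snippet)

# Sending the request needs an API key; shown commented out for shape only:
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
# print(reply.choices[0].message.content)
```

The point being: the model only sees the text you hand it, which is also why it struggles with anything visual like front-end layout.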
EVa5I7bHFq9mnYK|3 years ago
osigurdson|3 years ago
unknown|3 years ago
[deleted]
whiddershins|3 years ago
Time will tell, I certainly can’t predict.
unknown|3 years ago
[deleted]
smohare|3 years ago
[deleted]
joxel|3 years ago
[deleted]
dang|3 years ago
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
coldtea|3 years ago
[deleted]
reaperducer|3 years ago
We're about six minutes away from "AI bros" becoming a thing.
The same kind of grifters who always latch onto the latest thing and hype it up in order to make a quick buck are already knocking on AI's door.
See also: Cryptocurrency, and Beanie Babies.