top | item 34722857

dual_dingo | 3 years ago

The "I" in AI is just complete bullshit, and I can't understand why so many people are in awe of a bit of software that chains words to one another based on some statistical model.

The sad truth is that ChatGPT is about as good an AI as ELIZA was in 1966, it's just better (granted: much better) at hiding its total lack of actual human understanding. It's nothing more than an expensive parlor trick, IMHO.
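(Editor's note: to make the "chains words based on some statistical model" description concrete, here is a toy Markov-chain text generator — the crudest possible version of the idea. This is an illustrative sketch only; actual LLMs learn neural representations and are far more sophisticated than raw successor counts.)

```python
import random
from collections import defaultdict

def build_model(text):
    """Map each word to the list of words that followed it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Chain words together by sampling a random successor at each step."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: the last word never had a successor
        out.append(random.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = build_model(corpus)
print(generate(model, "the"))
```

Every output word is statistically plausible given the previous one, yet the program plainly understands nothing — which is the commenter's point, though whether that criticism scales up to LLMs is exactly what the rest of this thread argues about.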

Github CoPilot? Great, now I have to perform the most mentally taxing part of developing software, namely understanding other people's code (or my own from 6 months ago...) while writing new code. I'm beyond thrilled ...

So, no, I don't have AI fatigue, because we have absolutely no AI anywhere. But I have a massive bullshit and hype fatigue that is getting worse all the time.

auctoritas|3 years ago

I'm more fatigued by people denying the obvious that ChatGPT and similar models are revolutionary. People have been fantasizing about the dawn of AI for almost a century and none managed to predict the rampant denialism of the past few months.

I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.

addcommitpush|3 years ago

There's a fellow that kinda predicted it in 1950 [0]:

> These arguments take the form, "I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X."

> [...]

> The criticisms that we are considering here are often disguised forms of the argument from consciousness. Usually if one maintains that a machine can do one of these things, and describes the kind of method that the machine could use, one will not make much of an impression.

Every time "learning machines" are able to do a new thing, there's a "wait, it is just mechanical, _real_ intelligence is the goalpost".

[0] https://www.espace-turing.fr/IMG/pdf/Computing_Machinery_and...

RivieraKid|3 years ago

> I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.

It's important to note that this is your assumption which I believe to be wrong (for most people here).

latexr|3 years ago

> I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.

Respectfully, that reads as needlessly combative within the context. It sounds like the blockchain proponents who say that the only people who are against cryptocurrencies are the ones who are “bitter for having missed the boat”.¹

It is possible and perfectly reasonable to identify problems in ChatGPT and similar technologies without feeling threatened. Simple example: someone who is retired and monetarily well off, whose way of living and sense of self worth are in no way affected by developments in AI, can still be critical and express valid concerns when these models tell you that it’s safe to boil a baby² or give other confident but absurdly wrong answers to important questions.

¹ I’m not saying that’s your intention, but consider that that type of rhetoric may be counterproductive if you’re trying to make others understand your point of view.

² I passed by that specific example on Mastodon but I’m not finding it now.

rsynnott|3 years ago

> ChatGPT and similar models are revolutionary

For _what purpose_, tho? It's a good party trick, but its tendency to be confidently wrong makes using it for anything important a bit fraught.

liveoneggs|3 years ago

So, to you, ChatGPT is approaching AGI?

moron4hire|3 years ago

The problem is that ChatGPT is about as useful as all the other dilettantes claiming to be polymaths. Shallow, unreliable knowledge on lots of things only gets you so far. Might be impressive at parties, but once there's real, hard work to do, these things fall apart.

rcme|3 years ago

As much as I’m sick of AI products, I’m even more sick of the “ChatGPT is bullshit” argument.

jvanderbot|3 years ago

It can be both bullshit and utterly astounding.

In terms of closing the gap between AI hype and useful general purpose AI tools, no one can reasonably deny that it's an absolute quantum leap.

It's just not a daily driver for technical experts yet.

brookst|3 years ago

The biggest thing I’ve learned from chatGPT is that real people struggle with the difference between intelligence, understanding, and consciousness / sentience.

williamcotton|3 years ago

The only problem with the “ChatGPT is bullshit” argument is that it is only half true.

ChatGPT, when provided with a synthetic prompt, is reliably a synthesizer, or, to use the loaded term, a bullshitter.

When provided with an analytic prompt, it is reliably a translator.

Terms, etc: https://www.williamcotton.com/articles/chatgpt-and-the-analy...

jug|3 years ago

I like this take. It has many clear applications already, and LLMs are still only in their infancy. I both criticize and use ChatGPT at work. It has flaws and it has advantages. That it's bullshit or "ELIZA" is a short-sighted view that overvalues the importance of AGI and misses what we're already getting.

But yes indeed, there are many, many AI products launched during this era of rapid progress. Even kind of shoddy products can be monetized if they provide value over what we had before. I think the crowded market and all the bullshit and all the awesome, all at once, is a sign of very rapid progress in this space. It will probably not always be like this and who knows what we are approaching.

nunodonato|3 years ago

I can say with a certain degree of confidence that you haven't actually used CoPilot daily.

capableweb|3 years ago

I've worked with teams that used Copilot. They claim it's great ("Hey, now I don't have to actually spend any time writing all this boilerplate!"), while for me, the person who has to review their code before releasing stuff, an easier way of writing boilerplate is not a positive, it's a negative.

If writing boilerplate becomes effortless, then you'll write more of it, instead of feeling the pain of writing it and trying to reduce it because you don't want to spend time on it.

And since Copilot was accepted as a way to help the developers on the teams, the increase in boilerplate has been massive.

I'm borderline pissed, but mostly at our own development processes, not at Copilot per se. But damn if I don't wish it didn't exist, although it was inevitable that it would at some point.

dual_dingo|3 years ago

I haven't. Now you know for a fact :)

What I have seen of it ranged from things that can be nearly as well handled by your $EDITOR's snippet functionality to things where my argument kicked in - I have to verify this generated code does what I want, ergo I have to read and understand something not written by me. Paired with the at least somewhat legally and ethically questionable source of the training data, this is not for me.

naillo|3 years ago

I've used it quite a lot and I agree with the original post. It seemed really useful at first, but then it started introducing several bugs in large blocks of code. I've stopped using it in the end, since the small one-line snippets are trivial enough to write myself (with just vim proficiency) and the larger, function-sized autocompletions are too bug-prone (and burn too much willpower budget to fix).

sbilstein|3 years ago

Yep. I’m personally skeptical of so many other use cases for LLMs but CoPilot is fantastic and basically just autocomplete on rocket fuel. If you can use autocomplete, you can use CoPilot super effectively.

morelisp|3 years ago

This is such a bullshit answer. No, I don't use it daily because I tried it for a couple of hours and it suggested nothing useful and several harmful things. Why would I keep using it?

postexitus|3 years ago

I can say with a higher degree of confidence that you haven't actually used CoPilot daily for any respectably sized project.

traceroute66|3 years ago

> The "I" in AI is just complete bullshit and I can't understand why so many people are in a awe

I agree.

And the worst thing is that the bullshit hype comes round every decade or so, and people run around like headless chickens insisting that "this time it's different" and "this time it's the REAL THING".

As you say, first(ish) there was ELIZA. Then this, that, and everything else. Then Autonomy and all that dot-com era jazz. Now, with compute becoming more powerful and more compact, any man and his dog can stuff some AI bullshit where it doesn't belong.

I have seen comments below on this thread where people talk about "well, it's closing the gap". The thing you have to understand is that the gap will always exist. Ultimately you will always be asking a computer to do something. And computers are dumb. They are and will always be beholden to the humans that program them and the information that you feed them. The human will always have the upper hand at any task that requires actual intelligence (e.g. thoughtful reasoning, adapting to rapidly changing events, etc.).

lordfrito|3 years ago

> And the worst thing is that the bullshit hype comes round every decade or so, and people run around like headless chickens insisting that "this time it's different" and "this time it's the REAL THING".

This. To answer the OP's question, this is what I'm fatigued about.

I'm glad we're making progress. It's a hell of a parlor trick. But the hype around it is astounding considering how often its answers are completely wrong. People think computers are magic boxes, and so we must be just a few lever pulls away from making it correct all the time.

Or maybe my problem is that I've overestimated the average human's intelligence. If you can't tell ChatGPT apart from a good con-man, can we consider the Turing test passed? It's likely time for a redefinition of the Turing test.

Instead of AI making machines smarter, it seems that computers are making humans dumber. Perhaps the AI revolution is about dropping the level of average human intelligence to match the level of a computer. A mental race to the bottom?

I'm reminded of the old Rod Serling quote: We're developing a new citizenry. One that will be very selective about cereals and automobiles, but won't be able to think.

joyeuse6701|3 years ago

This is not always true, see Chess.

pixl97|3 years ago

Man, if this were 1800 you'd be stating that man would never fly and the horse would never be supplanted by the engine. I honestly don't believe you have any scientific or rational reasoning behind the point you are attempting to make in your post, because to hold it you'd have to claim that animal intelligence is magical.

sfpotter|3 years ago

“AI” isn’t bullshit, it’s correctly labeled. It’s intelligence which is artificial: i.e. fake, ersatz, specious, not genuine… It’s our fault for not just reading the label. (I absolutely agree with your post and your viewpoint, just to be clear!)

dual_dingo|3 years ago

Artificial means "not human" in this context for me, but I understand "intelligence" as the ability to actually reason about something based on things you learned and/or experienced, and these "AI" tools don't do this at all.

But defining "intelligence" is a philosophical question that doesn't necessarily have one answer for everything and everyone.

danaris|3 years ago

The intention of the "artificial" in "AI" is not that particular meaning of "artificial", but the one for "constructed, man-made"—see meaning #1 in the Wiktionary definition[0]; the one you are using is #2.

It is often frustrating that English has words with such different (but clearly related) definitions, as it can make it far too easy to end up talking past each other.

[0] https://en.wiktionary.org/wiki/artificial

Diggsey|3 years ago

"Artificial" is not synonymous with "fake". "Fake" implies a level of deception.

version_five|3 years ago

I agree with you completely. I work in the field, and I think your sentiment is way more common amongst people who know about the technology than amongst the fair-weather fans who have all jumped on the hype bandwagon recently. I actually posted the same thing (that it's no different than Eliza) a month or so ago, and got at least one hilarious dismissal, like the "I bet you make widgets" person that replied to you.

HarHarVeryFunny|3 years ago

If you believe that ChatGPT is similar to Eliza, then I can guarantee that you have no rigorous, no-wriggle-room definition of what intelligence is. Maybe you think you understand it, or have defined it, but I'm 100% certain any such definition is not 100% reductive and instead relies on other ill-defined words like "reasoning", etc.

BulgarianIdiot|3 years ago

“It’s just statistics” is an evergreen way to dismiss AI. The problem is you’re also just statistics.

NobleLie|3 years ago

Source for consciousness / intelligence to be "statistics"?

I don't think there is any, because there is no functional model for what organic intelligence is or how it operates. There is a plethora of fascinating attempts/models, but only a subset imply that it is solely "statistical". And even if it were statistical, the implementation of the wet system is absolutely nothing like a gigantic list of vectorized (stripped of their essence) tokens.

brookst|3 years ago

Shh. The models don’t like hearing that.

thejammahimself|3 years ago

> Github CoPilot? Great, now I have to perform the most mentally taxing part of developing software, namely understanding other people's code (or my own from 6 months ago...) while writing new code. I'm beyond thrilled ...

I think there's an argument to be made that AI is being used here to help you tackle the more trivial tasks so you have more time to focus on the more important and challenging ones. Though I recognise GitHub CoPilot is legally questionable.

But yes, I agree with your overall point that AI has still not been able to 'think' like a human but rather can only still pretend to think like a human, and history has shown that users are often fooled by this.

exegete|3 years ago

I think the parent’s comment is probably referring to the fact if you use Copilot to write code then you have to go through and try to understand what it wrote and possibly debug it. And you don’t have the opportunity to ask it why it wrote it the way it did when reviewing its code.

davidkuennen|3 years ago

As soon as I open a fresh IDE these days I immediately miss CoPilot and it's the first thing I install.

Hype or not, it's incredibly useful and has increased my productivity by at least 20%. Worth every penny.

matwood|3 years ago

I agree. I didn't understand the big deal about it passing a Google interview either. IMO, that said more about the uselessness of the interview than about the 'AI'.

Co-pilot has been semi-useful. It's faster than searching SO, but like you said, I still have to review all the code, and it's often wrong in subtle ways.

bombcar|3 years ago

This is the meat of the issue - ChatGPT is exposing that certain things are susceptible to bullshit attacks; humans have just been relatively bad at mounting them.

It will turn out to be a useful tool for those who know what they’re asking about so they can check the answer quickly; but it will be USED by tons of people who don’t have a way of verifying the answers given.

Al-Khwarizmi|3 years ago

ChatGPT is of actual help for me in various daily tasks, which was never the case with ELIZA or earlier chatbots which were only good as a curiosity or to have some fun.

Lack of actual human understanding? Of course, by definition a machine will always lack human understanding. Why does that matter so much if it's a helpful tool?

For what it's worth, I do agree that there is a lot of hype. But unlike blockchain, NFTs, web3, etc., this is actually useful for many people in many everyday use cases.

I see it as more similar to the dot com hype - buying a domain and creating a silly generic website didn't really multiply the value of your company as some people thought in that era, but that doesn't mean that websites weren't a useful technology with staying power, as time has shown.

sharemywin|3 years ago

I'm sorry, but I don't want it to get much smarter.

If you ask it to go through and comment code, it does a pretty good job of that.

Some things it does better than others (it's not that great at CSS).

Need a basic definition of something? Got it.

Tell it to write a function? It's not bad.

As a BA, just tell it what you're trying to do and what questions it should ask users. It will generate some good ideas for you.

Want it to be a PM? Have it run a loop asking every 10 minutes if you're done yet.

Is it a senior engineer? No. Can it pass a senior engineering interview? Quite possibly.

Debugging code? Hit or miss.

I think the big thing is that it's not that great at front-end code. It can't see, so that probably makes sense. A fine-tuned version of CLIP that interacted with a browser would probably be pretty scary.

EVa5I7bHFq9mnYK|3 years ago

What's the point of letting it comment code? The programmer who reads the code can run it as well.

osigurdson|3 years ago

I don't really think of ChatGPT as AI at this point, just an incredibly useful tool.

whiddershins|3 years ago

I wonder if we will look back at this comment (and others like it) as similar to the infamous “takedown” of Dropbox when it was first posted on HN.

Time will tell, I certainly can’t predict.

joxel|3 years ago

[deleted]

dang|3 years ago

We've banned this account for repeatedly breaking the site guidelines and ignoring our request to stop.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.

reaperducer|3 years ago

> The "I" in AI is just complete bullshit

We're about six minutes away from "AI bros" becoming a thing.

The same kind of grifters who always latch onto the latest thing and hype it up in order to make a quick buck are already knocking on AI's door.

See also: Cryptocurrency, and Beanie Babies.