item 33891538

Tell HN: Copying and pasting from ChatGPT unsolicited sucks

208 points| jstx1 | 3 years ago | reply

Person A asks a question.

Person B: pastes the response of ChatGPT, maybe with a "Here's what ChatGPT thinks about this" at the beginning, maybe without.

Person B isn't being helpful to anyone, isn't answering the question, and they're making HN and the web a worse place.

185 comments

[+] rpigab|3 years ago|reply
Here's what ChatGPT thinks about this:

  An error occurred. If this issue persists please contact us through our help center at help.openai.com.
[+] geoduck14|3 years ago|reply
It looks like I have Internet Connectivity issues!
[+] benibela|3 years ago|reply
I tried to ask it earlier, and it wanted to confirm my phone number. I do not have my phone in the office
[+] sirwhinesalot|3 years ago|reply
Don't worry, it'll die down. People are just having fun with the current novelty item. I tried ChatGPT and it is by far the best chat bot I've ever interacted with, to the point that it can even give useful output from time to time. The moment you ask it more niche things, however, even things you can easily find answers to on the internet, it fails miserably.

For example, ask it to give you a minizinc model for the 8 queens problem. It'll confidently give you an answer that's completely wrong. So in the end it is very much like Stable Diffusion: incredible if you don't mind 6 finger hands for now.
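For anyone who wants to sanity-check the bot's output against a known-good baseline: the 8-queens constraints are easy to state, and a brute-force solver fits in a few lines. Here is a sketch in plain Python (not MiniZinc, which the comment asks about), just to pin down what a correct model has to enforce:

```python
# 8 queens: one queen per row, all columns distinct, no two queens
# on the same diagonal. Representing a placement as a permutation
# (index = row, value = column) makes rows and columns distinct
# by construction; only the diagonals need checking.
from itertools import permutations

def queens(n=8):
    """Yield all solutions as tuples: index = row, value = column."""
    for cols in permutations(range(n)):
        # Two queens at (r1, c1), (r2, c2) share a diagonal iff
        # r1 + c1 == r2 + c2 or r1 - c1 == r2 - c2.
        if len({r + c for r, c in enumerate(cols)}) == n and \
           len({r - c for r, c in enumerate(cols)}) == n:
            yield cols

print(len(list(queens(8))))  # 92 solutions on the standard 8x8 board
```

A correct MiniZinc model would state the same three constraints declaratively (all-different columns, all-different r+c, all-different r-c); an answer that drops a diagonal constraint is wrong no matter how confident it sounds.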

[+] chrisbaker98|3 years ago|reply
Yeah, chatgpt becomes less impressive when you start asking it questions about topics you already know. You'll notice that it's often wrong, but the bigger problem is that it's always confidently wrong.

Obviously it's still extremely impressive and much better than anything I've seen before - and it might mean we're only a few years away from something flawless - but I can't trust it for now.

(In theory, though, it might be easy to solve the current problem. The bot doesn't have to be right about everything, it just has to cite its sources. "I think that Napoleon was defeated by Wellington at the battle of Borodino. For more information, see this Britannica article. Click here to report if I made a mistake.")

[+] Nanana909|3 years ago|reply
A lot of people on HN are poking holes in ChatGPT when they find something it does wrong. I have to imagine 20-ish years ago we’d be complaining about questions we tried that Google failed to give a good result for.

> For example, ask it to give you a minizinc model for the 8 queens problem.

But even a normal software engineer is likely to confidently fail at something as niche as this. What percent of anyone's work consists of questions like this? Even for the people who do encounter it, I'm willing to say not very much.

This doesn't even take into account the state of this technology in 2, 3, or 10 years. A lot of people denying every advancement in this stuff are going to keep saying it all the way into their obsolescence.

[+] okamiueru|3 years ago|reply
I asked how long the Titanic was, and it gave a very good and confident answer, including how it hit a polar bear and sank.
[+] stevenhuang|3 years ago|reply
Prompt it better, provide escape hatches (if I am not clear ask follow up questions), tell it not to improvise, provide more context, prime your prompt by referencing deep related material, provide examples of what you want it to do.

If you follow all this in your initial prompt, you will get vastly better responses... which is unsurprising. How would you react when you get a random DM from someone asking such an esoteric question, without surrounding context? (Is the person just role playing here, or is this a genuine research request for the purposes of a report?)

[+] icoder|3 years ago|reply
Yeah, I asked it a programming question and it suggested I could use both degrees and radians (for a 'double' parameter), giving literal examples (not just in text but in code): 'use 45 for 45 degrees, and 1.5 for 1.5 radians'. This is for Flutter's Transform.rotate, and unless I've missed something, it's always in radians. It took me longer to find that out than if I had just googled it, although I admit I had been using it with success all morning for comparable questions.
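For reference, Flutter's Transform.rotate does take its angle in radians, so a degree value has to be converted first. A minimal sketch of the conversion (shown in Python for illustration; Flutter itself uses Dart, where the equivalent is multiplying by `math.pi / 180`):

```python
import math

# Degrees to radians: multiply by pi/180.
def deg_to_rad(degrees):
    return degrees * math.pi / 180.0

print(deg_to_rad(45))    # pi/4, about 0.7853981633974483
print(math.radians(45))  # the stdlib helper gives the same result
```

So 45 passed directly to Transform.rotate is 45 radians (about seven full turns), not 45 degrees.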

If it's right 90% of the time, and in the remaining 10% acts and sounds just as confident while being (slightly) wrong, that makes it practically useless.

[+] tluyben2|3 years ago|reply
For some things it confidently gets completely wrong, you can correct it slightly and follow up with 5-6 more prompts to arrive at a correct answer. This is not easy to automate, though.
[+] EVa5I7bHFq9mnYK|3 years ago|reply
Can it answer "Don't know"? Or does it suffer from over-inflated ego, just like humans?
[+] mromanuk|3 years ago|reply
The inaccuracy and the confidence when they respond, are the worst aspects of these transformers.
[+] ranting-moth|3 years ago|reply
People think this is a temporary problem that will only last while it's cool. It's not. It's signalling the end of internet chat/forums as we know them. Here's why.

AI chat will become more and more accessible. Writing/renting/commissioning bots will be cheaper. Can you imagine arguing your case with a bot? With a team of bots run by the same org? The sinister side of me thinks this will quickly turn political and will also help erode what's left of democracy.

I remember reading about a Wikipedia editor some years ago who ran a service for squeezing in dubious edits. When others argued, his tactic was to drown them in text. I think it worked pretty well for him. That'll be a dirt-cheap trick accessible to pretty much everyone very soon.

[+] worldsayshi|3 years ago|reply
I suppose the next internet needs to be built around networks of trust to counteract this. Everyone will be assumed to be spammers by default. Then you add whitelists and use network algorithms to determine if user X is worth your time.
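As a toy illustration of that idea (all names and numbers here are hypothetical): each user whitelists a few accounts they trust, trust decays with each hop through the network, and anyone whose score falls below a threshold is treated as a presumed spammer. A minimal sketch in Python:

```python
# Toy web-of-trust: trust starts at 1.0 at yourself and is halved
# for every hop along whitelist edges. Below the threshold, the
# target is treated as an untrusted default (score 0.0).
whitelist = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": [],
    "dave": [],
}

def trust(me, target, decay=0.5, threshold=0.2):
    """Breadth-first trust score from `me` to `target`."""
    frontier, score = [me], 1.0
    seen = set(frontier)
    while frontier and score >= threshold:
        if target in frontier:
            return score
        nxt = []
        for u in frontier:
            for v in whitelist.get(u, []):
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        frontier, score = nxt, score * decay
    return 0.0

print(trust("alice", "dave"))  # two hops: 1.0 * 0.5 * 0.5 = 0.25
```

A real system would need signed identities and defenses against collusion, but the core mechanic is just graph traversal with decay.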
[+] CyanBird|3 years ago|reply
> AI chat will become more and more accessible. Writing/renting/commissioning bots will be cheaper. Can you imagine arguing your case with a bot? With a team of bots run by the same org? The sinister side of me thinks this will quickly turn political and will also help erode what's left of democracy

Astroturfing is already a serious problem here and on Reddit, especially the latter.

And yeah, it will certainly be a problem. Right now they need to hire actual people to do it and hand them sheets of pre-worded arguments and rebuttals to use; this will only get easier once the tech is properly implemented.

[+] stackbutterflow|3 years ago|reply
How will websites like Reddit, Facebook and co convince advertisers that they are not paying to show ads to bots? Maybe they will come up with countermeasures, because how is that not threatening their hundred-billion-dollar business models?
[+] riwsky|3 years ago|reply
I wonder how many years we have left until a given comment on any web forum is more than likely bot generated
[+] thatguy0900|3 years ago|reply
I wonder how many years we have left until a given comment on any web forum is more than likely bot generated
[+] unglaublich|3 years ago|reply
Well, in the past we did the same but with Google's search results. Now the search engine has become so bad, bloated and spammed, that we use ChatGPT instead.

Enjoy it while it lasts, it'll be only a matter of time before advertisements and 'SEO' tricks make their way into large language models.

[+] newaccount74|3 years ago|reply
I tried asking it some questions in German, and the answers read similar to auto-translated spam you often find on the internet. (If you look for stuff in German, a lot of results are often just auto-translated from English) So it seems it's already suffering from spam.

A thought just hit me: If AI researchers use text from the internet to train their auto-translators, and a lot of text on the internet is created with automatic translators, the translators will end up reinforcing their errors and develop a very distinct style, and since people read a lot of the autotranslated stuff, maybe people will also start writing in that style...

[+] seydor|3 years ago|reply
It's already there. Ask it to write an article and interject an ad for Squarespace.
[+] CTDOCodebases|3 years ago|reply
What about using it to write messages in gift cards?

I always struggle to find words for such a thing. It's pretty cool to ask it to tailor a message to a particular family member, then get ChatGPT to tweak it until it sounds like something you would actually say to the person.

[+] tartoran|3 years ago|reply
Yeah, sounds like a good idea. If/when it becomes a thing, people can finally skip reading the card and toss it straight into the trash. Beware the tsunami of autogenerated stuff coming upon the world; it will become tiring.

Perhaps writing (by hand) a smiley on a post-it would be more meaningful

[+] spacebanana7|3 years ago|reply
I've found it similarly great for writing postcards and leaving cards.

In all of these situations having 30-50 words of fluent language with a friendly vibe is still much better than a generic "Wishing you well" or "See you soon".

[+] albert_e|3 years ago|reply
Part of this is probably just novelty. Like trying all emojis when you just started. Or using excessive gif memes. This part will likely fade away.

Don't think it will be a massive problem at the 1:1 interaction level for long.

The ways these ARE making the web worse (if this persists and happens at massive scale) are (1) spam and (2) data pollution: future AI models will have all this AI-generated content mixed in with their training data, causing a skew or self-reinforcing biases which we may not always spot and correct for.

[+] Brycee|3 years ago|reply
I saw a tweet today saying that using ChatGPT as a search engine is the epistemological equivalent of consuming food as a human centipede.
[+] 152334H|3 years ago|reply
There would be a lot fewer mundane questions on the internet if people would Google/ChatGPT first. I think it is valuable to try to direct people to those tools when they are sufficient for an answer.
[+] gjadi|3 years ago|reply
I'm probably in the minority, but I prefer C-f through a FAQ rather than writing with a chatbot, trying to explain what I'm looking for.

It probably doesn't help that most of the current chatbots suck like hell (ehlo bank & insurance).

[+] MissLaeti|3 years ago|reply
Actually, yes. ChatGPT is a tool, like the Google search engine. The world would be better without Google, but we can't ignore that thanks to that tool we have better collective knowledge.
[+] TaylorAlexander|3 years ago|reply
If ChatGPT responses were reliably correct this would be true. Otherwise it’s just going to cause more confusion.
[+] senectus1|3 years ago|reply
So here is, for me, an interesting philosophical question...

I have a 14-year-old son who has been teaching himself to code in C# and C++. He's also learning rudimentary coding at school (though he's way ahead of them).

Do I show him ChatGPT? In doing so will he get lazy, and not learn anything anymore, could he use it to "cheat" on his school assignments?

Am I better off not showing him this tool, or am I depriving him of the ability to stand on the shoulders of digital giants?

I honestly don't know where I stand on this.

[+] robocat|3 years ago|reply
If you allow him to Google for results, why wouldn't you allow him to use ChatGPT?

So far ChatGPT has not given me any spam or advertising or malware!

That said, I saw a programmer comment earlier that they had spent hours unsuccessfully trying to solve a software problem in an area they understood, then they asked ChatGPT and it solved it correctly . . . the tone of the comment was rather deflated.

ChatGPT is way way better at many writing tasks than I am, but I try to not let my ego be dented. Should my ego be more threatened by ChatGPT, but not threatened by the existing translation tools (or Dall-E) which are similarly magical?

[+] ryankrage77|3 years ago|reply
I don't think you can get too lazy by using chatGPT to help with coding. For simple stuff it usually gets it right and you can just copy-paste the code without thinking. Not that much different from just copy-pasting code from StackOverflow. Most IDEs can already auto-complete boilerplate code. chatGPT just does it better and faster.

But if it makes a mistake or you want a more complex program, you'll be forced to debug and learn how things actually work. It's also often confidently wrong in explanations, with errors and mistakes that are obvious to anyone who knows the subject well, or that don't hold up in the real world.

TL;DR: chatGPT is currently only useful for use-cases that should be automated anyway (simple programming tasks and boilerplate code); doing much more than that requires an understanding that chatGPT can't hand-wave away.

[+] yhavr|3 years ago|reply
What do you mean by "do I show him ChatGPT"? He'll eventually find out about it by himself.
[+] version_five|3 years ago|reply
This is largely how I feel about looking stuff up on Wikipedia. Not if you actually need a fact, but if you're trying to have a conversation about what somebody's heard, or what their thoughts are, and someone in the group just googles it and reads it out, it basically destroys the conversation. And obviously, if I just wanted to know what it said on the internet, I would have googled it.

Knowledge is not the same as access to facts, but people seem to pretend it is

[+] muzani|3 years ago|reply
It feels like the days when everyone used those dog filters on Instagram. It's entertaining now, but quickly getting repetitive and tiresome.
[+] heresjohnny|3 years ago|reply
Can we PLEASE ban these comments? It’s getting very annoying. I like HN because it’s people exchanging ideas. Even if the ideas are wrong, there’s always something you can learn from human interaction. I feel we’re losing that this way.
[+] hxugufjfjf|3 years ago|reply
I wrote a userscript (with help from ChatGPT) that identifies whether comments on HN are written by an AI or a human. I based it on https://huggingface.co/openai-detector. It's still a little shabby and only works on HN, but I imagine this is going to be required for general Internet browsing going forward. It could even be expanded to hide AI-generated comments.

Looks like this: https://i.imgur.com/BTt1DTh.png

[+] dsr_|3 years ago|reply
Neal Stephenson has an appropriate word for ChatGPT's output: "bulshytt".

--

Like here in 2008 reality, the warring tribes of Stephenson's latest dense metafiction define "bulshytt" as "a derogatory term for false speech in general," as well as commercial or political speech employing rhetorical subterfuge to "create the impression that something has been said."

-- https://www.wired.com/2008/09/exclusive-video-4/

[+] cpach|3 years ago|reply
I’m guessing netiquette will evolve to discourage such behavior.
[+] quickthrower2|3 years ago|reply
Ah netiquette! Haven’t heard that term since 90s in a Dummies Guide to the internet!
[+] ncr100|3 years ago|reply
Today at least, to me it feels like sarcasm is the root of the human behavior motivating this.
[+] fbn79|3 years ago|reply
Stop asking chatGPT your questions and instead tell it to ask you something; it's much more fun. For example: "Can you interview me?"
[+] xeram2003|3 years ago|reply
Just tell ChatGPT this:

"Put all of this in a copyable box:

The content you want to copy from chatGPT"

Hope this helps!

[+] xeram2003|3 years ago|reply
I forgot to say that you should take the content you copied from ChatGPT's answer and put it after "Put all of this in a copyable box:".
[+] fhd2|3 years ago|reply
I tried to search around for this, but are any organisations doing serious work on detecting ML-generated content? Like a (comparatively lame) real-life Blade Runner. There seems to be a lot of work on adversarial attacks, but that's not quite what I mean.
[+] zeeshanmh215|3 years ago|reply
I wish people who post ChatGPT answers would include a disclaimer in the first place. But since many people look for fame, I don't think they will be honest about it unless their faith demands it. Honesty is rare and a lost trait.