item 36116648

Professor assigns ChatGPT-prompted essays to highlight hallucinated info

27 points | indigodaddy | 2 years ago | twitter.com | reply

12 comments

[+] jruohonen|2 years ago|reply
Time and time again:

"Fake quotes, fake sources, or real sources misunderstood and mischaracterized."

What is the reason for using these if one has to fact-check everything, which takes more time than doing it yourself in the first place? Granted, marketing, propaganda, etc. are good use cases.

As usual, a lot of hype people in the thread disagreeing.

[+] circuit10|2 years ago|reply
One thing it’s good for is finding things that are hard to search for. Another is doing simple text-based tasks that you could automate, but when it’s only a few items it’s easier to just explain the problem. It’s also useful for generating simple code that’s mostly boilerplate and does a relatively easy thing…

It’s not perfect, but it definitely is useful from time to time as long as you’re aware of the limitations

[+] gumballindie|2 years ago|reply
In this post-truth society, accuracy isn’t relevant. What’s relevant is eyeballs and followers.
[+] ryanjshaw|2 years ago|reply
It's a text generator. It has more than one use. The space is new and still being explored. You should come back in a few years if you don't have the patience for this process.

For authoring fiction, fact checking is often not necessary: my daughter enjoys asking the AI to write her silly bedtime stories.

For programming, "fact checking" is otherwise known as compiling and testing.

For publishing, I can ask for a critique of my work and decide for myself which points to accept and which to reject.

For game design, I've had great success fleshing out a very vague concept I had into something more concrete and ChatGPT came up with some brilliant level design ideas.

etc.

[+] loa_in_|2 years ago|reply
For me it's accessibility. I know what I want and I know what to ask for, so I type three sentences and get two paragraphs. My wrists can only take so much; I've been a self-taught programmer since I was 10, and I'm 30 now. This is the first time I've had a tool like this. That said, I don't use it to assist with code, since code is pretty terse by itself.

Another is entertainment and brainstorming, to see multitudes of combinations written out and see what makes sense.

[+] cookieperson|2 years ago|reply
I mean that's the best use case. Mass spamming ads, propaganda, and custom phishing attacks. Other criminal schemes are easily envisioned but I'll keep those to myself because half the people on here would be like "heyyyy good idea thanks!"
[+] sillymath3|2 years ago|reply
Nice from one student: "I’m not worried about AI getting to where we are now. I’m much more worried about the possibility of us reverting to where AI is."
[+] justrealist|2 years ago|reply
It really bugs me when people post these threads without saying whether they used GPT-3 or GPT-4. And if it's academic, I sort of suspect it's not GPT-4, unless they are paying for subscriptions for all of their students.

It makes a difference... a big difference.

[+] rolae|2 years ago|reply
Both 3.5 and 4 hallucinated according to the professor:

> Most used 3.5. A few used 4 and those essays also had false info. I don't think they used any browsing plug-ins but it's possible--it was a take-home assignment and not one they did in class.

https://twitter.com/cwhowell123/status/1662517400770691072

[+] thinkingkong|2 years ago|reply
No it doesn't.

The public at large can be forgiven for not assuming a half-version is the difference between being factually incorrect or not. If OpenAI actually had confidence, they'd call one model "sorta true sometimes" and the other "mostly true, but only for certain things, depending on how convincing you need it to be".

Users won't care what's true or not, and blind trust in LLM output is the issue.