top | item 35547341

chess_buster | 2 years ago

Imagine a world where children have grown up relying on ChatGPT for each and every question.

bawolff | 2 years ago

A world where children ask questions to unreliable entities who guess when they don't know the answer?

Pretty sure we just called it the 90s.

mr_toad | 2 years ago

GPT-5: “I don’t know. Quit asking these dumb questions and go and play outside.”

istjohn | 2 years ago

I would have killed to have ChatGPT growing up. It's amazing to have a patient teacher answer any question you can think of. GPT-4 is already far better than the answers you'll get on Quora or Reddit, and it's instant. So it's wrong sometimes. My teachers and parents were wrong plenty of times, too.

rep_lodsb | 2 years ago

There's a difference between being wrong sometimes, and having no concept of objective reality at all.

I really don't understand how anyone can have such a positive impression. I refuse to register an account just to try it out myself, but that isn't necessary to form an opinion when people are spamming ChatGPT output which they think is impressive all over the Internet.

The best of that output might not always be possible to distinguish from what a human could write, but not the kind of human I'd like to spend time with. It has a certain style that - for me - evokes instant distrust and dislike for the "person" behind it. Something about the bland, corporate tone of helpfulness and political correctness. The complete absence of reflection, nuance, doubt, or curiosity with which it delivers "facts". Its refusal to consider any contradictions feels aggressive to me even - or especially - when delivered in the most non-judgemental kind of language.

It is like the text equivalent of nails on a chalkboard!

rvba | 2 years ago

I'd argue that most children would kill for an automatic translator like DeepL (or the much worse Google Translate), because it would help them with their English / German / other language homework.

English speakers will probably never realize this: most kids need to, say, learn English first, then programming.

ttctciyf | 2 years ago

Apropos of this, I was tempted to submit https://www.youtube.com/watch?v=KfWVdXyPvWQ [1] after watching it last night, but maybe it's better to just leave it here instead.

1: How A.I Will Self Destruct The Human Race (Camera Conspiracies channel)

bobmaxup | 2 years ago

YouTube videos of stock imagery, memes, anecdotes, and speculation don't seem much better.

throw124 | 2 years ago

Imagine that five years from now, ChatGPT or one of its competitors reaches 98% factual accuracy in its responses. Would you not like to rely on it for answering your questions?

VonGallifrey | 2 years ago

Saying this in a discussion about Citogenesis is funny to me. How would you even determine "factual accuracy"? Just look at the list. There are many instances where "reliable sources" repeated false information which was then used to "prove" that the information is reliable.

As far as I am concerned AI responses will never be reliable without verification. Same as any human responses, but there you can at least verify credentials.

rep_lodsb | 2 years ago

Imagine that in five years, we will have cold fusion, world peace, and FTL travel. ChatGPT told me, so it must be true!

paisawalla | 2 years ago

Scroll down TFA to the section called "terms that became real". When trolls or adversaries can use citogenesis to bootstrap facts into the mainstream from a cold start, what does "98% factual accuracy" mean? At some point, you'll have to include the "formerly known as BS" facts.

falcor84 | 2 years ago

It all depends on the distribution of the questions asked. I would hazard a guess that given the silly stuff average people ask ChatGPT in practice, it's already at over 98% factual accuracy.

lm28469 | 2 years ago

Outside of maths and physics, there is no such thing as factual truth.

weaksauce | 2 years ago

ChatGPT outputs everything so confidently because it's basically just a bullshit generator. It's Markov-chain word bots on steroids.
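For anyone unfamiliar with the analogy: a word-level Markov chain picks each next word based only on what came immediately before it, with no notion of truth. A minimal sketch (illustrative only; function names are my own, and real LLMs are of course far more sophisticated than this):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8):
    """Emit text by repeatedly sampling a plausible next word.

    The output is fluent-looking continuation, not fact-checked content.
    """
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

The chain will happily produce grammatical-sounding strings like "the dog sat on the mat and the cat" that no one ever wrote, which is the crux of the comparison.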

saghm | 2 years ago

There are people who do this too; I don't think that's a sufficient property to be a threat to humanity at large.

vimax | 2 years ago

It’ll be a world where it’s important to know the right question to ask.

irrational | 2 years ago

Even then, you have to know how to recognize that ChatGPT is feeding you made up information. In the case of these Citogenesis Incidents, 99% of the Wikipedia articles are legitimate. The trick is knowing what is the false 1%. How do you distinguish between the ChatGPT output that is true versus made up?