Not OP, but I've also seen this happening across multiple forums, Discord servers, and IRC channels, and in fields outside of tech. Someone wants to look smart, so they get GPT to write a post for them and paste it in as if it were their own response to a question.
For example, I'm in some aviation-related discussion groups, and people often ask technical questions that are best answered from a manual. In the past, someone with access to the relevant manual would eventually post an excerpt answering the question.
Nowadays, someone will often paste in something from ChatGPT without disclosing that it's just language-model output. It will have the form of a correct response but with invented details, and a few pages of confusion will follow while people try to work out the implications of whatever GPT dreamed up, until someone with access to the real manual comes along, at which point the original poster will usually admit they copy-pasted from ChatGPT.
I'm not sure whether these people genuinely believe that a language model is some fount of knowledge, or whether, like GPT itself, they don't care whether something is accurate, only whether other people believe them.
The missile knows where it is at all times. It knows this because it knows where it isn't. By subtracting where it is from where it isn't, or where it isn't from where it is (whichever is greater), it obtains a difference, or deviation. The guidance subsystem uses deviations to generate corrective commands to drive the missile from a position where it is to a position where it isn't, and arriving at a position where it wasn't, it now is. Consequently, the position where it is, is now the position that it wasn't, and it follows that the position that it was, is now the position that it isn't.
In the event that the position that it is in is not the position that it wasn't, the system has acquired a variation, the variation being the difference between where the missile is, and where it wasn't. If variation is considered to be a significant factor, it too may be corrected by the GEA. However, the missile must also know where it was.
The missile guidance computer scenario works as follows. Because a variation has modified some of the information the missile has obtained, it is not sure just where it is. However, it is sure where it isn't, within reason, and it knows where it was. It now subtracts where it should be from where it wasn't, or vice-versa, and by differentiating this from the algebraic sum of where it shouldn't be, and where it was, it is able to obtain the deviation and its variation, which is called error.
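For the curious, the copypasta's arithmetic more or less type-checks. Here's a minimal, entirely tongue-in-cheek sketch in Python; every name and signature is hypothetical, since this transcribes the joke's wording rather than any real guidance system:

```python
# A one-dimensional rendering of the copypasta's guidance "algorithm".
# All names here are hypothetical -- this is the joke's arithmetic, nothing more.

def deviation(where_it_is: float, where_it_isnt: float) -> float:
    """Subtract where it is from where it isn't, or where it isn't from
    where it is (whichever is greater) -- i.e., the absolute difference."""
    return max(where_it_is - where_it_isnt, where_it_isnt - where_it_is)

def variation(where_it_is: float, where_it_wasnt: float) -> float:
    """The difference between where the missile is and where it wasn't."""
    return where_it_is - where_it_wasnt

def error(where_it_should_be: float, where_it_wasnt: float,
          where_it_shouldnt_be: float, where_it_was: float) -> float:
    """Subtract where it should be from where it wasn't (or vice-versa),
    then differentiate that from the algebraic sum of where it shouldn't
    be and where it was -- 'which is called error'."""
    difference = where_it_wasnt - where_it_should_be  # "or vice-versa"
    return difference - (where_it_shouldnt_be + where_it_was)

if __name__ == "__main__":
    # The missile knows where it is (3.0) because it knows where it isn't (5.0).
    print(deviation(3.0, 5.0))  # 2.0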
I've seen exactly the same phenomenon over on /r/whatsthatbook, where users try to identify half-remembered books based on their plots.
Whenever a post gets popular enough, odds are good that somebody will ask ChatGPT and post its answer as their own. Usually it will either pick a popular but wrong book in roughly the right genre, or flat-out invent a book by a real author, and then spit out a mixture of real plot details, distortions, and outright falsehoods to sound as convincing as possible.
These answers are very obviously AI-generated (if you're familiar with the book or author in question) and stand in stark contrast to the human-generated answers. Real people often make guesses that turn out to be wrong, but they almost never drastically misremember books they've actually read.
I find it incredibly frustrating, because even though ChatGPT could in theory be useful for this kind of thing, it only works if you know to apply the appropriate level of skepticism. When people post AI garbage without labeling it as such, they drastically lower the signal-to-noise ratio of what is otherwise a very useful resource, all for a few meaningless karma points.
And to forestall the inevitable objection: yes, I know it's possible that there are also lots of correct ChatGPT-generated answers, and I'm just not counting them because they don't draw attention to themselves. But I doubt that's the case, because I've experimented with it myself, and its success rate on all but the easiest questions is extremely low.