kizer | 2 years ago
The ChatGPT version is the least harmful, in my opinion; more sinister are the problems that propagate when GPT is used under the hood as a component in other services (such as Bing search).
int_19h | 2 years ago
It would indeed be much better if we knew exactly what the training data was for every given model. But models will still hallucinate things that aren't directly in that data yet could somehow be inferred from it, so that alone won't solve the problem.