This was long before. Google had conversational LLMs before ChatGPT (though they weren’t as good, in my recollection), and they declined to productize them. There was a sense at the time that you couldn’t productize anything with truly open-ended content generation, because you couldn’t guarantee it wouldn’t say something problematic.
I'm having a hard time believing this, or at least understanding the decision (not on your part). Why wouldn't they just continue R&D on it rather than drop it entirely?
Many products we use every day start out unsafe and dangerous during the early stages. Why would this be any different?
Neema was running a fully fledged Turing-passing chatbot in 2019. It was suppressed. Then it was written about in open source, and OpenAI copied it. Then Google was forced to compete.
gradys|15 days ago
See Meta’s Galactica project for an example of what Google was afraid would happen: https://www.technologyreview.com/2022/11/18/1063487/meta-lar...
fennecbutt|15 days ago
> Many products we use every day start out unsafe and dangerous during the early stages. Why would this be any different?
And why allow the paper to be published?
tsunamifury|15 days ago
This is all well known history.