dror | 2 years ago
I find that GPT's answers are for the most part more reliable than searches, specifically today's searches. In the last 12 months, search results have become so spammy with AI-generated pages (oh the irony) that it's hard to find reliable answers.
So like search, I look at GPT's answers with a grain of salt and validate them, but these days I use GPT all day every day and search rarely. To be fair, I use it a lot because I have a GPT CLI that works just the way I want it to, since I wrote it :-). https://github.com/drorm/gish
anotherpaulg | 2 years ago
It seems like you've been using workflows similar to what I've been trying for coding with GPT?
https://github.com/paul-gauthier/easy-chat#created-by-chatgp...
dror | 2 years ago
-----
#import ~/work/gish/tasks/coding.txt
Change the following so that it looks for the OpenAI key in the following fashion:
1. env variable
2. os.home()/.openai
3. Throws an exception telling the user to put it in one of the above, and then exits
#diff ~/work/gish/src/LLM.ts
-----
This puts me in vimdiff, comparing the old code with the generated code and letting me review and cherry-pick the changes.
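For reference, the three-step key lookup described in the prompt above might look something like this. This is a hypothetical sketch, not the actual gish code; the function name `findOpenAIKey` is my own invention:

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Look up the OpenAI API key in the order described in the prompt:
// 1. environment variable, 2. ~/.openai file, 3. fail with instructions.
function findOpenAIKey(): string {
  // 1. Environment variable
  const envKey = process.env.OPENAI_API_KEY;
  if (envKey) {
    return envKey.trim();
  }

  // 2. A .openai file in the user's home directory
  const keyFile = path.join(os.homedir(), ".openai");
  if (fs.existsSync(keyFile)) {
    return fs.readFileSync(keyFile, "utf8").trim();
  }

  // 3. Neither found: tell the user where to put the key
  throw new Error(
    "OpenAI key not found. Set the OPENAI_API_KEY environment variable " +
      "or put the key in ~/.openai"
  );
}
```

The nice part of the vimdiff workflow is that a change like this never lands blind: you see the generated version side by side with the original and take only the hunks you want.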
tornato7 | 2 years ago
It's actually $0.002/1k, FYI
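At that rate the arithmetic is simple: cost scales linearly with token count. A quick sanity-check helper (hypothetical, not part of gish):

```typescript
// $0.002 per 1,000 tokens, as quoted above.
const PRICE_PER_1K_TOKENS_USD = 0.002;

// Estimated cost in USD for a given token count.
function costUSD(tokens: number): number {
  return (tokens / 1000) * PRICE_PER_1K_TOKENS_USD;
}
```

So a million tokens of usage comes out to about $2, which is why heavy daily use of the API can still be cheap.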
akhilpotla | 2 years ago
Also, I wonder how they decide what code is worth training on. Because a lot of code is written in poor style or carries technical debt, it might be the case that these LLMs in the long run lead to an increase in the technical debt in our society. Plus, eventually, and this might already be happening, the LLMs are going to end up training on their own outputs, so that could lead to self-immolation by the model. I am not certain RLHF completely resolves this issue.
wwweston | 2 years ago
This. The value proposition is very clearly tied to the quality of the training data, and if there's secret sauce for automatically determining information quality, that's obviously huge. Google was built in part on such insights. I suspect they do have something. I'd be utterly astonished if quality sorting were an emergent property of LLMs (especially given that it's iffy in humans).
The problem, of course, is that if they do have a way of privileging data for training, that information is going to be the center of the usual arms race for attention and thinking. It can't be truly public or it's dead.