
dcolkitt | 2 years ago

I'd also add that almost all standardized tests are designed around introductory material taken by millions of people. That kind of information is likely to be heavily represented in the training corpus. Whereas most jobs require highly specialized domain knowledge that's probably not well represented in the corpus, and probably too expansive to fit into the context window.

Therefore standardized tests are probably "easy mode" for GPT, and we shouldn't over-generalize its performance there to its ability to actually add economic value in economically useful jobs. Fine-tuning is maybe a possibility, but it's expensive and fragile, and I don't think it's likely that every single job is going to get a fine-tuned version of GPT.

kolbe|2 years ago

To add further, these parlor tricks are nothing new. Watson won Jeopardy in 2011 and never produced anything useful. Doing well on the SAT is just another sleight-of-hand trick to distract us from the fact that it doesn't really do anything beyond aggregate online information.

WalterSear|2 years ago

The issue at hand is that a huge number of people make a living by aggregating online information. They might convey this to others via speech, but the 'human touch' isn't always adding anything to the interaction.

Tostino|2 years ago

From what I've gathered, fine tuning should be used to train the model on a task, such as: "the user asks a question, please provide an answer or follow up with more questions for the user if there are unfamiliar concepts."

Fine tuning should not be used to attempt to impart knowledge that didn't exist in the original training set, as it is just the wrong tool for the job.

Knowledge graphs and vector similarity search seem like the way forward for building a corpus of information that we can search and include within the context window for the specific question a user is asking without changing the model at all. It can also allow keeping only relevant information within the context window when the user wants to change the immediate task/goal.
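The retrieval idea above can be sketched in a few lines. This is a minimal, self-contained illustration (the toy bag-of-words `embed()` and the prompt wording are stand-ins, not any specific library's API; real systems use learned embeddings and a vector database):

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a learned model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, corpus, k=2):
    # Rank documents by similarity to the question; keep the top k.
    q = embed(question)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def build_prompt(question, corpus):
    # Only the retrieved snippets go into the context window;
    # the model itself is never changed.
    context = "\n".join(f"- {s}" for s in retrieve(question, corpus))
    return f"Use only the context below to answer.\nContext:\n{context}\nQuestion: {question}"

corpus = [
    "The billing API rate limit is 100 requests per minute.",
    "Invoices are generated on the first day of each month.",
    "The HR portal resets passwords every 90 days.",
]
print(build_prompt("What is the billing API rate limit?", corpus))
```

When the user switches tasks, you just retrieve against the new question; nothing about the model has to change, which is the whole appeal over fine-tuning.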

Edit: You could think of it a little bit like the LLM as an analog to the CPU in a Von Neumann architecture and the external knowledge graph or vector database as RAM/Disk. You don't expect the CPU to be able to hold all the context necessary to complete every task your computer does; it just needs enough to store the complete context of the task it is working on right now.

fud101|2 years ago

>From what I've gathered, fine tuning should be used to train the model on a task, such as: "the user asks a question, please provide an answer or follow up with more questions for the user if there are unfamiliar concepts."

That isn't what finetuning usually means in this context. It usually means continuing to train the model, using the existing model's weights as the starting point.

visarga|2 years ago

There can be footguns in the retrieval approach. Yes, you keep the model fixed and only add new data to your index, then let the model query the index. But when the model gets two snippets from different documents, it might combine information across them even when that doesn't make sense. The model lacks context when it just retrieves random things based on search.
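One common mitigation is to tag each retrieved snippet with its source and tell the model not to blend facts across sources. A hypothetical sketch (the prompt wording, the `(source_id, text)` snippet shape, and the example file names are all illustrative assumptions):

```python
def format_snippets(snippets):
    # snippets: list of (source_id, text) pairs as returned by a retriever.
    # Each snippet is labeled so the model can tell documents apart.
    blocks = []
    for source, text in snippets:
        blocks.append(f"[source: {source}]\n{text}")
    return "\n\n".join(blocks)

def build_prompt(question, snippets):
    # Explicitly discourage merging information across unrelated documents.
    context = format_snippets(snippets)
    return (
        "Answer using the sources below. Do not combine facts from "
        "different sources unless they explicitly agree, and cite the "
        "source id for each claim.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

snippets = [
    ("pricing.md", "The Pro plan costs $20 per seat per month."),
    ("changelog.md", "In 2021 the Pro plan was renamed from Plus."),
]
print(build_prompt("How much does the Pro plan cost?", snippets))
```

This doesn't fully solve the problem, since the model can still ignore the instruction, but provenance labels at least give it (and the user) a way to notice when two snippets come from unrelated places.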