
semitones | 7 months ago

Furthermore, it is very rare to have the following kind of text present in the training data: "What is the answer to X?" - "I don't know, I am not sure."

In this situation there very often won't be _any_ answer; plenty of difficult questions go unanswered on the internet. Yet the model probably does not interpret the scenario that way.


philipswood|7 months ago

Has anybody tried what seems obvious?

Have a series of pretraining sessions with training data where specific information is not present, and also train on question/answer pairs that answer "I don't know" for that missing data.

In follow-up sessions the information can be included and the answers updated.

Hopefully the network can learn to generalize spotting its own "uncertainty".
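A minimal sketch of this two-phase curriculum, assuming the simplest possible setup (the function, the fact keys, and the toy data are all hypothetical, not from any real training pipeline):

```python
# Toy sketch of the two-phase idea above: in phase 1 a fact is withheld
# from the corpus and the QA target is the literal "I don't know"; in a
# later phase the fact is added and the QA target is updated.

def build_phase(corpus, available_facts, qa_pairs):
    """Assemble one pretraining phase: raw corpus text plus QA targets.

    A question whose fact key is absent from `available_facts` gets the
    target "I don't know"; once the fact is included, the target becomes
    the real answer.
    """
    qa = []
    for question, fact_key, real_answer in qa_pairs:
        answer = real_answer if fact_key in available_facts else "I don't know"
        qa.append((question, answer))
    return list(corpus), qa

qa_pairs = [("What is the capital of Freedonia?",
             "freedonia_capital", "Freedonia City")]

# Phase 1: the fact is withheld, so the model trains toward "I don't know".
corpus1, qa1 = build_phase(["some unrelated text"],
                           available_facts=set(), qa_pairs=qa_pairs)

# Phase 2: the fact is added to the corpus and the QA label is updated.
corpus2, qa2 = build_phase(["some unrelated text",
                            "The capital of Freedonia is Freedonia City."],
                           available_facts={"freedonia_capital"},
                           qa_pairs=qa_pairs)
```

The hope, as stated above, is that alternating withheld and included facts teaches the network a generalizable "I haven't seen this" signal rather than memorized refusals.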

root_axis|7 months ago

It doesn't seem like that would work since all you're doing is locating "I don't know" in proximity to arbitrary locations in the embedding matrix, not actually with respect to the unbounded set of things that don't exist within it.

taneq|7 months ago

I don’t think this specific approach would work well (you’re training the network to answer ‘dunno’ to those particular questions, not to questions in general that it can’t answer), but I think you’ve got the right general idea.

I’d try adding an output (or some special tokens or whatever) and then training it to predict the training loss on the current sample. Hopefully during inference this output would indicate how out-of-distribution the current inputs are.
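A hedged sketch of that loss-prediction head, with a linear probe standing in for the extra output and a synthetic, deliberately linear "loss" so the fit is exact (all of it illustrative, not a real training setup):

```python
import numpy as np

# Sketch: fit a small auxiliary "loss head" to predict the per-sample
# training loss from hidden states, then read its prediction at
# inference as an out-of-distribution signal. Features and losses here
# are synthetic; a real setup would train the head jointly with the model.

rng = np.random.default_rng(0)

features = rng.normal(size=(256, 8))       # stand-in for hidden states
beta = np.full(8, 0.05)
observed_loss = 0.1 + features @ beta      # constructed per-sample loss

# Fit the loss head by least squares.
X = np.hstack([features, np.ones((256, 1))])
w, *_ = np.linalg.lstsq(X, observed_loss, rcond=None)

def predicted_loss(h):
    """Loss-head output for a hidden state h."""
    return float(np.append(h, 1.0) @ w)

typical = predicted_loss(np.zeros(8))      # input like the training data
outlier = predicted_loss(10 * np.ones(8))  # far outside the training range
```

In this toy, the head predicts a much larger loss for the outlier input, which is exactly the signal the comment hopes to read off at inference time.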

wincy|7 months ago

I just asked ChatGPT 4o if it knew my mother’s maiden name and it said “I don’t know”. Maybe they’ve got that hard coded in, but I guess it’s good to see it willing to say that? Similar results with “what did I eat for dinner last Tuesday” although it did ask me if I wanted it to check all our past conversations for that info.

sitkack|7 months ago

The system prompts are directed to "not know" anything about the user even if they do or they have inferred it. It reduces the spooky factor.

devmor|7 months ago

That’s a really astute observation. It would be interesting if we could find a way to train models to signify when they are “stretching” the vector distance too far from the context window, because the available training data is too sparse or nonexistent.

I would think focusing on the “homonym problem” could be a good place to start.

tdtr|7 months ago

I'm pretty sure the canonical choice is choosing vectors to be anchors - either by kNN distance to other vectors, or by "hand", or even via something like cross entropy - but then that is already in the loss function. Another method would be to create some kind of adversarial setup where the output is "stretched" intentionally and then criticized by another LLM. AFAIK the problem is scale, as manually going through a bunch of vectors just to ground the latent isn't exactly economical. Also people are quite conservative, especially in the big model runs - stuff like Muon wasn't exactly popularized until the new Qwen or Kimi. Obviously this is all speculation for open models; folks with more experience can chime in.
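The kNN-anchor idea mentioned above can be sketched as a simple distance score, assuming synthetic stand-ins for both the anchor embeddings and the queries (nothing here reflects a real model's latent space):

```python
import numpy as np

# Score an embedding by its mean distance to its k nearest "anchor"
# vectors drawn from the training set; a large score suggests the latent
# is being "stretched" away from grounded regions.

rng = np.random.default_rng(1)
anchors = rng.normal(size=(500, 16))        # stand-in training embeddings

def knn_score(query, anchors, k=10):
    dists = np.linalg.norm(anchors - query, axis=1)
    return float(np.sort(dists)[:k].mean())  # mean distance to k nearest

near = knn_score(rng.normal(size=16), anchors)        # typical embedding
far = knn_score(rng.normal(size=16) + 8.0, anchors)   # shifted far away
```

The scale problem the comment raises shows up immediately: a real run would need this over billions of anchor vectors, which is why it tends to stay a research trick rather than a production guardrail.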

delusional|7 months ago

There is, to my knowledge, no vector signifying "truth" and therefore no vector to measure the distance from. You cannot get a "truthiness" measure out of these models, because they don't have the concept of truth. They use "likeliness" as a proxy for "truth".

You could decide that the text is "too unlikely"; the problem there is that you'll quickly discover that most human sentences are actually pretty unlikely.
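That failure mode is easy to demonstrate with a toy likelihood filter; the unigram "model", its probabilities, and the threshold below are all made up for illustration:

```python
import math

# A naive "too unlikely" filter on average per-token log-probability
# misfires on perfectly ordinary sentences that happen to use rare words.

unigram = {"the": 0.2, "cat": 0.01, "sat": 0.005,
           "axolotl": 0.0001, "regenerates": 0.00005}

def avg_logprob(tokens, floor=1e-6):
    # Mean log-probability per token under the toy unigram model.
    return sum(math.log(unigram.get(t, floor)) for t in tokens) / len(tokens)

common = avg_logprob(["the", "cat", "sat"])
rare_but_fine = avg_logprob(["the", "axolotl", "regenerates"])

threshold = -5.0
# Both are fine English, but only the rare-word sentence gets flagged.
flagged = rare_but_fine < threshold
```

Low likelihood here tracks word rarity, not falsehood, which is the commenter's point: unlikely and untrue are different axes.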

littlestymaar|7 months ago

The problem is even harder than you make it look: even if the model finds plenty of “I don't know” answers in its training corpus, that doesn't mean it is the desirable answer to those questions; the model can know the answer even if one particular person on the internet doesn't.

“I don't know” must be derived from the model's knowledge as a whole, not from individual question/answer pairs in training.

simianwords|7 months ago

I don't think this is correct - such training data is usually created at the SFT stage, after unsupervised learning on all available data on the web. The SFT dataset is manually curated, meaning there would be a conscious effort to create more training samples that say "I'm not sure". Same with RLHF.

therein|7 months ago

You mean "I don't think this is automatically correct." Otherwise it very likely is correct. Either way, you're guessing that the manual curation is done in a way that favors including "I don't know" answers. Which it most likely isn't.

astrange|7 months ago

"Rare" doesn't really mean much. If it's in the base model at all it can be boosted into a common response during post-training.