top | item 46907897

Calavar | 24 days ago

Sure, maybe it's tricky to coerce an LLM into spitting out a near-verbatim copy of prior data, but that's orthogonal to whether the data to create a near-verbatim copy exists in the model weights.

D-Machine | 24 days ago

Especially since the recalls achieved in the paper are 96% (based on block longest-common-substring approaches), the effort of extraction is utterly irrelevant.
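For context on what a longest-common-substring recall might look like, here is a minimal sketch. This is a hypothetical illustration, not the paper's actual procedure: it assumes recall is defined as the length of the longest verbatim block shared between a source text and a model's output, divided by the source length.

```python
# Hypothetical sketch: measure how much of a source text survives verbatim
# in generated output, using a longest-common-substring (LCS) score.
# This is an assumed metric, not the specific method from the paper.

def longest_common_substring(a: str, b: str) -> int:
    """Length of the longest contiguous substring shared by a and b.

    Standard dynamic programming, O(len(a) * len(b)) time, O(len(b)) space.
    """
    prev = [0] * (len(b) + 1)
    best = 0
    for ch_a in a:
        cur = [0] * (len(b) + 1)
        for j, ch_b in enumerate(b, start=1):
            if ch_a == ch_b:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best


def substring_recall(source: str, generated: str) -> float:
    """Fraction of the source covered by the single longest verbatim block."""
    return longest_common_substring(source, generated) / len(source)


source = "to be or not to be that is the question"
generated = "she said to be or not to be, that is not the question"
print(round(substring_recall(source, generated), 2))
```

A block-based variant would sum several non-overlapping matched blocks rather than using only the single longest one, which pushes the score higher when memorized text resurfaces with small edits in between.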

Paradigma11 | 23 days ago

Like those chimpanzees typing out Shakespeare.