item 33858111 (no title)

mrdrozdov | 3 years ago
This might provide some guidance: http://gltr.io/
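For context, GLTR's core idea is to rank each token of a text by how probable the language model considered it at that position: machine text tends to be built almost entirely from top-ranked tokens, while human text mixes in rarer choices. A minimal toy sketch of that rank check, using a made-up bigram table as a stand-in for a real LM (the table and function names are hypothetical, not GLTR's actual code):

```python
# Toy GLTR-style check: for each token, find its rank in the "model's"
# predicted next-token list. Low ranks everywhere suggest machine text;
# occasional high ranks look more human. The bigram table below is a
# hypothetical stand-in for a real language model's ranked predictions.

BIGRAM_MODEL = {
    "the": ["cat", "dog", "quick", "zebra"],     # ranked by probability
    "cat": ["sat", "ran", "slept", "pondered"],
}

def token_ranks(tokens):
    """Rank of each token under the model's prediction for its context."""
    ranks = []
    for prev, cur in zip(tokens, tokens[1:]):
        preds = BIGRAM_MODEL.get(prev, [])
        # Unknown tokens get the worst rank (beyond the prediction list).
        ranks.append(preds.index(cur) if cur in preds else len(preds))
    return ranks

# Greedy, "machine-like" word choices produce all-zero ranks;
# a rarer word choice shows up as a high rank, the signal GLTR visualizes.
print(token_ranks(["the", "cat", "sat"]))       # → [0, 0]
print(token_ranks(["the", "cat", "pondered"]))  # → [0, 3]
```

The real tool does this with GPT-2's full softmax distribution and colors tokens by rank bucket, but the detection signal is the same rank statistic.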
Der_Einzige|3 years ago
This and related techniques are trivially foolable by fine-tuning the model.
They're also trivially foolable by using sampling techniques or settings which encourage the model to generate rare words a lot.
Also foolable with filter-assisted decoding: https://paperswithcode.com/paper/most-language-models-can-be...
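The sampling-settings point can be made concrete with temperature scaling: raising the temperature flattens the model's next-token distribution, shifting probability mass onto rare tokens and washing out the low-rank signature that rank-based detectors look for. A self-contained sketch on a toy distribution (the numbers are illustrative, not from any real model):

```python
import math

def apply_temperature(probs, temperature):
    """Rescale a probability distribution by a sampling temperature.

    T > 1 flattens the distribution (boosting rare tokens);
    T < 1 sharpens it (concentrating mass on the top token).
    """
    scaled = [math.log(p) / temperature for p in probs]
    m = max(scaled)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

# Toy next-token distribution: one common token, several rare ones.
probs = [0.90, 0.04, 0.03, 0.02, 0.01]

sharp = apply_temperature(probs, 0.7)  # detector-friendly: top token dominates
flat  = apply_temperature(probs, 2.0)  # rare-token mass grows substantially

print(round(sum(flat[1:]), 3))   # rare mass well above the original 0.10
print(round(sum(sharp[1:]), 3))  # rare mass well below the original 0.10
```

High-temperature (or top-k/typical) sampling like this makes generated text statistically "rarer" token by token, which is exactly why rank- and probability-based detectors are easy to evade by changing decoder settings alone.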