weijiacheng | 2 years ago
I'm sure someone sufficiently determined, and good at both prompt engineering and integrating LLMs into a larger toolset, could come up with something even better. I'm personally very skeptical of LLMs as a technology, but even I have to admit that this was a pretty ideal and unobjectionable use of them.
That said, fun as the experiment was, I later found it easier (and less wasteful of natural resources) to do the same thing with a bit of custom markup and a search-and-replace script.
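The comment doesn't describe the markup or the script, so here's a minimal sketch of the general idea under assumed conventions: terms wrapped in a hypothetical `{{...}}` marker get expanded from a glossary via a regex substitution.

```python
import re

# Hypothetical glossary; the markers and entries are illustrative, not the
# commenter's actual scheme.
GLOSSARY = {
    "llm": "large language model (LLM)",
    "ppl": "perplexity",
}

def expand(text: str) -> str:
    # Replace each {{key}} marker with its glossary entry; unknown keys
    # are left untouched so they stand out during review.
    def sub(m: re.Match) -> str:
        key = m.group(1).strip().lower()
        return GLOSSARY.get(key, m.group(0))
    return re.sub(r"\{\{(.*?)\}\}", sub, text)

print(expand("A {{llm}} can estimate {{ppl}}; {{unknown}} stays as-is."))
```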
duskwuff | 2 years ago
The most natural application of a language model in proofreading is to compute perplexity across the text; if all goes well, errors should be detectable as points of unusually high perplexity. (In principle, this should even be able to spot otherwise undetectable errors like missing words.)
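To make the idea concrete, here's a toy sketch of surprisal-based flagging. A real system would use an actual language model; this stands in a word-bigram model with add-one smoothing (all corpus text and thresholds are illustrative assumptions), but the principle is the same: score each token by -log P(token | context) and flag unusually high values, such as at the site of a missing word.

```python
import math
from collections import Counter

# Toy training corpus; a real proofreader would use a pretrained LM instead.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .").split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = len(unigrams)

def surprisal(prev: str, word: str) -> float:
    # -log2 P(word | prev), add-one smoothed so unseen bigrams score high
    p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
    return -math.log2(p)

def flag_anomalies(tokens):
    # Score every token given its predecessor (the first token has no
    # context and is skipped); flag scores above the mean as suspicious.
    scores = [surprisal(p, w) for p, w in zip(tokens, tokens[1:])]
    threshold = sum(scores) / len(scores)
    return [(w, round(s, 2), s > threshold)
            for w, s in zip(tokens[1:], scores)]

# "the cat sat on mat" is missing "the" before "mat"; the unseen bigram
# (on, mat) produces the highest surprisal in the sentence.
for word, score, flagged in flag_anomalies("the cat sat on mat .".split()):
    print(word, score, "<-- check" if flagged else "")
```

With a real LM the same per-token surprisal computation can catch errors a dictionary-based checker never sees, like the missing-word case above.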
weijiacheng | 2 years ago