pedro_caetano | 3 months ago
I've been using Anthropic's models with gptel in Emacs for the past few months. It has been amazing for overviews and literature review on topics I am less familiar with.
Surprisingly (to me), even lightly playing with system prompts immediately creates a writing style and voice that matches what _I_ would expect from a flesh-and-blood agent.
We're naturally biased to believe our intuitive 'classifier' is able to spot slop. But perhaps we are only able to spot the typical ChatGPTesque 'voice', and the rest of the slop is left to roam free in the wild.
Perhaps we need some form of double-blind test to get a sense of the false negative rate of this intuitive approach.
chemotaxis | 3 months ago
If you spend days or weeks fine-tuning prompts to strike the right tone, reviewing the output for accuracy, etc., then pretty much by definition you're undermining the economic benefits of slopification. And you might accidentally end up producing content that's actually insightful and useful, in which case, you know... maybe that's fine.
guffins | 3 months ago