dominicwhyte | 2 years ago
Then we have an inbox app (also made in Retool) that our support team uses to manually review any submissions where isLikelySpam = true. The <reason> field helps us understand why a submission was flagged.
Our use case is for a form builder (https://fillout.com) but I imagine this type of use case is pretty common for any app that has user-generated content
hereonout2 | 2 years ago
Spam detection is a classic example of a classification problem. I guess I'm trying to gauge whether there's an entire suite of traditional problems that LLMs solve well enough by simply asking a question of the base model. I've found a few areas in my own work where this is the case.
dominicwhyte | 2 years ago
We also have other spam filters that are not LLM-based. One of the main benefits of the LLM-based approach is that it's good at catching people who try to evade detection (e.g. someone purposefully misspelling suspicious words like "pa$$word").
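A rough sketch of what this kind of setup can look like. The prompt wording, field names, and the stubbed model reply below are all hypothetical (Fillout's actual implementation isn't public); the point is just that the model is asked to return a structured verdict with an isLikelySpam flag and a reason, which a review tool can then consume:

```python
import json

# Hypothetical classification prompt; a real system would send this plus
# the submission text to an LLM API and get back the JSON reply.
PROMPT_TEMPLATE = (
    "You are a spam filter for form submissions. Reply with JSON only, "
    'shaped like {{"isLikelySpam": true|false, "reason": "<short explanation>"}}.\n'
    "Submission:\n{submission}"
)

def build_prompt(submission: str) -> str:
    """Fill the submission text into the classification prompt."""
    return PROMPT_TEMPLATE.format(submission=submission)

def parse_verdict(llm_response: str) -> dict:
    """Parse the model's JSON reply into a verdict the inbox app can show."""
    verdict = json.loads(llm_response)
    return {
        "isLikelySpam": bool(verdict["isLikelySpam"]),
        "reason": verdict.get("reason", ""),
    }

# Stubbed model reply, standing in for a real API call. Obfuscations like
# "pa$$word" are exactly what a keyword blocklist misses but an LLM can
# still recognize and explain.
fake_response = '{"isLikelySpam": true, "reason": "asks users to share their pa$$word"}'
print(parse_verdict(fake_response))
```

The reason string is what makes the manual-review step workable: reviewers see why something was flagged instead of just a bare boolean.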