colonCapitalDee | 13 days ago
> A common pitfall is for Claude to create skills and fill them up with generated information about how to complete a task. The problem with this is that the generated content is all content that's already inside Claude's probability space. Claude is effectively telling itself information that it already knows!
> Instead, Claude should strive to document in SKILL.md only information that:
> 1. Is outside of Claude's training data (information that Claude had to learn through research, experimentation, or experience)
> 2. Is context specific (something that Claude knows now, but won't know later after its context window is cleared)
> 3. Aligns future Claude with current Claude (information that will guide future Claude in acting how we want it to act)
> Claude should also avoid recording derived data. Lead a horse to water, don't teach it how to drink. If there's an easily available source that will tell Claude all it needs to know, point Claude at that source. If the information Claude needs can be trivially derived from information Claude already knows or has already been provided, don't provide the derived data.
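The guidance above might look like this in practice. A hypothetical SKILL.md fragment, sketched for illustration only (the tool name and file paths are invented, not from the linked repo):

```markdown
<!-- Hypothetical SKILL.md fragment; tool name and paths are invented. -->

## Deploying with fictional-cli

- The staging deploy fails silently if `FOO_TOKEN` is unset; check it first.
  (Learned by experimentation; not in the upstream docs.)
- Release steps are documented in `docs/release.md` in this repo — read that
  file rather than reconstructing the process from memory.
- Always run the smoke tests before tagging a release, even for "trivial"
  changes.
```

Each bullet maps to one of the three rules: the first records something learned by experience, the second points at a source instead of duplicating it, and the third encodes a behavioral preference for future sessions.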
For those interested, the full skill is here: https://github.com/j-r-beckett/SpeedReader/blob/main/.claude...
dimitri-vs | 13 days ago
j45 | 13 days ago
It's fairly common to notice threads like this, where one thing is postulated and then there are comments upon comments of doers showing what they have done.
siva7 | 13 days ago
lkoczorowski | 12 days ago
Claude's training data is the internet, and the internet is full of Express tutorials that use app.use(cors()) with no origin restriction, Stack Overflow answers that store JWTs in localStorage, and so on.
Claude's probability space isn't a clean hierarchy of "best to worst." It's a weighted distribution shaped by frequency in training data.
So even though it "knows" this stuff, it doesn't necessarily know what you want, or what a professional in a production environment would do.
Unless I'm missing something?
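To make the cors() point concrete: the tutorial-default app.use(cors()) allows any origin, whereas production code usually checks an allowlist. A minimal sketch of that check, with Express itself omitted so the logic stands alone (the origin values below are invented for illustration):

```typescript
// Hypothetical allowlist of origins we trust; anything else gets no
// CORS header, so browsers will block cross-origin reads.
const allowedOrigins = new Set<string>([
  "https://app.example.com",
  "https://admin.example.com",
]);

// Returns the Access-Control-Allow-Origin value to send for a given
// request Origin header, or null to send no CORS header at all.
function corsHeaderFor(requestOrigin: string | null): string | null {
  if (requestOrigin !== null && allowedOrigins.has(requestOrigin)) {
    return requestOrigin; // echo back only origins we trust
  }
  return null; // unknown or missing origin: do not opt in
}
```

The tutorial default is equivalent to always returning `"*"` here; the allowlist version is the kind of production judgment the comment says isn't the most probable completion.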
nmilo | 12 days ago