That first image, “Structure Prompts with XML”, just screams AI-written. The bullet lists don’t line up, the numbering starts at (2), and the bolding is random. Why would anyone trust hallucinated documentation for prompting? At least with AI-generated software documentation, the context is the code itself, regurgitated into bulleted English. But for instructions on using the LLM itself, it seems pretty lazy not to hand-type the preferred usage and human-learned tips.
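For what it’s worth, the technique itself takes two minutes to demonstrate by hand. A minimal Python sketch of XML-structured prompting (the xml_section helper and the tag names are my own illustration, not from Anthropic’s docs):

    def xml_section(tag: str, body: str) -> str:
        # Wrap one prompt section in an XML-style tag so the model
        # can tell instructions, input data, and examples apart.
        return f"<{tag}>\n{body}\n</{tag}>"

    report_text = "Q3 revenue was $4.2M, up 12% year over year."  # placeholder input

    prompt = "\n\n".join([
        xml_section("instructions", "Summarize the report in three bullet points."),
        xml_section("document", report_text),
        xml_section("example", "- Revenue grew 12% year over year."),
    ])
    print(prompt)

The tags just give the model unambiguous section boundaries, so it can treat each block as instructions, data, or examples without guessing.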
rafram|9 hours ago
The post even links to that page, although there’s a typo in the link.
glth|9 hours ago
And yes, these are screenshots from Anthropic’s documentation.