item 47207552

TheJoeMan | 10 hours ago

That first image, “Structure Prompts with XML”, just screams AI-written. The bullet lists don’t line up, the numbering starts at (2), random bolding. Why would anyone trust hallucinated documentation for prompting? At least with AI-generated software documentation, the context is the code itself, being regurgitated into bulleted English. But for instructions on using the LLM itself, it seems pretty lazy not to hand-type the preferred usage and human-learned tips.


rafram | 9 hours ago

No, it’s two screenshots from Anthropic documentation, stitched together: https://platform.claude.com/docs/en/build-with-claude/prompt...

The post even links to that page, although there’s a typo in the link.

glth | 9 hours ago

Author here: I have just fixed the typo. Thank you.

And yes, these are screenshots from Anthropic’s documentation.

dmd | 9 hours ago

They're not even stitched together; there's just no padding between the two images.

Calavar | 10 hours ago

It looks like a screenshot from the Claude desktop app, so I don't think the author is trying to disguise the AI origin of the material.

croes | 8 hours ago

You just hallucinated that the content is AI-generated.

michaelcampbell | 8 hours ago

"This is AI" is the new "This is 'shopped, I can tell by the pixels."

doctorpangloss | 7 hours ago

There must be an OpenClaw YouTube video helping people post to Hacker News, or something, because the front page is overrun with AI slop like this article, which makes no sense anyway. The author literally has no idea what any of this stuff means.