The problem with bouncing ideas off of AI is that you still need to know enough to know when something is likely a hallucination. Because unless you're double-checking it with some kind of regular cadence, you're probably accepting fiction as fact. It's really easy to just trust everything these chatbots output because of the style of communication. I'll be the first to admit I fall for this trap all the time.
epolanski|11 days ago
For what it's worth, I have two different skills in Claude Code that act as two reviewers with distinct personalities.
Every plan I write, I have them review it, critique it, and find edge cases.
I don't think I've seen hallucinations in the last few Opus versions at least.
The feedback is useful four times out of five, very useful actually. The remaining one in five is not very valuable, or wrong (but that requires the two different skills to agree).
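A setup like this can be sketched as a pair of skill files. The path, name, and wording below are hypothetical, assuming the Claude Code convention of a `SKILL.md` with YAML frontmatter under `.claude/skills/`:

```markdown
<!-- .claude/skills/skeptical-reviewer/SKILL.md — hypothetical example, not the commenter's actual setup -->
---
name: skeptical-reviewer
description: Reviews plans with a skeptical eye, hunting for edge cases and unsupported claims.
---

You are a skeptical plan reviewer. Given a plan:

- Flag every claim that cannot be verified from the codebase or docs.
- List concrete edge cases the plan does not handle.
- Disagree by default; only approve points you can justify.
```

A second skill would mirror this with a different personality (say, a pragmatist focused on scope and simplicity), and disagreements between the two act as the filter the commenter describes.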