top | item 46295881

pacjam | 2 months ago

IMO context poisoning is only fatal when you can't see what's going on (e.g. black box memory systems like ChatGPT memory). The memory system used in the OP is fully white box — you can see every raw LLM request (and see exactly how the memory influenced the final prompt payload).
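To make the "white box" point concrete, here is a minimal sketch of what full visibility could look like: memory entries are plain data, and the exact prompt payload sent to the model can be inspected before the request goes out. The `MemoryStore` class and `build_payload` function are illustrative names, not from the system discussed in the thread.

```python
# Hypothetical sketch of a white-box memory layer: every memory entry
# that reaches the model is visible in the final prompt payload.

class MemoryStore:
    def __init__(self):
        self.entries = []  # plain strings; no hidden state

    def add(self, text):
        self.entries.append(text)

def build_payload(store, user_message):
    # Memory is injected as an inspectable block, so you can see
    # exactly how stored context shapes the request sent to the LLM.
    memory_block = "\n".join(f"- {e}" for e in store.entries)
    return [
        {"role": "system", "content": f"Known context:\n{memory_block}"},
        {"role": "user", "content": user_message},
    ]

store = MemoryStore()
store.add("User prefers concise answers")
payload = build_payload(store, "Summarize the report")
print(payload[0]["content"])  # the raw, fully visible memory injection
```

Because the payload is just a list of dicts, a poisoned memory entry is detectable by reading the request before (or after) it is sent — the opposite of a black-box memory system.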


handfuloflight | 2 months ago

That's significant; you can then improve it in your own environment.

pacjam | 2 months ago

Yeah exactly — it's all just tokens that you have full control over (you can run CRUD operations on them). No hidden prompts / hidden memory.
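A quick sketch of what "CRUD on memory" might mean in practice, assuming memory is stored as plain keyed records (the dict-based store and function names here are assumptions for illustration, not the thread's actual implementation):

```python
# Illustrative: memory as plain records you can create, read,
# update, and delete — nothing the user can't touch.

memory = {}  # key -> text; the entire memory state, nothing hidden

def create(mem, key, text):
    mem[key] = text  # Create: add a new memory entry

def read(mem, key):
    return mem.get(key)  # Read: inspect an entry (None if absent)

def update(mem, key, text):
    mem[key] = text  # Update: overwrite a bad or stale entry

def delete(mem, key):
    mem.pop(key, None)  # Delete: remove a poisoned entry entirely

create(memory, "pref", "likes terse replies")
update(memory, "pref", "likes detailed replies")
before_delete = read(memory, "pref")
delete(memory, "pref")
```

Since every entry is directly readable and deletable, a poisoned memory can be excised rather than worked around — which is the advantage the comment is pointing at.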