item 43461688


cafed00d | 11 months ago

How do you think about core business logic (or at least _significant_ business logic) being embedded within prompts, as here: https://github.com/langmanus/langmanus/blob/main/src/prompts...

Do you think or worry about not being able to test these things? (Or is that just me :))

Details: I acknowledge/understand this comes from a dependency (ReAct agents), not directly from langmanus.

But still, I'm curious what the community here thinks of testability, veracity, potentially conflicting or overlapping instructions across agents, etc., with respect to prompts as sources of logic. I acknowledge it's a general practice with LLMs.
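FWIW, one common answer is to treat prompts like any other logic artifact and pin assertion-style eval cases next to them. A minimal sketch below, with everything hypothetical (the router prompt, the `call_llm` stub, and the cases are mine, not anything langmanus ships); a real harness would call an actual model instead of the stub:

```python
# Minimal prompt-eval sketch: treat the prompt as logic under test.
# PROMPT, call_llm, and CASES are hypothetical illustrations.

PROMPT = (
    "You are a router. Reply with exactly one word: "
    "'code' for programming questions, 'search' otherwise.\n"
    "Question: {question}"
)

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model client, so the test is
    # deterministic and runnable offline.
    return "code" if "Python" in prompt else "search"

def route(question: str) -> str:
    # The "business logic" lives in PROMPT; this wrapper just
    # normalizes the model's reply.
    return call_llm(PROMPT.format(question=question)).strip().lower()

# Eval cases pinned alongside the prompt: an edit to PROMPT that
# changes routing behavior fails CI like any other regression.
CASES = [
    ("How do I sort a list in Python?", "code"),
    ("What is the capital of France?", "search"),
]

for question, expected in CASES:
    assert route(question) == expected, (question, expected)
print("all prompt eval cases passed")
```

It doesn't solve veracity or cross-agent instruction conflicts, but it at least makes prompt edits testable.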


cship2 | 11 months ago

I thought "the logic" was going to be that CPU handles it and inference-related tasks will use the GPU.