top | item 41032689

guywald | 1 year ago

Reasonable question. BTW, agents were part of this, but I removed them temporarily. There is somewhat agentic behavior in magic-cli, which you can take a look at.

My main motivation was a gap in the Rust ecosystem for this, as well as a desire to have reasonable abstractions for model alignment, agents and structured response generation with error correction.

In addition, Ollama is a first-class citizen so local LLMs are supported (it calls the locally hosted APIs which Ollama exposes).

And as a last point, it’s just a fun project to hack on. If you have suggestions for similar abstractions I missed, please let me know!

wokwokwok | 1 year ago

If you want feedback…

I’m not sure it’s a good abstraction if it generates a prompt.

Generating good prompts is a Very Hard Problem, and machine-generated prompts are almost always worse than hand-crafted ones.

I think if you’re serious, you should look at how you can build these systems so the user can use them with entirely hand-crafted prompts.

Look at your library from that perspective; if the “generates prompt” part doesn’t exist, what parts are still left?

For example, imagine an agent sandbox where the agent has a set of “tools” like web, command line, code editor and has to pick between tools and craft structured arguments to invoke the various tools.

Given that a) the prompts have to be hand-crafted with tweaks per LLM target, b) the set of tools is entirely configurable by the library user, and c) at runtime you can pick the set of available tools and the LLM to use… that’s an abstraction worth using.

…but it’s hard.
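To make the sandbox idea concrete, here’s a rough Rust sketch of that shape: user-defined tools, a hand-crafted prompt template the library only fills in (never generates), and runtime dispatch. All names and signatures here are hypothetical, not from magic-cli or any existing library.

```rust
use std::collections::HashMap;

/// A tool the agent can invoke with structured (here: string) arguments.
/// In a real library the args would be a typed/JSON schema per tool.
trait Tool {
    fn name(&self) -> &str;
    fn description(&self) -> &str;
    fn invoke(&self, args: &str) -> Result<String, String>;
}

/// Example tool; the invocation is stubbed out for illustration.
struct Shell;

impl Tool for Shell {
    fn name(&self) -> &str { "shell" }
    fn description(&self) -> &str { "Run a command line program" }
    fn invoke(&self, args: &str) -> Result<String, String> {
        // A real implementation would spawn a process here.
        Ok(format!("ran: {args}"))
    }
}

/// The user supplies the prompt template per LLM target; the library only
/// wires the tool descriptions into it, it never writes the prompt itself.
struct Agent {
    tools: HashMap<String, Box<dyn Tool>>,
    prompt_template: String, // hand-crafted by the user
}

impl Agent {
    /// Substitute the tool list into the user's template.
    fn render_prompt(&self) -> String {
        let tool_list: Vec<String> = self
            .tools
            .values()
            .map(|t| format!("- {}: {}", t.name(), t.description()))
            .collect();
        self.prompt_template.replace("{tools}", &tool_list.join("\n"))
    }

    /// Route the LLM's chosen tool + arguments to the matching implementation.
    fn dispatch(&self, tool: &str, args: &str) -> Result<String, String> {
        self.tools
            .get(tool)
            .ok_or_else(|| format!("unknown tool: {tool}"))?
            .invoke(args)
    }
}
```

The point of the shape: if you delete the “generate a prompt” step, the template rendering, tool registry, and dispatch are still doing real work.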

Some other ideas:

- Agent back-off retry for API outages.
- Agents voting on the best solution.
- An agent that checks the output of another agent, where the library automatically generates a new response if the overseer agent rejects the first one.
- An agent that generates code, where the library parses and executes that code.
- Agents with different system prompts, like “civ5 advisors”, that generate suggestions for solving a problem in different ways.
- Multiple API endpoints to distribute requests across.
- “High and low” agents, where an agent can ask for help from a more powerful LLM if it gets stuck (e.g. for coding, if the generated code fails too many times).

Not: “literally anything” -> library generates terrible prompt -> returns response from API.
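The overseer idea from the list above reduces to a small control loop: generate, review, and regenerate with the rejection feedback until acceptance or a retry budget runs out. A minimal Rust sketch, with entirely hypothetical names (the closures stand in for real LLM calls):

```rust
/// Result of asking the overseer agent to review a candidate response.
enum Verdict {
    Accept,
    Reject(String), // feedback fed back into the next generation attempt
}

/// Generate a response, let an overseer review it, and retry with the
/// overseer's feedback until acceptance or the attempt budget is spent.
fn generate_with_overseer<G, O>(
    mut generate: G,
    oversee: O,
    max_attempts: usize,
) -> Result<String, String>
where
    G: FnMut(Option<&str>) -> String, // receives overseer feedback, if any
    O: Fn(&str) -> Verdict,
{
    let mut feedback: Option<String> = None;
    for _ in 0..max_attempts {
        let candidate = generate(feedback.as_deref());
        match oversee(&candidate) {
            Verdict::Accept => return Ok(candidate),
            Verdict::Reject(why) => feedback = Some(why),
        }
    }
    Err("overseer rejected all attempts".to_string())
}
```

The same loop shape also covers the back-off-retry and high/low-agent ideas: only the reject branch changes (sleep and retry, or escalate to a stronger model).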

guywald | 1 year ago

Thanks. You can take a look at the alignment module (there’s an example, but it’s not in the README); it implements the “overseer” concept. And the prompts are mostly customizable, except for some hard-coded ones.