Show HN: Symphony – Use GPT-4 to call functions in sequence
93 points | jrmyphlmn | 2 years ago | symphony.run
I'm excited to introduce Symphony – a toolkit designed to help developers write functions and let GPT-4 call them in whatever sequence makes the most sense based on the conversation.
I've been quite amazed[1] by GPT-4's recent ability to both detect when a function needs to be called and to respond with JSON that adheres to the function's signature.
Since developers currently append descriptions of functions to API calls[2], I often found myself wishing for a toolkit that would automatically create these descriptions as I added and debugged functions during development.
You can get started by cloning the repository and adding functions by following the guide at https://symphony.run/docs
As of now, the toolkit supports functions in TypeScript. I'll be adding support for more languages and features based on your feedback :)
[1] - Symphony Showcase: https://symphony.run/showcase
[2] - Function calling and other API updates from OpenAI: https://openai.com/blog/function-calling-and-other-api-updat...
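The function-description format referenced in [2] looks roughly like this (a hand-written sketch of the payload shape; the `getWeather` function and model name are illustrative, and Symphony's point is to generate such descriptions automatically):

```typescript
// A plain TypeScript function we'd like GPT-4 to be able to call.
function getWeather(city: string): string {
  return `Sunny in ${city}`; // stub implementation
}

// The matching description appended to the chat completion request,
// following OpenAI's JSON Schema-based function-calling format.
const getWeatherDescription = {
  name: "getWeather",
  description: "Return the current weather for a city",
  parameters: {
    type: "object",
    properties: {
      city: { type: "string", description: "Name of the city" },
    },
    required: ["city"],
  },
};

// Request body for POST https://api.openai.com/v1/chat/completions
const requestBody = {
  model: "gpt-4-0613",
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  functions: [getWeatherDescription],
};

console.log(requestBody.functions[0].name); // getWeather
```

The model then replies either with normal text or with a `function_call` object naming the function and giving JSON arguments matching the schema.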
ilaksh|2 years ago
So the main point of this seems to be extracting the interface from the module and converting it into the OpenAI API call's functions format.
It's a good idea. But for me I would rather just have an npm package with a function like
which I could then use inside of my own project, which already handles the rest of it.
jrmyphlmn|2 years ago
One aspect I'm excited about is the possibility of rendering the JSON outputs from these function calls into UI components, as previewed here: https://symphony.run/showcase. Using a function's type definitions is a nice starting point to embed interfaces into the conversation.
Additionally, I hope to make the toolkit language-agnostic. I'd like to incorporate some of my .py and .rs scripts to make them ready for use as well. Not sure if packaging it as an npm package would go against that objective, but will definitely consider :)
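The rendering idea above could be sketched like this (purely hypothetical code, not Symphony's API: map each function's JSON output to a renderer keyed by function name):

```typescript
// Hypothetical sketch: take the JSON a function call returned and pick
// a UI component to render it with, based on which function ran.
interface FunctionResult {
  name: string;    // which function produced this result
  output: unknown; // the JSON the function returned
}

// Map function names to renderers (strings stand in for real components).
const renderers: Record<string, (output: unknown) => string> = {
  getWeather: (o) => `<WeatherCard data=${JSON.stringify(o)} />`,
};

// Fall back to raw JSON when no renderer is registered for the function.
function renderResult(r: FunctionResult): string {
  const render = renderers[r.name];
  return render ? render(r.output) : JSON.stringify(r.output);
}

console.log(renderResult({ name: "getWeather", output: { temp: 21 } }));
```

Because the function's type definitions describe the output shape, the same metadata that drives the GPT-4 function descriptions could also drive the choice of component.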
jrmyphlmn|2 years ago
> state machines with llm directed function calling is going to be a huge unlock
This was my intuition as well; glad this resonates with you :)
> One thing I’m curious about is narrowing the scope of accessible functions based on a state machine that is designed to match the business domain.
This is an interesting question, I can definitely see how state machines can help with narrowing the scope of accessible functions.
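One way the idea could look in practice (an illustrative sketch, not part of Symphony; the states and function names are invented): the current state of a business-domain state machine determines which function descriptions are sent with the request.

```typescript
// Hypothetical sketch: narrow the set of functions exposed to the model
// based on the current state of a business-domain state machine.
type State = "browsing" | "checkout";

// Function descriptions in OpenAI's format, keyed by name (schemas abridged).
const allFunctions = {
  searchProducts: { name: "searchProducts", parameters: { type: "object", properties: {} } },
  applyCoupon: { name: "applyCoupon", parameters: { type: "object", properties: {} } },
  chargeCard: { name: "chargeCard", parameters: { type: "object", properties: {} } },
};

// Only expose the functions that make sense in each state.
const functionsByState: Record<State, (keyof typeof allFunctions)[]> = {
  browsing: ["searchProducts"],
  checkout: ["applyCoupon", "chargeCard"],
};

function accessibleFunctions(state: State) {
  return functionsByState[state].map((name) => allFunctions[name]);
}

console.log(accessibleFunctions("browsing").map((f) => f.name));
```

This keeps the model from calling functions that are invalid in the current state, and as a side effect shrinks the per-call prompt.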
justanotheratom|2 years ago
https://microsoft.github.io/TypeChat/
jrmyphlmn|2 years ago
IIRC, only the function signatures (or descriptions) count toward the context window, so you can add as many as you like until you exceed that limit. Since the body of the function itself is not counted, the function can be any length.
> is there any way to choose only a subset of the functions to share for a given user query?
As of now, no. I can see this becoming a problem soon: currently all functions are available to GPT-4 on every call, and costs can add up quickly if you send, say, 50 function descriptions each time.
I'm not sure how to address this yet, but I'd like to think of it as some form of fine-tuning that happens after having a few conversations. Will keep you in the loop!
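One naive interim approach (purely illustrative, not something Symphony does): rank function descriptions by keyword overlap with the user's query and send only the top k.

```typescript
interface FnDescription {
  name: string;
  description: string;
}

// Score each function by how many of the query's words appear in its
// description, then keep the k highest-scoring functions.
function topKFunctions(query: string, fns: FnDescription[], k: number): FnDescription[] {
  const words = new Set(query.toLowerCase().split(/\W+/));
  const score = (fn: FnDescription) =>
    fn.description.toLowerCase().split(/\W+/).filter((w) => words.has(w)).length;
  return [...fns].sort((a, b) => score(b) - score(a)).slice(0, k);
}

const fns = [
  { name: "getWeather", description: "Get the weather forecast for a city" },
  { name: "sendEmail", description: "Send an email to a recipient" },
];
console.log(topKFunctions("what's the weather in Paris", fns, 1)[0].name); // getWeather
```

A keyword filter is crude; embedding-based retrieval over the descriptions would be the natural next step, but even this cuts the per-call prompt size substantially.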
abrgr|2 years ago
It's such a good fit for multi-step LLM apps and a really nice abstraction for generic backend flows as well.
mjirv|2 years ago
We’re all TypeScript under the hood, so I’ll give this a look and see if we can use it.
Symphony wouldn’t support other LLMs currently, right? Only GPT-4?
[0] https://delphihq.com
jrmyphlmn|2 years ago
Right, currently Symphony only supports GPT-4 and GPT-3.5-turbo, since they're the only models with native function-calling support in the API.