top | item 37571732

Show HN: Symphony – Use GPT-4 to call functions in sequence

93 points| jrmyphlmn | 2 years ago |symphony.run

Hey HN!

I'm excited to introduce Symphony – a toolkit designed to help developers write functions and let GPT-4 call them in whatever sequence makes the most sense based on the conversation.

I've been quite amazed[1] by GPT-4's recent ability to both detect when a function needs to be called and to respond with JSON that adheres to the function's signature.

Since developers currently append descriptions of functions to API calls[2], I often found myself wishing for a toolkit that would automatically create these descriptions as I added and debugged functions during development.
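For context, these appended descriptions (as of the June 2023 API update) are one JSON Schema per function, sent alongside the messages. A minimal hand-written sketch in TypeScript — `getWeather` is a made-up example, not part of Symphony:

```typescript
// Hand-written description of a single function in the OpenAI
// function-calling format: a name, a description, and a JSON Schema
// for the parameters. "getWeather" is a hypothetical example function.
const functions = [
  {
    name: "getWeather",
    description: "Get the current weather for a city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name, e.g. Berlin" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] },
      },
      required: ["city"],
    },
  },
];

// The descriptions ride along with every chat completion request body:
const requestBody = {
  model: "gpt-4-0613",
  messages: [{ role: "user", content: "What's the weather in Berlin?" }],
  functions,
};
```

Writing and updating these schemas by hand for every function is exactly the chore the toolkit automates.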

You can get started by cloning the repository and adding functions by following the guide at https://symphony.run/docs

As of now, the toolkit supports functions in TypeScript. I'll be adding support for more languages and features based on your feedback :)

[1] - Symphony Showcase: https://symphony.run/showcase

[2] - Function calling and other API updates from OpenAI: https://openai.com/blog/function-calling-and-other-api-updat...

33 comments


ilaksh|2 years ago

Nice. Just to point out, calling a sequence of functions is what GPT-4 does automatically if you keep feeding it the responses and it's clear to it that this is necessary given its instructions.

So the main point of this seems to be extracting the interface from the module and converting it into the OpenAI API call's functions format.

It's a good idea, but personally I'd rather just have an npm package with a function like

  extractFunctions(srcFilename)

which I could then use inside my own project, which already handles the rest of it.

jrmyphlmn|2 years ago

Thank you! You're right; the main focus right now is extraction.

One aspect I'm excited about is the possibility of rendering the JSON outputs from these function calls into UI components, as previewed here: https://symphony.run/showcase. Using a function's type definitions is a nice starting point to embed interfaces into the conversation.

Additionally, I hope to make the toolkit language-agnostic. I'd like to incorporate some of my .py and .rs scripts to make them ready for use as well. Not sure if packaging it as an npm package would go against that objective, but will definitely consider :)

lukasb|2 years ago

Well, this knows when to stop calling GPT-4, which seems non-trivial.
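The stop condition does fall out of the protocol: keep looping while the model's reply contains a `function_call`, execute it, append the result as a `role: "function"` message, and stop when a plain text reply comes back. A minimal sketch with a stubbed model — no real API calls, and the stub and tool names are made up:

```typescript
// Minimal sketch of the function-calling loop with a stubbed model;
// in a real app, `complete` would hit the chat completions endpoint.
type Message =
  | { role: "user" | "assistant" | "function"; content: string; name?: string }
  | { role: "assistant"; content: null; function_call: { name: string; arguments: string } };

const tools: Record<string, (args: any) => string> = {
  getWeather: ({ city }) => JSON.stringify({ city, tempC: 21 }),
};

// Stub: first turn requests a function call, second turn answers in text.
function complete(messages: Message[]): Message {
  const calledAlready = messages.some((m) => m.role === "function");
  return calledAlready
    ? { role: "assistant", content: "It's 21°C in Berlin." }
    : {
        role: "assistant",
        content: null,
        function_call: { name: "getWeather", arguments: '{"city":"Berlin"}' },
      };
}

function run(userPrompt: string): string {
  const messages: Message[] = [{ role: "user", content: userPrompt }];
  while (true) {
    const reply = complete(messages);
    messages.push(reply);
    if (!("function_call" in reply)) return reply.content as string; // plain text => done
    const { name, arguments: rawArgs } = reply.function_call;
    const result = tools[name](JSON.parse(rawArgs));
    messages.push({ role: "function", name, content: result }); // feed result back
  }
}
```

The non-trivial part in practice is guarding this loop against malformed arguments and runaway call chains.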

andrewguenther|2 years ago

Yep. I built this for Python recently and it uses introspection to automatically build this same info. It's maybe 100 lines in total? The OpenAI API is pretty amazing.

collinc777|2 years ago

Love this! I’m an fp-ts and xstate fan and am happy to see them in this use case. My intuition tells me that state machines with LLM-directed function calling are going to be a huge unlock. One thing I’m curious about is narrowing the scope of accessible functions based on a state machine that is designed to match the business domain. This might involve machine-to-machine communication, which I know XState supports.

jrmyphlmn|2 years ago

Thanks!

> state machines with LLM-directed function calling are going to be a huge unlock

This was my intuition as well; glad that resonates with you :)

> One thing I’m curious about is narrowing the scope of accessible functions based on a state machine that is designed to match the business domain.

This is an interesting question, I can definitely see how state machines can help with narrowing the scope of accessible functions.
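For illustration, the narrowing idea can be sketched without any state-machine library: each state carries its own whitelist of functions, and only that subset gets sent with the request. The state and function names below are hypothetical:

```typescript
// Hypothetical sketch: each business-domain state exposes only a subset
// of functions, so the model is never offered an out-of-scope call.
type State = "browsing" | "checkout";

const machine: Record<State, { functions: string[]; on: Record<string, State> }> = {
  browsing: {
    functions: ["searchProducts", "viewProduct"],
    on: { ADD_TO_CART: "checkout" },
  },
  checkout: {
    functions: ["applyCoupon", "payOrder"],
    on: { CANCEL: "browsing" },
  },
};

// Only these descriptions would be attached to the next API request.
function allowedFunctions(state: State): string[] {
  return machine[state].functions;
}

function transition(state: State, event: string): State {
  return machine[state].on[event] ?? state; // unknown events are no-ops
}
```

With XState specifically, the whitelist could live in each state node's metadata instead of a hand-rolled object.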

justanotheratom|2 years ago

I tried TypeChat for my use case and ended up defining functions as TypeScript data types. This approach sounds much better and leverages the newer OpenAI function calling, which I'd expect to be more reliable. Thanks for creating and sharing.

https://microsoft.github.io/TypeChat/

jrmyphlmn|2 years ago

Thanks, excited to hear what you think after trying it out :)

swozey|2 years ago

Is there still an absolutely pathetic number of GPT-4 calls allowed per day? I pay the $20 for ChatGPT, but I never pick GPT-4, because ten minutes into an AI-directed conversation I'll get the "sorry ur out of gpt4 today" message. The only time I end up hitting that limit is when I don't even realize it's using it. I have no idea what the difference actually is, because the limit is too low for me to even consider relying on it over 3.5.

ilaksh|2 years ago

Try using the API. There are multiple open source ChatGPT clones or terminal clients. The API limits are not a problem for an individual.

nprateem|2 years ago

No, it's been OK for the last month or two.

greggh|2 years ago

First thing I thought when reading the title was "some AI for Symphony PHP, interesting". I know it's hard to name things at this point; these clashes are just going to happen with so many projects over the last 50 years. But at some point I think clashing with a project in such heavy use is just a detriment to the brand.

michaelmior|2 years ago

It's worth noting that the PHP framework is spelled "Symfony."

mkmk|2 years ago

If I'm understanding correctly, this makes all of the functions available to GPT-4 at once and then GPT-4 decides which one to use, right? What are the limits on the number and length of functions, and is there any way to choose only a subset of the functions to share for a given user query?

NickNaraghi|2 years ago

Pretty cool that this is a recreation of expert systems, which was a dominant approach to building AIs for decades :)

jrmyphlmn|2 years ago

You're right!

IIRC, only the function signatures (or descriptions) are counted as part of the context window, so you can add as many as you like until you exceed that limit. Since the body of a function isn't sent, a function can be any length.

> is there any way to choose only a subset of the functions to share for a given user query?

As of now, no. I can see why this may become a problem soon, since right now all functions are made available to GPT-4, and each call can get expensive pretty quickly if you're sending, say, 50 functions every time.

I'm not sure how to address this yet, but I'd like to think of it as some form of fine-tuning that happens after having a few conversations. Will keep you in the loop!

nurple|2 years ago

Sure, give the LLM a discovery function. If you're trying to figure out what tools to give the LLM based on user input, you're probably squandering the main power of the pattern.
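A sketch of that discovery-function idea: expose a single search function over the function registry, and attach only the matches to the next turn. The registry entries below are made up:

```typescript
// Hypothetical "discovery" tool: instead of sending all 50 function
// descriptions up front, the model calls this one function to search
// the registry, and only the hits are attached to the next request.
const registry = [
  { name: "getWeather", description: "current weather for a city" },
  { name: "bookFlight", description: "book a flight between two airports" },
  { name: "convertCurrency", description: "convert an amount between currencies" },
];

function discoverFunctions(query: string) {
  const q = query.toLowerCase();
  return registry.filter(
    (f) =>
      f.name.toLowerCase().includes(q) ||
      f.description.toLowerCase().includes(q)
  );
}
```

This keeps the decision of what's relevant with the model itself, rather than pre-filtering on user input.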

abrgr|2 years ago

This is really cool. Love to see usage of state machines on the backend!

It's such a good fit for multi-step LLM apps and a really nice abstraction for generic backend flows as well.

mjirv|2 years ago

Nice. We (Delphi[0] - enterprise AI data assistant) do something similar internally to power our app.

We’re all TypeScript under the hood, so I’ll give this a look and see if we can use it.

Symphony wouldn’t support other LLMs currently, right? Only GPT-4?

[0] https://delphihq.com

jrmyphlmn|2 years ago

Glad to hear that!

Right, currently Symphony only supports GPT-4 and GPT-3.5-turbo, since they're the only models with native function-calling support in the API.

ushakov|2 years ago

How is it different from LangChain?