top | item 40395311

xrendan|1 year ago

I'd be interested in knowing if anyone is seriously using the Assistants API. It feels like such a lock-in to OpenAI's platform when you can alternatively just use completions, which are much more easily interchanged.

Nedomas|1 year ago

I do, and I built an Assistants API compat layer for Groq and Anthropic: https://github.com/supercorp-ai/supercompat I'd argue that Assistants API DX > manual completions API.
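For context on the DX claim, here's a rough sketch of the two request shapes (payloads only, no SDK or network calls). The field names follow OpenAI's REST API; the `asst_123`/`thread_456` IDs are made up for illustration:

```python
# Chat Completions: the caller owns the conversation state and must
# resend the full history on every request.
completions_request = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise this document."},
    ],
}

# Assistants API: state lives server-side. You create an assistant and a
# thread once, append messages to the thread, then start a run.
create_message = {
    "thread_id": "thread_456",
    "role": "user",
    "content": "Summarise this document.",
}
create_run = {"thread_id": "thread_456", "assistant_id": "asst_123"}

# The DX win: no manual history management. The lock-in: threads, runs,
# and file stores only exist inside OpenAI's platform, which is what a
# compat layer has to re-implement on other backends.
```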

tomrod|1 year ago

Aye, but your FinOps will be complaining even with simple use.

brianjking|1 year ago

Are you using Assistants API v2 with streaming?

phh|1 year ago

I've indeed refused to work with some providers offering only a chat interface and not a completion interface, because it made the communication "less natural" to the model (like adding new system messages in between for function calling on models which don't officially support it, or adding categories other than system/user/assistant).
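A minimal sketch of the workaround being described: faking function calling on a chat-only model by splicing an extra "system" turn into the history. The tool name and messages here are made up for illustration:

```python
history = [
    {"role": "system", "content": "You can call get_weather(city)."},
    {"role": "user", "content": "What's the weather in Paris?"},
    {"role": "assistant", "content": 'CALL get_weather("Paris")'},
    # Injected turn: the tool result is smuggled in as a second system
    # message, a role sequence the model was never trained on.
    {"role": "system", "content": 'get_weather result: {"temp_c": 18}'},
]

def roles(msgs):
    return [m["role"] for m in msgs]

# system/user/assistant/system is the "unnatural" shape that a raw
# completion interface (a single prompt string you fully control)
# would let you avoid entirely.
print(roles(history))  # → ['system', 'user', 'assistant', 'system']
```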

metaskills|1 year ago

Great points. Don't even get me started on how function calling in other LLMs costs me tokens, something OpenAI provides OOTB. I'm also not a big fan of OpenAI's lock-in. Right now I'm on a huge Claude 3 Haiku kick. That said, OpenAI does seem to get the APIs right, and my hunch is the new Assistants API is going to potentially disrupt things again. Time will tell.

BoorishBears|1 year ago

I'm not sure you're talking about the same thing: OpenAI specifically has an "Assistants API" that manages long-term memory and tool usage for the consumer: https://platform.openai.com/assistants

I'd guesstimate 99% of people using LLMs are using instruct-based message interfaces with some variation of system/user/assistant. The top models mostly only come as completion models, and even Anthropic has switched to a message-based API.
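To illustrate the shift mentioned above, here are the two Anthropic request shapes side by side (payloads only, no SDK or network; model names are illustrative). The legacy Text Completions style encodes turns by string convention, while the current Messages style uses the same system/user/assistant structure as OpenAI's chat format:

```python
# Legacy Text Completions style: one prompt string, turns marked by
# the "\n\nHuman:" / "\n\nAssistant:" convention.
legacy_completion = {
    "model": "claude-2",
    "prompt": "\n\nHuman: What is 2+2?\n\nAssistant:",
}

# Current Messages style: structured turns, system prompt as a
# top-level field.
messages_request = {
    "model": "claude-3-haiku-20240307",
    "system": "You are a terse calculator.",
    "messages": [{"role": "user", "content": "What is 2+2?"}],
}
```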

j45|1 year ago

I've used it, and in some cases it's taken days or weeks off development time, getting me to testing the market sooner.

In some cases the lock-in is what it is for now, because a particular model really is that far ahead, or is staying ahead.

It doesn't mean other options won't become available, but it does matter to match your actions to your actual needs.

Getting something working consistently, for example, might be the first goal, with learning to implement it across multiple models secondary. In some cases, the later other models are explored, the more likely that ordering becomes.

It should be possible to tell pretty quickly whether something works in the leading model, how others compare to it, and how to track the rate of change between them.

oddthink|1 year ago

I know at least one team at work is using the Assistants API, and I'm talking with another team that is leaning pretty heavily towards using it over building a custom RAG solution themselves, or even over other in-house frameworks.

stavros|1 year ago

I use it almost exclusively (I've even developed a Python library for it, https://github.com/skorokithakis/ez-openai), because it does RAG and function calling out of the box. It's pretty convenient, even if OpenAI's APIs are generally a trash fire.
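For anyone curious what "function calling out of the box" means at the wire level, this is the JSON-schema tool definition that OpenAI's chat and Assistants APIs both accept (and that wrapper libraries like the one linked ultimately send). The `search_docs` function is hypothetical:

```python
# Tool definition in OpenAI's function-calling schema; the model
# responds with the function name and JSON arguments rather than
# free-form text when it decides to call the tool.
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_docs",
            "description": "Search the documentation for a query.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Search terms",
                    },
                },
                "required": ["query"],
            },
        },
    }
]
```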