item 42751840

davmar | 1 year ago

I think this type of interaction is the future in lots of areas. I can imagine we replace APIs completely with a single endpoint that you hit with a description of what you want back. For example, hit 'news.ycombinator.com/api' with "give me all the highest rated submissions over the past week about LLMs"; a server-side LLM translates that to SQL, executes the query, and returns the results.

This approach is broadly applicable to lots of domains, just like FFmpeg. Very cool to see things moving in this direction.
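The flow described above can be sketched in a few lines. This is a hypothetical illustration, not the HN API: `translate_to_sql` stands in for the server-side LLM call (here stubbed with a canned mapping so the example runs), and the schema and data are invented. A real deployment would also need much stronger guardrails than the read-only check shown.

```python
import sqlite3

def translate_to_sql(description: str) -> str:
    # Stand-in for an LLM call that turns a natural-language request
    # into SQL. Stubbed with a canned mapping for illustration.
    canned = {
        "highest rated submissions over the past week about LLMs":
            "SELECT title, score FROM submissions "
            "WHERE title LIKE '%LLM%' AND age_days <= 7 "
            "ORDER BY score DESC",
    }
    return canned[description]

def query_endpoint(conn, description: str):
    sql = translate_to_sql(description)
    # Minimal guardrail: only execute read-only statements, since the
    # SQL comes from a model rather than trusted code.
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("only SELECT queries are permitted")
    return conn.execute(sql).fetchall()

# Demo with an in-memory database standing in for the site's data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE submissions (title TEXT, score INT, age_days INT)")
conn.executemany(
    "INSERT INTO submissions VALUES (?, ?, ?)",
    [("Show HN: LLM-driven ffmpeg", 312, 2),
     ("LLM agents survey", 120, 5),
     ("Unrelated story", 500, 1),
     ("Old LLM post", 900, 30)],
)

rows = query_endpoint(
    conn, "highest rated submissions over the past week about LLMs")
```

The interesting design question is exactly the one raised downthread: whether the generated SQL is validated against a schema the caller supplies, or the model is trusted to produce it unsupervised.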

sitkack | 1 year ago

Do you envision the LLMs creating a protocol? Would the caller supply the schema for the response?

andai | 1 year ago

I mentioned here recently that I let LLMs design the APIs they are going to use. I got quite a negative response to that, which surprised me.

halJordan | 1 year ago

The big protocol doing this is called the "Model Context Protocol", and it should have been a widely read and discussed post, except HN has taken a broadly anti-AI stance.

varispeed | 1 year ago

Imagine that every API will be behind a government gateway, checking all the queries before passing them on to the real API and then checking its replies.

mochajocha | 1 year ago

Except you don't need an LLM to do any of this, and doing it without one is computationally cheaper. If you don't know what results you want, you should figure that out first, instead of asking a Markov chain to do it.

tomrod | 1 year ago

I believe this approach is destined for a lot of disappointment. LLMs enable a LOT of entry- and mid-level performance, quickly. Rightfully, you and I worry about the edge cases and bugs. But people will trend toward things that let them work faster.