top | item 46390474

pell | 2 months ago

Was there any concern about giving the LLM access to this return data? Reading your article I wondered if there could be an approach that limits the LLM to running the function calls without ever seeing the output itself fully, e.g., only seeing the start of a JSON string with a status like “success” or “not found”. But I guess it would be complicated to have a continuous conversation that way.

aidos | 2 months ago

> No model should ever know Jon Snow’s phone number from a SaaS service, but this approach allows this sort of retrieval.

This reads to me like they think that the response from the tool doesn’t go back to the LLM.

I've not worked with tools myself, but my understanding is that they're a way for the LLM to request additional data from the client. Once the client executes the requested function, the response data goes back to the LLM to be processed into a final response.
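That round trip can be sketched roughly as below. This is a hypothetical, self-contained illustration, not any specific vendor's API: `model_step`, `lookup_contact`, and the message shapes are all invented stand-ins.

```python
# Hypothetical sketch of the usual tool-calling round trip:
# model requests a tool call -> client runs it -> result goes back
# into the model's context -> model produces the final answer.

PRIVATE_DB = {"Jon Snow": {"email": "jon@winterfell.example"}}


def lookup_contact(name):
    """Stand-in for a search against private SaaS data."""
    return PRIVATE_DB.get(name, {"email": None})


def model_step(messages):
    """Stand-in for an LLM call: either requests a tool or answers."""
    last = messages[-1]
    if last["role"] == "user":
        # Model decides it needs private data and requests a tool call.
        return {"type": "tool_call", "name": "lookup_contact",
                "arguments": {"name": "Jon Snow"}}
    # On the second pass the tool result is in context, so answer from it.
    return {"type": "text", "content": f"Email: {last['content']['email']}"}


def run(user_question):
    messages = [{"role": "user", "content": user_question}]
    step = model_step(messages)
    if step["type"] == "tool_call":
        result = lookup_contact(**step["arguments"])
        # The key point: the tool output is appended to the conversation,
        # so the model *does* see the private data.
        messages.append({"role": "tool", "content": result})
        step = model_step(messages)
    return step["content"]
```

The point of the sketch is the `messages.append(...)` line: in the standard pattern, the retrieved data re-enters the model's context.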

simonw | 2 months ago

I was confused by that too. I think I've figured it out.

They're saying that a public LLM won't know the email address of Jon Snow, but they still want to be able to answer questions about their private SaaS data which DOES know that.

Then they describe building a typical tool-based LLM system where the model can run searches against private data and round-trip the results through the model to generate chat responses.

They're relying on the AI labs to keep their promises about not training in data from paying API customers. I think that's a safe bet, personally.

timrogers | 2 months ago

That would be the normal pattern. But you could certainly stop after the LLM picks the tool and provides the arguments, and not present the result back to the model.
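That variant (and the one pell suggests above) might look something like this hypothetical sketch, where the client runs the function but only a status string, never the payload, goes back into the model's context. The names here are invented for illustration:

```python
# Hypothetical sketch: execute the tool call the model requested, but
# return only a "success" / "not found" status to the model. The real
# result would go to the end user out of band, never into the context.

def handle_tool_call(call, execute):
    """Run the requested function; report a status, not the data.

    call:    dict like {"name": ..., "arguments": {...}} from the model
    execute: callable(name, arguments) -> result or None
    """
    result = execute(call["name"], call["arguments"])
    status = "success" if result is not None else "not found"
    # Only this status message is appended to the model's context.
    tool_message = {"role": "tool", "content": {"status": status}}
    return tool_message, result
```

As pell notes, the trade-off is that the model can't reason about or summarize the result in a continuing conversation, since it never sees it.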