top | item 43459820


tb1989 | 11 months ago

- The official example on GitHub promotes only the Anthropic API. That alone is telling.

- There is no clear explanation of how the system prompt is coupled to the tool calls. Even a mention of open-source models such as Gemma or DeepSeek would be a big improvement.

The official attitude makes it difficult to trust this project.

The point you made is exactly the cunning part: anyone can copy it, but without official support it simply won't work. This is pure community exploitation.


Leynos | 11 months ago

If you want an LLM to use a tool, you just need to implement a parser in your LLM client that extracts the tool call from the LLM's response, then give the LLM a syntax it can use to make the tool call.

For example, in Roo Code:

```
TOOL USE

You have access to a set of tools that are executed upon the user's approval. You can use one tool per message, and will receive the result of that tool use in the user's response. You use tools step-by-step to accomplish a given task, with each tool use informed by the result of the previous tool use.

# Tool Use Formatting

Tool use is formatted using XML-style tags. The tool name is enclosed in opening and closing tags, and each parameter is similarly enclosed within its own set of tags. Here's the structure:

<tool_name>
<parameter1_name>value1</parameter1_name>
<parameter2_name>value2</parameter2_name>
...
</tool_name>

For example:

<read_file>
<path>src/main.js</path>
</read_file>

Always adhere to this format for the tool use to ensure proper parsing and execution.
```
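The parser side of this can be a few lines of regex. Below is a minimal sketch (not Roo Code's actual implementation, which is more robust): `parse_tool_call` is a hypothetical helper that extracts the first XML-style tool call from a model response and returns the tool name plus its parameters.

```python
import re

def parse_tool_call(response):
    """Extract the first XML-style tool call from an LLM response.

    Returns (tool_name, {param: value}), or None if no call is found.
    Hypothetical sketch; a production parser would handle malformed
    tags, multiple calls, and escaping.
    """
    # Match an outer <tool_name> ... </tool_name> pair (backreference
    # \1 ensures the closing tag matches the opening one).
    outer = re.search(r"<(\w+)>(.*?)</\1>", response, re.DOTALL)
    if outer is None:
        return None
    tool_name, body = outer.group(1), outer.group(2)
    # Collect each <param>value</param> pair inside the call body.
    params = {
        name: value.strip()
        for name, value in re.findall(r"<(\w+)>(.*?)</\1>", body, re.DOTALL)
    }
    return tool_name, params

reply = "I'll read that file. <read_file> <path>src/main.js</path> </read_file>"
print(parse_tool_call(reply))  # → ('read_file', {'path': 'src/main.js'})
```

The client then looks up the tool name, runs it, and feeds the result back to the model in the next user turn, which is all the "coupling" between system prompt and tool call amounts to.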