top | item 41625306

romanleeb | 1 year ago

So the difference is that we inject software-review text chunks into the conversation as hidden context for the LLM to use when answering the query. Based on your input, we run a cosine-similarity search against the vector DB and retrieve the most relevant results, which the model then analyzes before formulating its response.
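A minimal sketch of the retrieval-and-injection step described above. All names here are hypothetical, the toy 2-d vectors stand in for real embeddings, and a production system would query a vector DB rather than scan an in-memory list:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, chunks, top_k=2):
    # chunks: list of (embedding, review_text) pairs; return the texts of
    # the top_k chunks most similar to the query embedding.
    scored = sorted(chunks,
                    key=lambda c: cosine_similarity(query_vec, c[0]),
                    reverse=True)
    return [text for _, text in scored[:top_k]]

def build_prompt(user_query, context_chunks):
    # Inject the retrieved review excerpts as hidden context ahead of
    # the user's question, so the model answers grounded in them.
    context = "\n".join(f"- {c}" for c in context_chunks)
    return f"Relevant review excerpts:\n{context}\n\nUser question: {user_query}"

# Toy usage with fake embeddings:
chunks = [
    ([1.0, 0.0], "great battery life"),
    ([0.0, 1.0], "support was slow to respond"),
    ([0.9, 0.1], "battery lasts two full days"),
]
top = retrieve([1.0, 0.0], chunks, top_k=2)
prompt = build_prompt("How is the battery?", top)
```

The embedding model, chunk format, and prompt template are all placeholders; the point is only the shape of the pipeline: embed query, rank chunks by cosine similarity, prepend the winners to the prompt.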

Maybe we need to make this even clearer on the LP, like the earlier comment suggested: show responses side by side from, e.g., plain vanilla OpenAI/Claude and Reviewradar.
