top | item 43800305


anougaret | 10 months ago

[dead]


godelski|10 months ago

Are you saying that LLMs will generate shitty code and then you fix that by using your LLM? That seems... inconsistent...

anougaret|10 months ago

we don't do the LLM part per se

we instrument your code automatically (a compiler-like approach under the hood), then we aggregate the traces

this lets us context-engineer the most exhaustive & informative prompt for LLMs to debug with

now if they still fail to debug, at least we gave them everything they should have needed
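the instrument-then-aggregate idea above can be sketched in plain Python. this is a hypothetical toy, not their actual pipeline: it uses `sys.settrace` to record each executed line with its local variables, then flattens the trace plus the source into a single debug prompt for an LLM. the `buggy_divide` function and `build_debug_prompt` format are made up for illustration.

```python
import sys

TRACE = []  # (function name, line number, snapshot of locals) per executed line

def tracer(frame, event, arg):
    """Trace hook: record every executed line and its local variables."""
    if event == "line":
        TRACE.append((frame.f_code.co_name, frame.f_lineno,
                      dict(frame.f_locals)))
    return tracer  # keep tracing inside nested calls

def buggy_divide(a, b):
    total = a + b
    return total / (a - b)  # blows up when a == b

def run_traced(fn, *args):
    """Run fn under the tracer, keeping the trace even if it raises."""
    sys.settrace(tracer)
    try:
        return fn(*args)
    except Exception as exc:
        TRACE.append(("exception", type(exc).__name__, str(exc)))
    finally:
        sys.settrace(None)

def build_debug_prompt(source, trace):
    """Aggregate source + execution trace into one exhaustive LLM prompt."""
    lines = ["Debug this code given its execution trace:", source, "Trace:"]
    lines += [repr(step) for step in trace]
    return "\n".join(lines)

run_traced(buggy_divide, 2, 2)
prompt = build_debug_prompt("def buggy_divide(a, b): ...", TRACE)
```

the resulting prompt shows the model the exact line-by-line state leading up to the failure (here, `a - b == 0`), which is the kind of exhaustive context a plain stack trace lacks.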

saagarjha|10 months ago

Why do you need to store a copy of my code to support what seems to be a time traveling debugger?

anougaret|10 months ago

valid concerns of course

- we are planning a hosted AI debugging feature that aggregates multiple traces & code snippets from different related codebases and feeds them all into one LLM prompt; that benefits a lot from having everything centralized on our servers

- for now the rewriting algorithms are quite unstable, and it helps me debug them to have the failing code files in sight

- we only store your code for 48 hours, since storing it longer is completely unnecessary

- a self-hosted version will be released for users who cannot accept this, for valid reasons