frikk | 3 months ago
Now when I ask questions about design decisions, the LLM refers to the original paper and cites it directly instead of googling or hallucinating.
With just these two things in my local repo, the LLM created test scripts to compare our results against the paper's and fixed bugs automatically. It helped me make decisions based on the paper's findings and tune parameters based on the empirical outcomes. It even discovered a critical bug in our code: our training data was randomly generated, whereas the paper's training data was a permutation over the whole solution space.
All of this work was done in one evening, and I'm still blown away by it. We even ported our code to Go, parallelized it, and saw a 10x speedup in processing. Right before heading to bed, I had the LLM spin up a novel simulator against a quirky set of tests I invented, using hypothetical sensors and data that haven't been implemented yet, and it nailed it on the first try, using smart abstractions and not touching the original engine implementation at all. This tech is getting freaky.
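The parallelization was nothing exotic. A rough sketch of the pattern in Go (not our actual engine; `parallelMap` and `square` are stand-ins): a shared job channel fans indices out to one worker goroutine per CPU, and each worker writes to its own slot in the results slice, so no locking is needed.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelMap applies f to every element of inputs using one worker
// goroutine per CPU. Workers pull indices from a shared channel and
// write to distinct slots of results, so the slice needs no mutex.
func parallelMap(inputs []int, f func(int) int) []int {
	results := make([]int, len(inputs))
	jobs := make(chan int)

	var wg sync.WaitGroup
	for w := 0; w < runtime.NumCPU(); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range jobs {
				results[i] = f(inputs[i])
			}
		}()
	}
	for i := range inputs {
		jobs <- i
	}
	close(jobs)
	wg.Wait()
	return results
}

func main() {
	// square stands in for the engine's per-item work (hypothetical).
	square := func(x int) int { return x * x }
	fmt.Println(parallelMap([]int{1, 2, 3, 4}, square)) // [1 4 9 16]
}
```

The speedup you actually get depends on how CPU-bound the per-item work is; channel overhead dominates if `f` is trivial.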