emeryberger | 1 year ago
One major limitation of the LLM is that it can't run a profiler on the code,
but we can. (This would be a fun thing to do in the future: feed the output
of perf to the LLM and say 'now optimize'.)
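A minimal sketch of that idea, using Python's built-in cProfile in place of perf. The function name and the prompt wiring are hypothetical; nothing here actually calls an LLM, it just shows how profiler output could be captured and packaged into a prompt.

```python
import cProfile
import io
import pstats

def hot_loop():
    # Deliberately slow pure-Python work to give the profiler something to see.
    total = 0
    for i in range(200_000):
        total += i * i
    return total

# Profile the function and capture the stats report as text.
prof = cProfile.Profile()
prof.enable()
hot_loop()
prof.disable()

buf = io.StringIO()
pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(5)
profile_report = buf.getvalue()

# Hypothetical prompt an LLM client could be handed; no API call is made.
prompt = (
    "Here is profiler output for my program:\n"
    + profile_report
    + "\nNow optimize the hottest function."
)
```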
This has been a feature of the Scalene Python profiler (https://github.com/plasma-umass/scalene) for some time (at this point, about 1.5 years): bring your own API key for OpenAI / Azure / Bedrock; it also works with Ollama.

Optimizing Python code to use NumPy or other similar native libraries can easily yield multiple orders of magnitude improvement in real-world settings. We tried it on several of the success stories of Scalene (from before the integration with LLMs; see https://github.com/plasma-umass/scalene/issues/58) and found that it often automatically yielded the same or better optimizations; see https://github.com/plasma-umass/scalene/issues/554. (Full disclosure: I am one of the principal designers of Scalene.)
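To illustrate the kind of rewrite the comment describes, here is a small, hypothetical example (not taken from Scalene's issue threads): the same reduction written as a pure-Python loop and as a single NumPy call that runs in native code.

```python
import math

import numpy as np

# Pure-Python version: the interpreter executes every iteration.
def sum_squares_loop(values):
    total = 0.0
    for v in values:
        total += v * v
    return total

# Vectorized version: one call into NumPy's native dot-product kernel.
def sum_squares_numpy(arr):
    return float(np.dot(arr, arr))

data = list(range(10_000))
arr = np.arange(10_000, dtype=np.float64)

# Both compute the same reduction; the NumPy version avoids the
# per-element interpreter overhead that dominates the loop.
assert math.isclose(sum_squares_loop(data), sum_squares_numpy(arr), rel_tol=1e-9)
```

Timing the two with `timeit` on a large input is the kind of measurement Scalene automates, attributing the cost to Python versus native code.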
jmathai | 1 year ago
I'm drafting a blog post that talks about how, but for now the documentation will have to do.
[1] https://withlattice.com/documentation
proofpositive | 1 year ago
[deleted]