
al_borland | 3 days ago

From everything I’ve seen, LLMs aren’t exactly known for writing extremely optimized code.

Also, what happens to the stability and security of my phone after they let an LLM loose on the entire code base for a weekend?

There are 1.5 billion iPhones out there. It’s not a place to play fast and loose with bleeding edge tech known for hallucinations and poor architecture.


teeray|3 days ago

> LLMs aren’t exactly known for writing extremely optimized code.

They are trained on everything, and as a result write code like the Internet average developer.

okanat|2 days ago

The average developer sucks. The distribution is also unbalanced: it's heavier on the low-skill side.

Great UIs are written by above-average or even exceptional developers. That kind of skill is tied to real-life reasoning and to years of unique human experience interacting with the world. You need true general intelligence for that.

martinpw|2 days ago

Is that really how it works - everything is just weighted equally? I would hope there would be at least some kind of tuning, so <well-regarded-codebase> gets more weight than <random-persons-first-coding-project>? If not, that seems like an opportunity. But no idea how these things are actually configured.

charcircuit|2 days ago

>write code like the Internet average developer

That was true before post-training (GPT-3-era, 2020-class models). Post-training makes it no longer act like the average.

rescbr|3 days ago

If you ask an LLM to code whatever, it definitely won’t produce optimized code.

If you direct it at a specific task, like finding memory and CPU optimization points based on perf metrics, then it's a completely different world.

jfim|3 days ago

You can also tell it the optimization to implement.

I asked Claude to find all the valid words on a Boggle board given a dictionary, and it wrote a simple implementation that basically searched for every single word on the board. I then told it to prune the dictionary first: build a bit mask of the letters in each word and of the letters on the board, and check whether each word is even possible before searching. That simple prompt gave something like a 600x speedup.

That does assume one has an idea of how to optimize, though, and of where the bottlenecks are.
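The pruning step described above can be sketched roughly like this (my own reconstruction of the idea, not the commenter's actual code; function names are made up). Each word and the board get a 26-bit mask of which letters they contain, and a word is kept only if its mask is a subset of the board's:

```python
def letter_mask(s: str) -> int:
    """26-bit mask with bit i set if letter chr(ord('a') + i) occurs in s."""
    m = 0
    for ch in s.lower():
        if ch.isalpha():
            m |= 1 << (ord(ch) - ord("a"))
    return m

def prune_dictionary(words: list[str], board_letters: str) -> list[str]:
    board = letter_mask(board_letters)
    # A word can only possibly appear on the board if every letter it
    # uses occurs somewhere on the board, i.e. its mask is a subset.
    return [w for w in words if letter_mask(w) & ~board == 0]

# Against a board containing t, a, r, s only "tar" and "rat" survive.
print(prune_dictionary(["tar", "rat", "zoo", "quiz"], "tars"))
```

The surviving words still need the actual board search (the mask ignores letter counts and adjacency), but for a large dictionary this cheap subset test discards the vast majority of candidates up front.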

robinwassen|3 days ago

They kind of do if you prompt them. As a POC, I had mine reimplement the Windows calculator (almost fully feature-complete) in Rust, running in 2 MB of RAM instead of the 40 MB or whatever the Win 11 version uses.

A handwritten C implementation would most likely be better, but there is so much to gain just from slaughtering the abstraction bloat that it doesn't really matter.