ognarb | 1 month ago
It seems the code was written with AI, I hope the author knows what he is doing. Last time I tried to use AI to optimize CPU-heavy C++ code (StackBlur) with SIMD, this failed :/
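For context on why SIMD conversions of StackBlur-style code are easy to get wrong: filters in that family carry a running sum from one pixel to the next, so each iteration depends on the previous one. A minimal scalar sketch (illustrative only, not ognarb's actual code) of that dependency:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch of a 1-D box blur with a sliding window, the same pattern StackBlur
// uses. The serial update of `sum` is a loop-carried dependency, which is why
// a naive "vectorize this loop" rewrite (by an AI or otherwise) tends to fail.
std::vector<uint8_t> boxBlur1D(const std::vector<uint8_t>& src, int radius) {
    const int n = static_cast<int>(src.size());
    const int window = 2 * radius + 1;
    std::vector<uint8_t> dst(n);
    int sum = 0;
    // Prime the window, clamping indices at the left edge.
    for (int i = -radius; i <= radius; ++i)
        sum += src[std::clamp(i, 0, n - 1)];
    for (int x = 0; x < n; ++x) {
        dst[x] = static_cast<uint8_t>(sum / window);
        // Slide the window right: add the entering pixel, drop the leaving one.
        // This step must happen in order, pixel by pixel.
        sum += src[std::min(x + radius + 1, n - 1)];
        sum -= src[std::max(x - radius, 0)];
    }
    return dst;
}
```

A correct SIMD version typically restructures the algorithm (e.g. processing several rows in parallel rather than vectorizing along one row), which is exactly the kind of rethinking a model can miss.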
klaussilveira | 1 month ago
Have you tried to do any OpenGL or Vulkan work with it? Very frustrating.
React and HTML, though, pretty awesome.
inetknght | 1 month ago
I've started adding this to all of my new conversations and it seems to help:
My question to the LLM then follows in the next paragraph. Forgoing most of the LLM's code-writing capabilities in favor of having it give observations and ideas seems to be a much better choice for productivity. It can still lead me down rabbit holes or wrong directions, but at least I don't have to deal with 10 pages of prose in its output or 50 pages of ineffectual code.
simonw | 1 month ago
It's possible Opus 4.5 and GPT-5.2 are significantly less terrible with C++ than previous models. Those only came out within the past 2 months.
They also have significantly more recent knowledge cut-off dates.
DrBazza | 1 month ago
I suppose part of the problem is that training a model on publicly available C++ isn't going to be great because syntactically broken code gets posted to the web all the time, along with suboptimal solutions. I recall a talk saying that functional languages are better for agents because the code published publicly is formally correct.
FpUser | 1 month ago
It's also good for generating boilerplate / repetitive code.
Overall I consider it a win.
LoganDark | 1 month ago
https://github.com/logandark/stackblur-iter
shihab | 1 month ago
AI is always good at going from 0 to 80%; it's the last 20% it struggles with. It'd be interesting to see Claude-written code make its way into a well-established library.