mycentstoo | 6 months ago
I’d love to see how this compares when either the problem space is different or the language/ecosystem is different.
It was a great read regardless!
dazzawazza | 6 months ago
LLMs are nothing more than rubber ducking in game dev. The code they generate is often useful as a starting point, or to lighten the mood because it's so bad you get a laugh. Beyond that it's broadly useless.
I put this down to the relatively small number of people who work in game dev, resulting in a relatively small number of blogs from which to "learn" game dev.
Game dev is a conservative industry with a lot of magic sauce hidden inside companies, for VERY good reasons.
Lerc | 6 months ago
Multiplying two 24-bit posits on an 8-bit AVR, for instance. No models have succeeded yet, usually because they try to put more than 8 bits into a register. Algorithmically they seem to be on the right track, but they can't hold onto the idea that registers are only 8 bits wide through the entirety of their response.
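The constraint Lerc describes (no intermediate wider than the hardware allows) is the crux of multi-word arithmetic on an 8-bit target. As a minimal sketch of just the integer-multiply portion of the problem (my own illustration, not from the thread, written in Python rather than AVR assembly): a 24x24-bit multiply decomposed into 8-bit limbs, where each partial product is an 8x8-to-16-bit multiply, mirroring AVR's MUL instruction.

```python
def split24(x):
    """Split a 24-bit value into three 8-bit limbs, least significant first."""
    return [(x >> (8 * i)) & 0xFF for i in range(3)]

def mul24(a, b):
    """Schoolbook 24x24 -> 48-bit multiply using only 8-bit limbs.

    Every partial product is 8x8 -> 16 bits (AVR MUL's shape); the
    accumulator is six 8-bit limbs, and carries are propagated limb
    by limb (what ADD/ADC chains do on real hardware).
    """
    al, bl = split24(a), split24(b)
    acc = [0] * 6  # 48-bit result as six 8-bit limbs
    for i in range(3):
        carry = 0
        for j in range(3):
            p = al[i] * bl[j]                    # 8x8 -> 16-bit partial product
            s = acc[i + j] + (p & 0xFF) + carry  # add low byte plus carry-in
            acc[i + j] = s & 0xFF
            carry = (s >> 8) + (p >> 8)          # carry-out plus high byte
        k = i + 3
        while carry:                             # ripple remaining carry upward
            s = acc[k] + carry
            acc[k] = s & 0xFF
            carry = s >> 8
            k += 1
    return sum(limb << (8 * i) for i, limb in enumerate(acc))
```

A full posit multiply would still need sign, regime, and fraction handling on top of this, but this limb decomposition is exactly the part where models reportedly slip into pretending a register can hold more than 8 bits.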
Insanity | 6 months ago
Although, in fairness, this was a year ago on GPT-3.5, IIRC.
diggan | 6 months ago
GPT-3.5 was impressive at the time, but today's SOTA models (like GPT-5 Pro) are almost a night-and-day difference, both in producing better code for a wider range of languages (I mostly do Rust and Clojure; they handle those fine now, but 3.5 was awful) and, more importantly, in following your instructions in user/system prompts. It's easier to get higher-quality code from them now, as long as you can put into words what "higher quality code" means for you.