moconnor | 4 months ago
I've been 5x more productive using codex-cli for weeks. I have no trouble getting it to convert a combination of unusually-structured source code and internal SVGs of execution traces into a custom internal JSON graph format - very clearly out-of-domain tasks compared to its training data. Or mining a large mixed Python/C++ codebase, including low-level kernels for our RISC-V accelerators, for ever-more-accurate docs - to the point of documenting, as known issues, bugs the team ran into that same day.
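For concreteness, the SVG-trace-to-JSON-graph task might look something like the sketch below. The SVG conventions here (rects as trace events with `data-label` attributes, lines as edges keyed by `data-from`/`data-to`) and the function name `svg_trace_to_graph` are invented for illustration; the commenter's internal formats are not public.

```python
# Hypothetical sketch: convert an SVG execution trace into a node/edge
# JSON graph. All element/attribute conventions here are assumptions.
import json
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def svg_trace_to_graph(svg_text: str) -> dict:
    root = ET.fromstring(svg_text)
    # Treat each <rect> as a trace event node.
    nodes = [
        {"id": r.get("id"), "label": r.get("data-label", "")}
        for r in root.iter(f"{SVG_NS}rect")
    ]
    # Treat each <line> as a directed edge between events.
    edges = [
        {"from": l.get("data-from"), "to": l.get("data-to")}
        for l in root.iter(f"{SVG_NS}line")
    ]
    return {"nodes": nodes, "edges": edges}

trace = """<svg xmlns="http://www.w3.org/2000/svg">
  <rect id="n1" data-label="load"/>
  <rect id="n2" data-label="store"/>
  <line data-from="n1" data-to="n2"/>
</svg>"""

print(json.dumps(svg_trace_to_graph(trace)))
```

The interesting part of the real task is presumably the mapping rules, not the parsing - which is exactly the kind of "spec in, transformation out" work the commenter is describing.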
We are seeing wildly different outcomes from the same tools and I'm really curious about why.
sorcercode | 4 months ago
I'd wager a majority of software engineers today are using well-established techniques... the same ones most models are trained on.
Most current creation (IMHO) comes from wielding existing techniques in new combinations - which I'd wager is very much possible with LLMs.