romland | 2 years ago
Down the line you will be able to (cheaply) have LLMs know about your entire code-base and at that point, it will definitely become a pretty good option.
On prompt length, yeah, some of those prompts took a long time to craft. The longer I spend on a prompt, the more variations of the same code I have seen -- I probably get impatient and biased and home in on the exact solution I want to see instead of explaining myself better. When it's gone that far, it's probably not worth it. Very often I should probably also start over on the prompt, as it can probably be described differently. That said, in a real-world project where I was fine with going in and massaging the code myself, quite a lot of time could be saved.
If you don't know how to code, I think it will be very hard. You would at the very least need a lot more patience. On the flip side, you can ask for explanations of the returned code, and I have to say those are often pretty good -- albeit very verbose in ChatGPT's case. I find it hard to throw a real conclusion out there, but I can say that domain knowledge will always help you. A lot.
I think if you know JavaScript, you could easily make a game even if you had never thought about making a game before. The nice thing about that is that you will probably not do any premature optimization, at least :-)
All in all, some prompts were nailed down on the first try; the simple particle system was one such example. Other prompts -- for instance the map generation with Perlin noise -- might have taken 50 attempts.
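For context, the kind of bare-bones particle system that a single prompt can nail might look something like this (a hypothetical sketch, not the actual Llemmings code):

```javascript
// Minimal particle system sketch: each particle carries a position,
// a velocity and a remaining lifetime; expired particles are dropped
// on every update.
function createParticle(x, y) {
  return {
    x, y,
    vx: (Math.random() - 0.5) * 2,  // random horizontal drift
    vy: -Math.random() * 2,         // initial upward speed
    life: 60                        // frames until the particle expires
  };
}

function updateParticles(particles, gravity = 0.05) {
  for (const p of particles) {
    p.x += p.vx;
    p.y += p.vy;
    p.vy += gravity;  // simple constant gravity
    p.life -= 1;
  }
  // keep only particles that are still alive
  return particles.filter(p => p.life > 0);
}
```

Rendering is then just drawing a pixel or small rect per particle each frame.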
A lot of small decisions are helpful, such as deciding against any external dependencies. It's pretty dodgy to ask for code around something (e.g. some noise library) that you then need to fit into your project. I decided pretty early that there should be no external dependencies at all and that all graphics would be procedurally generated. That has helped, since I don't need to understand any libraries I have never used before.
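To illustrate the no-dependency constraint: instead of pulling in a noise library, a tiny self-contained value-noise function is often enough for terrain-style map generation. This is a sketch of the idea, not the code the LLM produced:

```javascript
// Seeded integer hash: the same coordinate and seed always give the
// same pseudo-random value in [0, 1] -- no external library needed.
function hashNoise(i, seed = 1337) {
  let h = (Math.imul(i, 374761393) + Math.imul(seed, 668265263)) | 0;
  h = Math.imul(h ^ (h >>> 13), 1274126177);
  h = h ^ (h >>> 16);
  return (h >>> 0) / 4294967295;
}

// 1D value noise: hash the two surrounding lattice points and blend
// them with a smoothstep curve for a continuous result.
function valueNoise(x, seed = 1337) {
  const i = Math.floor(x);
  const f = x - i;
  const t = f * f * (3 - 2 * f);  // smoothstep interpolation
  const a = hashNoise(i, seed);
  const b = hashNoise(i + 1, seed);
  return a + (b - a) * t;
}
```

Sampling `valueNoise(x * 0.05)` per column already gives a usable heightmap; proper Perlin noise adds gradients per lattice point but follows the same lattice-and-interpolate structure.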
Another related note: there are upsides and downsides to a high-ish temperature, since you get varying results. I think I should probably change my behaviour around that and tweak the temperature depending on how exact I feel my prompt is.
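Concretely, that tweaking could be as simple as choosing the temperature per request; a sketch of an OpenAI chat-completions request body (the threshold values here are illustrative, not what I actually used):

```javascript
// Sketch: low temperature when the prompt pins down an exact solution,
// higher temperature when exploring variations of an idea.
function buildRequest(prompt, promptIsExact) {
  return {
    model: "gpt-3.5-turbo",             // any chat model works
    temperature: promptIsExact ? 0.2    // near-deterministic output
                               : 0.9,   // more varied attempts
    messages: [{ role: "user", content: prompt }]
  };
}
```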
I often find myself wondering where the cap of today's LLMs is, even if we go in the direction of multi-model setups with a base model that does the reasoning -- and I have to say I keep getting surprised. I think there is a good possibility that this will be the way some kinds of development are done. But, well, we'd need good local models for that if we work on projects that might be of a sensitive nature.
Related to the number of prompt attempts: I think the game has cost me around $6 in OpenAI fees so far.
One particularly irritating (time-consuming) prompt was getting animated legs and feet: https://github.com/romland/llemmings/commit/e9852a353f89c217...