payneio | 4 months ago
- These systems can re-abstract and decompose things just fine. If you want to make a system resilient or scalable, it will follow whatever patterns you give it. These patterns are well known and are definitely in the training data for these models.
- I didn't jump to the conclusion that doing small things will make anything possible. I listed a series of discoveries/innovations/patterns that we've worked on over the past two years to increase the scale of the programs that can be generated or worked on with these systems. The point is that I'm now seeing them work on systems at the level of what I would generally write at a startup, for an open source project, or in enterprise software. I'm sure we'll get some metrics soon on how functional these are for something like Windows, which I believe is literally the world's single largest code base.
- "Creativity" and novelty-seeking functions can be added to the system. I gave a recent example in my post about how I asked it to write three different approaches to integrating two code bases. In the old world, this would look like handing a project off to three different developers and seeing what they came up with. You can brush this all off with "they're just knowledge bases," but then you have to explain how a knowledge base can write, on command, software that would take a human engineer a month. We have developed the principle "hard to do, easy to review," which helps with this too: give the LLM system a task that would be tedious for a human and then make the results easy for a human to review. This allows forward progress to be made on a task at a much-accelerated pace. Finally, my post was about programming... how much creativity do you generally see in most programming teams, where they take a set of requirements from the PM and the engineering manager and turn them into code on a framework that's been handed to them? Or take the analogy back in time: how much creativity is still exhibited in assembly compilers? Once creativity has been injected into the system, it's there. Most of the work is just in implementing the decisions.
- You hit the point that I was trying to make, and what sets something like Amplifier apart from something like Claude Code: you have to do MUCH less prompting. You can just give it an app and tell it to improve it, fix bugs, and add new features based on usage metrics. We've been doing these things for months. Your assertion that "we would have already replaced ALL programmers" is the logical next conclusion... which is why I wrote the post. Take it from someone who has been developing these systems for close to three years now: it's coming. Amplifier will not be the thing that does this, but it shows techniques and patterns that have solved enough of the "risky" parts to show that the products will be coming.
Madmallard | 4 months ago
No? It absolutely does not do this correctly. It does what "looks" right. Not what IS right. And that ends up being wrong literally the majority of the time for anything even mildly complex.
"I'm sure we'll get some metrics soon on how functional these are for something like Windows, which I believe is literally the world's single largest code base."
Now that's just not true at all. Windows doesn't hold a candle to Google's codebase.
"and then make the results easy for a human to review."
This is in no way doable for anything non-trivial that an LLM produces. Software is genuinely hard and time-consuming if you want it to actually not be brittle, to address the things it needs to, and to make trade-offs that are NOT detrimental to the future of your product.
payneio | 4 months ago
proc0 | 4 months ago
In theory, any prompt should result in good output, just as if I gave the same request to an engineer. In practice, I find there are real limitations that require a lot of iteration and "handholding", unless I want something that has already been solved and whose solution is widely available. One simple example: I prompted for a physics simulation in C++ with a physics library, and it got a good portion of it correct, but the code didn't compile. When it compiled, it didn't work, and when it worked it wasn't even remotely close to being "good" in the sense of how a human engineer would judge their output if I were to ask for the same thing, not to mention making it production-ready or multiplatform. I just have not experienced any LLM capable of taking ANY prompt... but because they do complete some prompts, and those prompts do have some value, it seems as if the possibilities are endless.
This is a lot easier to see with generative image models, i.e. Flux, Sora, etc. We can see amazing examples, but does that mean anything I can imagine, I can prompt for and it will be capable of generating? In my experience, not even close. I can imagine some wild things, and I can express them in whatever detail is necessary. I have experimented with generative models, and it turns out they have real limitations as to what they can "imagine". Maybe they can generate a car driving along a mountain road, rendered perfectly, but when you change the prompt to something less generic, i.e. adding details like the car model or the time of day, it starts to break down. When you prompt for something completely wild, i.e. make the car transform into a robot and do a backflip, it fails spectacularly. There is no "logic" to what it can or cannot generate, as one might expect. A talented artist who can create a 3D scene with a car can also create a scene with the car transforming into a robot (granted, it might take more time and require experimentation).
The main point is that there is a creative capability that LLMs are lacking, and this will translate to engineering in some form, but it's not something that can be easily measured right away. Orgs will adapt and are already extracting value from LLMs, but I'm wondering what the real long-term cost is going to be.
payneio | 4 months ago
These things are not "creative"... they are just piecing together decent infrastructure and giving the "actor" the ability to use it.
Then break planning, design, implementation, testing, etc. apart and do the same for each phase: reduce "creativity" to process, and the systems can follow the process quite nicely with minimal intervention.
Then, any time you do need to intervene, use the system to help you automate the next thing so you don't have to intervene in the same way again next time.
This is what we've been doing for months and it's working well.