top | item 43279698

OriginalMrPink | 1 year ago

It would have been great if they had disclosed which products. I've been building tons of projects with AI lately, and while it's a massive productivity boost, the code itself doesn't scale. The acceleration when you start from zero is massive, but as the code base grows, AI hits a wall at some point. You'd better understand what you've built from that point on.

bluefirebrand | 1 year ago

> while this is a massive productivity boost, the code itself doesn't scale

Then it's not a productivity boost imo

"Produces more tech debt faster" is the worst possible outcome of a 'productivity' tool

scarface_74 | 1 year ago

I think you misunderstand the purpose. Who cares if it adds technical debt later if your goal is just to have something to show off to get investment? The goal of every startup is to get funding and an exit. The concern is not long-term maintainability.

OriginalMrPink | 1 year ago

It actually is. The trick is to stay in charge of architecture and scope. But then it's not a 100% AI build as stated in the article.

threecheese | 1 year ago

Nobody cares about tech debt when large orgs are happy to rewrite every workload each decade. Every reorg finds the debt, blames it on “the last guy” (now in management), and replaces some components with new tech. Rinse and repeat.

Source code just isn’t an asset anymore; that trend has been slowly growing since Serverless. Genai just accelerated it, and “bucket o’ lambdas” is a valid architecture now.

xhkkffbf | 1 year ago

I watched this happen decades ago. Smart coders understood memory allocation. Okay coders just assumed the garbage collector would handle it. One friend of mine wrote code that was 1000 times faster than that of the people in the next cubicle. Why? Because he was careful with memory usage and didn't trigger virtual-memory thrashing.

AI is just another form of automation.
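The allocation discipline described above can be sketched in a toy example (hypothetical function names; this is an illustration of the general idea, not the original code). The careless version allocates and copies on every iteration; the careful one reuses one growing buffer:

```python
def build_copying(n):
    # Careless version: `out + [i]` allocates a brand-new list and copies
    # every existing element on each iteration, so total work is O(n^2).
    out = []
    for i in range(n):
        out = out + [i]
    return out

def build_in_place(n):
    # Allocation-aware version: append() grows one buffer geometrically,
    # so each insertion is amortized O(1) and total work is O(n).
    out = []
    for i in range(n):
        out.append(i)
    return out

# Same result, wildly different allocation behavior as n grows.
assert build_copying(2_000) == build_in_place(2_000)
```

Both produce identical output; the difference only shows up as allocation pressure and runtime once n gets large, which is exactly the kind of gap a profiler (or a careful coder) catches and a quick glance doesn't.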

Karellen | 1 year ago

Yeah, these new-fangled "compilers" will never catch on.

Programmers who rely on them will stop learning machine code, and won't know how their programs really work. That's if the compiler actually compiles your code at all without throwing an internal error, making you rearrange your (correct) code arbitrarily until it accepts it. But at least with an internal compiler error you know the compiler is broken, rather than having it silently miscompile your code to do the wrong thing.

But even then, even if the compiler accepts your code without barfing and generates correct machine code from it, it still won't generate machine code as efficient as what you could write by hand yourself.

Nope, these compilers will never catch on, and never get reliable enough to be useful for serious software engineering.

-- Some programmer circa 1975, probably, who lives in my head mumbling this to themselves whenever I'm sure generative-AI-based "programming" is a crock of shit. Although, to be fair, the 2005-era developer drunkenly ranting that UML diagrams will make programming 100x more productive any day now is a handy counterpoint.

feoren | 1 year ago

> One friend of mine wrote code that was 1000 times faster than the people in the next cubicle over

And did that 1000x speedup make a difference to users? Are we talking about an on-click event that now took 10μs instead of 10ms? Was this a 1000x speedup in a hot critical-path bottleneck, or was it an already quick in-memory post-processing operation that fired after waiting 30 seconds on a sluggish database query?

Sorry to doubt so much, but the vast majority of the time someone boasts about a speedup like this, it turns out to have been done for bragging rights rather than for the benefit of the project. A 1000x speedup is only impressive if you can show that the time you improved on was actually a problem.
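The "measure whether it mattered" point can be sketched like this (slow_query and fast_path are hypothetical stand-ins for the sluggish database call and the micro-optimized routine, not anything from the thread):

```python
import timeit

def slow_query():
    # Stand-in for the sluggish database query the system waits on.
    return list(range(1_000_000))

def fast_path(rows):
    # Stand-in for the post-processing routine someone made "1000x faster".
    return sum(rows)

rows = slow_query()
query_t = timeit.timeit(slow_query, number=1)
path_t = timeit.timeit(lambda: fast_path(rows), number=1)

# If path_t is already a tiny fraction of query_t, even an infinite
# speedup of fast_path barely moves the end-to-end latency.
print(f"query: {query_t:.4f}s, post-processing: {path_t:.4f}s")
```

Timing both ends of the pipeline first is what separates "this optimization helped users" from "this optimization was for bragging rights."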

jbreckmckye | 1 year ago

There must be a name for the self-balancing phenomenon where any improvement in productivity is soon swallowed up by an equal increase in waste.