item 33883054

I don't think people understand the monumental changes coming to software

9 points | kbuchanan | 3 years ago | twitter.com

30 comments


coldtea|3 years ago

>The last major productivity boost in software was OSS. Each of those steps was 10-100x boost but then it stopped...

I don't think this author understands what a productivity boost is. OSS is a development model; it didn't amount to any "productivity boost" beyond what the general technology level (including mainly proprietary technology) offered.

>Programmers will command armies of software agents to build increasingly complex software in insane record times. Non-programmers will also be able to use these agents to get software tasks done. Everyone in the world will be at least John Carmack-level software capable.

/rolls eyes

>At Replit, we're building an AI pair programmer that uses the IDE like a human does and has full access to all the tooling, open-source software, and the internet.

Ah, OK, this is building up commercial hype. Makes sense now.

eimrine|3 years ago

OSS allows really hard pieces of software to be built; most FOSS could not have been built if it weren't FOSS. What is the biggest and/or hardest proprietary project? Windows 7+? A modern chipset, including the CPU? The Apple jail? And what would happen to that technology if all the source code leaked somehow? Don't you believe it would become 10x-100x better by some measures?

bennysonething|3 years ago

I thought he just meant access to thousands of OSS packages that are easy to leverage: npm, NuGet, etc.

jleyank|3 years ago

Per Kernighan, debugging is twice as hard as writing the code in the first place. If the AI jockeys don't understand what they're being given, man, it's going to be humorous watching them put out fires. And how can statistically fit models exceed their training set without going random? How is AI going to string together equations to do physics or engineering? To the model they're just bloody squiggly symbols and letters.

And the marketplace still isn't interested in fixing bugs over "oooh, shiny", so my concerns might never be addressed.

im3w1l|3 years ago

Imagine if the AI can provide both a program and a machine-verifiable proof of correctness though. Then all you have to manually verify is that the proof proves the right thing.
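To make the idea concrete, here is a toy sketch in Lean 4 (names and the example property are my own illustration, not anything that exists today): the "program" and a machine-checked proof ship together, and the human only has to read the theorem statement, not the proof.

```lean
-- Illustrative only: a tiny program plus a machine-verified property.
-- The checker guarantees the proof is valid; the human just confirms
-- that `double_is_even` is the property they actually wanted.
def double (n : Nat) : Nat := 2 * n

theorem double_is_even (n : Nat) : ∃ k, double n = 2 * k :=
  ⟨n, rfl⟩
```

For real software the property would be something like "never returns an out-of-bounds index", but the division of labor is the same: the machine checks the proof, the human checks the specification.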

AnimalMuppet|3 years ago

"Oooh, shiny" that doesn't work has a very short run in the marketplace.

cjk|3 years ago

> Everyone in the world will be at least John Carmack-level software capable.

lol

I'm sure that for simple tasks, AI-based pair programming will offer some level of acceleration, but until it can understand the semantics of the code it's generating, and how that code fits into the broader _system_, it can't be trusted. I do not look forward to a world where I have to spend my time debugging AI-generated code.

zorr|3 years ago

I'm not saying it will reach Carmack levels of proficiency any time soon, but have you tried pasting a non-trivial method into ChatGPT and asking the AI what the method does and how the code can be improved? I was very impressed by the way it was able to explain the code and suggest improvements.

zorr|3 years ago

I was skeptical about AIs writing code, but after playing with ChatGPT for a bit I have to adjust my views.

I think tools like this can be great for generating skeletons and draft implementations for simple CRUD-like things. For example I asked it "write an Android layout XML for a login screen with username, password and a login spinner using components from the material library" and it did exactly that. I followed up with "write the corresponding activity in Kotlin" and it did. It generated a correct implementation, including a few paragraphs explaining how it worked and that it mocked the login method with an artificial delay for demo purposes.

Another thread that convinced me was when I gave it a Kotlin interface for a CRUD TaskRepository and asked it to write the implementation. It wrote a correct implementation backed by a Map. With some follow-up prompts it was able to write save/load methods to store state in a JSON file, and to emit events to a Flow whenever a task was created, updated, or deleted.
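For anyone who wants a feel for the task, here is roughly the kind of interface and Map-backed implementation being described (a hypothetical reconstruction; the `Task` shape and method names are my assumptions, not the actual prompt or output):

```kotlin
// Assumed task model and repository interface for the experiment above.
data class Task(val id: Int, val title: String, val done: Boolean = false)

interface TaskRepository {
    fun save(task: Task)
    fun find(id: Int): Task?
    fun delete(id: Int)
    fun all(): List<Task>
}

// The kind of straightforward Map-backed implementation ChatGPT
// reportedly produced: each operation is a thin wrapper over the map.
class InMemoryTaskRepository : TaskRepository {
    private val tasks = mutableMapOf<Int, Task>()

    override fun save(task: Task) { tasks[task.id] = task }
    override fun find(id: Int): Task? = tasks[id]
    override fun delete(id: Int) { tasks.remove(id) }
    override fun all(): List<Task> = tasks.values.toList()
}
```

It's boilerplate a competent developer writes without thinking, which is exactly why it is a plausible target for generation: the shape is fully determined by the interface.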

Another one: I asked it how I could debug why a gstreamer pipeline had a refcount of 2 after the pipeline stopped running and it pointed me to a number of debug tools and environment variables I could set to trace refs in the pipeline.

avmich|3 years ago

I think it's common for professionals in a field to be skeptical about massive changes coming to it, even when those changes later prove to be significant.

However, if there are some courses, videos, detailed documentation about the new way of doing software development, I'd be interested to look at that.

hulitu|3 years ago

Yes, AI will improve things. They said that 30 years ago too; even MIT had an AI lab.

Meanwhile, testing has not really improved in the last 30 years.

eimrine|3 years ago

> AI is the next 100x productivity boost.

I do not agree with this statement. There has been no real progress in AI since the crypto winter. There are just too many people with always-online smartphones, so governments considered this field too big to be left outside their control. And that leads to a 100x increase in no-brain programming jobs where all that is asked of that kind of programmer is to fight against users.

The author is right that big changes are coming, just not the changes he is writing about.

CuriouslyC|3 years ago

AI will definitely, eventually, allow one person to perform the work that would take 100 people today, and more. It's easy to see the path there by extrapolating from what's happening in image/video diffusion models right now, and language models have shown that they can generalize to basic problem solving, to the extent that a problem resembles something that has been solved many times before. Simple tools built on today's models could easily double the productivity of an artist or writer, so we're somewhere between one and two orders of magnitude away, which seems very achievable given the progress of the last 50 years.