stupidcar | 4 years ago
It appears to me that when it comes to language models, intelligence = experience * context, where experience is what's encoded in the model, and context is the prompt. And the biggest limitation on Copilot currently is context. It behaves as an "advanced autocomplete" because all it has to go on is what regular autocomplete sees, e.g. the last few characters and lines of code.
So, you can write a function called createUserInDB() and it will attempt to complete it for you. But how does it know what DB technology you're using? Or what your user record looks like? It doesn't, and so you typically end up with a "generic"-looking function using the most common DB tech and naming conventions for your language of choice.
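To illustrate the kind of "generic" completion being described, here's a hypothetical sketch of what a context-free model might emit for createUserInDB(): everything in it (sqlite3, a `users` table, `name`/`email` columns) is a stand-in for the model's most-common-case guess, not anything Copilot actually produces.

```python
import sqlite3

def create_user_in_db(name, email):
    # Generic guess at the most common DB tech and schema, because the
    # model has no idea which library or tables this project actually uses.
    conn = sqlite3.connect("app.db")
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO users (name, email) VALUES (?, ?)",
        (name, email),
    )
    conn.commit()
    user_id = cur.lastrowid
    conn.close()
    return user_id
```

If your project actually uses Postgres via an ORM, or a different column layout, this plausible-looking function is simply wrong for your codebase, which is the point of the comment.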
But now imagine a future version of Copilot that is automatically provided with a lot more context. It also gets fed a list of your dependencies, from which it can derive which DB library you're using. It gets any locatable SQL schema file, so it can determine the columns in the user table. It gets the text of the Jira ticket, so it can determine the requirements.
As a programmer a great deal of time is spent checking these different sources and synthesising them in your head into an approach, which you then code. But they are all just text, of one form or another, and language models can work with them just as easily, and much faster, than you can.
And once the ML coding train gets rolling, it'll only get faster. Sooner or later GitHub will have a "Copilot bot" that can automatically take a stab at fixing issues, which you then approve, reject, or fix. And as thousands of these issues pile up, the training set will get bigger, and the model will get better. Sooner or later it'll be possible to create a repo, start filing issues, and rely on the bot to implement everything.
karmasimida | 4 years ago
I don't find that reading largely correct but still often wrong code is a good experience for me, or that it adds any efficiency.
These models do a very good job of intelligently synthesizing boilerplate for you, but be it Copilot or this AlphaCode, they still don't understand coding fundamentals, in the causal sense of how one instruction impacts the space of program states.
Still, these are exciting technologies, but again, there is a big if as to whether such a machine learning model will happen at all.
solarmist | 4 years ago
I see it continuing to evolve and becoming a far superior auto-complete with full context, but, short of actual general AI, there will always be a step that takes a high-level description of a problem and turns it into something a computer can implement.
So while it will make the remaining programmers MUCH more productive, thereby reducing the needed number of programmers, I can't see it driving that number to zero.
mabub24 | 4 years ago
https://www.newyorker.com/magazine/2022/01/24/the-rise-of-ai...
Maybe. It might never get to that level though.
TSiege | 4 years ago
Veedrac | 4 years ago
Emotional skepticism carries a lot more weight in worlds where AI isn't constantly doing things that are meant to be infeasible, like placing in the 54th percentile in a competitive programming competition.
People need to remember that AlexNet is 10 years old. At no point in this span have neural networks stopped solving things they weren't meant to be able to solve.
Hgsb | 4 years ago