“It could allow non-coders to simply describe an idea for a program and let the system build it”
This is not a new thought. The problem with it, however, is that such a description would have to be very precise, or else it would leave room for different interpretations, which in turn could lead to very different programs being generated. In particular, as programmers know, it's often the edge cases that pose the main difficulty, not the typical case.
Describing what your program should do in sufficient detail will probably end up being not very far from the actual program itself.
As a technical manager, I can say this actually isn't far from what happens with teams of engineers. The less precise I am in describing the requirements of a system, the less correct the resulting solution. This isn't to say that my team is incompetent or mindless -- far from it, I find them to be some of the most talented engineers I've had the pleasure of working with.
Perhaps the solution to this particular problem is a clear specification of the requirements -- very similar to how corporations write requirements documents describing what their product does and how it behaves?
Not only is it not a new thought, but it's been the holy grail of large software companies for at least 30 years. You'd be surprised at how much money has been spent researching this problem. And, for those just coming into software engineering, this isn't a good sign. I see a lot of software engineers who don't fear the risk of automation. But because software development is such a huge cost center and so much is controlled by software developers, complete automation is one of the most sought-after technologies.
I don't mean to spew hyperbole, because complete automation is likely very far off. But, I just wanted to point out the market forces behind this type of technology are gigantic.
In my experience, people have trouble describing something in sufficient detail that another human can build what they want. I think we're a ways off yet from AI being able to do so.
Doesn't TDD basically provide a spec? Write a test, then let the AI generate a program that passes the test. If the program is still buggy, you didn't write sufficient tests.
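The TDD-as-spec idea above can be sketched in a few lines. This is a minimal illustration, and the function name and behavior are hypothetical, not taken from any real system: the tests play the role of the spec that a code-generating system would have to satisfy.

```python
def apply_discount(price, percent):
    """Candidate implementation (hand-written here; in the TDD-as-spec
    scenario, a generator would have to produce something like this)."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid input")
    return round(price * (100 - percent) / 100, 2)

def test_apply_discount():
    # Typical case
    assert apply_discount(100.0, 25) == 75.0
    # Edge cases -- exactly the part the thread worries about
    assert apply_discount(100.0, 0) == 100.0
    assert apply_discount(100.0, 100) == 0.0
    try:
        apply_discount(-1.0, 10)
        assert False, "negative price should be rejected"
    except ValueError:
        pass

test_apply_discount()
```

Note that the tests underspecify the program: a lookup table hard-coding just these cases would also pass, which is the "if it's still buggy, you didn't write sufficient tests" problem in miniature.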
If you can describe something then you can just code it. I can see AI being used in an advanced IDE as an assistant. Just write your specs and let the AI build the app.
One of the things about human software engineers is that they can make assumptions based on what the system is for. For example you may say that "this system has to accept payments from counterparties" and they will intuitively know that they are dealing with people's money, and to be careful of situations like double billing and the like.
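The "careful of double billing" instinct usually shows up in code as an idempotency key: retried charge requests carrying the same key return the original result instead of charging twice. A minimal sketch, with all class and field names hypothetical:

```python
class PaymentProcessor:
    """Toy illustration of the double-billing guard an engineer would
    add unprompted; a real system would persist this state durably."""

    def __init__(self):
        self._processed = {}  # idempotency_key -> charge record

    def charge(self, idempotency_key, account, amount_cents):
        if idempotency_key in self._processed:
            # Replay: return the original record, make no new charge.
            return self._processed[idempotency_key]
        record = {"account": account, "amount_cents": amount_cents,
                  "status": "charged"}
        self._processed[idempotency_key] = record
        return record

p = PaymentProcessor()
first = p.charge("key-1", "acct-42", 999)
retry = p.charge("key-1", "acct-42", 999)  # e.g. a network retry
assert retry is first  # same record: the account was charged only once
```

The point of the comment stands: nothing in "accept payments from counterparties" says any of this, yet a human engineer infers it.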
In automatic programming discussions, one commenter said long ago that we already have people that deal with English this precise: lawyers. Such a model would basically turn programmers into people writing legal documents that the AI then turned into programs.
If I can create using English, there are a lot of possibilities for me. I can work on expanding my use of language and build on skills I already have. I figure something like this will spill over into other areas. For example, I could theoretically make an animation by describing the scene with words, with the result's fidelity depending on how detailed my description gets.
It would still be a vast amount of work, but suddenly it is doable just by honing skills I already have. This is where the hope lies with folks like me.
Well, I think they are referring to the idea that the computer would solve its specific problems by itself, just as we do as programmers.
I guess you have worked for someone else as a programmer? Didn't they tell you what they wanted? How did you achieve it? They probably didn't have to spell it out in very specific terms unless a problem occurred.
A nuance that is underappreciated by a lot of people and the media is the degree to which the "AI generates X" models (faces, drawings, photo in-filling, sharpening, etc.) simply copy, paste, and interpolate from the training set.
Unlike general supervised learning problems, for many of the deep learning "generative" models that get posted to HN regularly, there is no objective "test set" to measure generalization, so it's extremely easy to claim the model has learned something. When we see a cool demo / audio sample / pictures, how do we know the model hasn't just simply interpolated a bunch of training examples? In many cases this is quite clear, like when you start seeing cats everywhere in the generated imagery. It's very hard to ferret out the BS with these models.
Actually, there is a formal method to evaluate generative models. All you need is a validation set. Then you find the odds that the model would have generated your validation set; that probability is the one you want to maximize [0]. In order to calculate or estimate this probability, you need to make some amendments to your model architecture. PixelCNNs, for instance, know the probability exactly, variational auto-encoders have a lower bound on it, etc. It is mainly the generative adversarial networks (GANs) which cannot, and this is the reason research in the latter is limited, despite the remarkable visualizations it produces.
But judging models by their visualizations is something which is still done too often, and it annoys quite a few researchers in the area that those papers still pass review at machine learning conferences. They should be sent to computer graphics conferences, because that is what they actually do /vent.
[0] https://arxiv.org/abs/1511.01844
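The evaluation described above can be illustrated with a toy stand-in for a real generative model: fit a 1-D Gaussian on training data by maximum likelihood, then score it by the average log-likelihood it assigns to a held-out validation set. This is only a sketch of the principle, not of PixelCNN or VAE likelihood computation:

```python
import math

def fit_gaussian(xs):
    # Toy "generative model": a single 1-D Gaussian fit by maximum likelihood.
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

def avg_log_likelihood(model, xs):
    # The quantity to maximize: mean log-probability the model assigns
    # to held-out data it never saw during fitting.
    mu, var = model
    return sum(-0.5 * math.log(2 * math.pi * var)
               - (x - mu) ** 2 / (2 * var) for x in xs) / len(xs)

train = [0.9, 1.1, 1.0, 0.8, 1.2]
valid = [1.05, 0.95]
model = fit_gaussian(train)
score = avg_log_likelihood(model, valid)  # higher = better fit to held-out data
```

A model that merely memorized the training points would do worse on this score than one that captured the underlying distribution, which is exactly why the metric is hard to game with pretty pictures.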
One way to check for overfitting is to compare samples to their nearest neighbors in the training set. Of course that doesn't cover interpolations between two points in the training set, but I would argue plausible interpolations are already something "learned".
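The nearest-neighbor check just described is straightforward to sketch; the data here is made up for illustration:

```python
def nearest_training_distance(sample, training_set):
    # Squared Euclidean distance from one generated sample to its closest
    # training example. A suspiciously small value suggests the model
    # memorized rather than generalized.
    return min(sum((a - b) ** 2 for a, b in zip(sample, x))
               for x in training_set)

training_set = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]
copied = [1.0, 1.0]          # verbatim training point -> distance 0
interpolated = [0.5, 0.5]    # between two training points -> nonzero distance
assert nearest_training_distance(copied, training_set) == 0.0
assert nearest_training_distance(interpolated, training_set) == 0.5
```

As the comment notes, this catches verbatim copies but not interpolations, and pixel-space distance is a crude proxy for real models; practitioners often do the search in a feature space instead.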
A different problem I see with faces in particular though is that our visual system is actually wired to do some really heavy denoising/pattern matching on faces (for example people seeing the face of Jesus on slices of toast), so the generation of faces doesn't actually need to be that good to produce results that seem appealing to humans.
Interesting how you consider interpolating from training data not to be learning. Isn't that exactly what learning is? If you get unintended repeating patterns, then there isn't enough diverse training data.
The approach taken seems quite unique, especially when compared to GP- and GA-evolved custom assemblies. Those approaches work too, but mostly fail (or take too long) at solving even slightly complex problems.
No, AI didn't write code. A program explicitly built and trained to write code, wrote code.
A bit too pedantic, perhaps, but there isn't some singular program out there which first learned to play chess, then see and catalog pictures, create creepy art, play Go, drive cars, and now write code. Which is what "AI learns to X" seems to imply.
Of course, that's not as interesting of a headline.
It's even less interesting because programs to generate code have already been around for a while. In fact, at PLDI, program synthesis is considered a specific track of research papers.
Hell, you're probably using computer-written code right now. FFTW and ATLAS are autogenerated and autotuned kernels for solving FFT instances of known size and linear algebra routines, respectively, and they're among the most common implementations of these APIs.
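The generate-a-specialized-kernel idea can be shown in miniature. This is a loose illustration of the spirit of FFTW/ATLAS-style specialization, not of how those projects actually work: emit source for a fully unrolled dot product of a known, fixed size and compile it at runtime.

```python
def gen_dot(n):
    # Emit Python source for an unrolled dot product specialized to size n,
    # then compile it. Real autotuners (FFTW's genfft, ATLAS) generate C
    # and also search over candidate kernels for the fastest one.
    body = " + ".join(f"a[{i}] * b[{i}]" for i in range(n))
    src = f"def dot_{n}(a, b):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)  # compile the generated kernel
    return namespace[f"dot_{n}"]

dot_3 = gen_dot(3)
assert dot_3([1, 2, 3], [4, 5, 6]) == 32  # 1*4 + 2*5 + 3*6
```

The unrolled body has no loop overhead and a fixed size the compiler can reason about, which is where specialized kernels get their speed.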
In the meanwhile, Mark Cuban says: "I personally think there's going to be a greater demand in 10 years for liberal arts majors than for programming majors and maybe even engineering" (From this article: http://www.inc.com/betsy-mikel/mark-cuban-says-this-will-soo...)
I know he knows the business world and he obviously knows how to manage a business but what does he know about tech? This prediction is beyond nonsensical.
I thought this would take longer. "Developer" will stick around as a job for people who need new and unique things, or for apps that need speed, but if your job is making forms to take in data, putting it in a database, performing some sort of analysis on it, and then printing a report to the screen, then you should really start learning something else, because those jobs will not exist in 10 years.
At least when code monkeys copy-paste from Stack Overflow, there's a trace of common sense in the loop. Yes, it's possible to copy-paste your way to working software without having a clue. No, it's not a very good idea. How about if we shut these clowns down and use the money for research that has a chance of improving anything?
Well, it's obvious that there will be improvements in the syntax for specifying a computer program, as there have been over the last decades.
e.g. Assembler -> C -> C++
There was recently a post on HN about the missing programming paradigm (http://wiki.c2.com/?ThereAreExactlyThreeParadigms).
With the emergence of smarter tools, programming will get easier in one way or another, releasing the coder from a lot of pain (as C and C++ released us from tedious, painful assembler).
However, I am quite sure that it won't replace programmers since our job is actually not to code but more to solve a given problem with a range of tools.
Smarter tools will probably boost the productivity of a single person to handle bigger and more complex architectures, or other kinds of new problem areas will come up. Research will go faster. Products will get developed faster. Everything will kind of speed up. Nevertheless, the problems to solve and implement will remain until there's some kind of AGI.
If there's an AGI smart enough to solve our problems, most jobs have probably already been replaced.
>It could allow non-coders to simply describe an idea for a program and let the system build it
Well, that's the thing: "describing" an idea to the point where you are explicit enough to get the actual behavior you want means you are basically writing code. Granted, you might have to add superfluous constructs and syntax to make it fit the programming languages we currently have, but that is a different kind of problem.
This just goes to show that nobody's job is safe. The crux is that AI won't get tired and won't balk at changing requirements. Many people will think it's still very far, until it suddenly arrives. The horse buggy drivers never saw the automobiles coming.
On the other hand, until that day arrives, we could take the technology in a direction that actually helps users/customers and software engineers communicate better. The software engineer could have the user feed requirements to a bot, then systematically identify and explain issues in the requirements based on the bot's output.
Programming is translation of requirements written in natural language, graphical notation, and mathematical notation to a deterministic syntax. There has been much improvement in making these syntaxes more closely match these older forms of communication, and I'm sure this is one tool that will be used in the future. However until AI can gather its own requirements, there will always be a translation step that people must do. When this technology comes to fruition, requirements gathering will become programming.
What we need are libraries of (automatically verifiable) requirements: open source components which can be composed together, like we do with code today.
Then we, possibly assisted by "AI", can assemble a solution matching the requirements.
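One minimal way to picture "composable, automatically verifiable requirements": each requirement is a predicate over a candidate implementation, and a solution is accepted only if every composed requirement holds. All names here are hypothetical, and real systems would use property-based testing or formal specs rather than single examples:

```python
def req_sorted_output(fn):
    # Requirement 1: the function returns its input in sorted order.
    data = [3, 1, 2]
    return fn(data) == sorted(data)

def req_input_untouched(fn):
    # Requirement 2: the function must not mutate its argument.
    data = [3, 1, 2]
    fn(data)
    return data == [3, 1, 2]

def compose(*reqs):
    # A composed spec: a candidate passes only if every requirement holds.
    return lambda fn: all(r(fn) for r in reqs)

spec = compose(req_sorted_output, req_input_untouched)
assert spec(sorted)                            # accepted
assert not spec(lambda xs: list(reversed(xs))) # wrong output -> rejected
```

An "AI" assembler of solutions would then search for, or generate, a candidate that the composed spec accepts.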
Basically, as time progresses we keep creating languages that are "higher level".
Isn't that the idea behind https://en.wikipedia.org/wiki/Prolog ?
So I guess we are all out of jobs in a couple years...
Much like non-artificial intelligence, then.
The problem is human-computer interaction: specifying what is needed is harder than the actual coding.
I would guess this will be implemented in an IDE and will reduce coding times tremendously.