I was pretty sceptical of Copilot when it was announced, but after having used it for a while, I think the real issue is that it's been sold as something it's not.
What it does offer for me, in practice, is basically "clever autocomplete". Say I wrote a little bit of code:
bounds.max.x = bounds.min.x + width
Then copilot will suggest that the next line is:
bounds.max.y = bounds.min.y + height
That's reasonably smart, and it adapts pretty well to whatever code you are currently writing. It makes writing those little bits of slightly repetitive code less annoying, and I like that – it feels like a useful productivity boost in the same way as regular "dumb" autocomplete is.
However, I'd say too much of the initial coverage was along the lines of "you can just write the function name and it will fill in the body!". That will work if you want to write `isEven` or `capitalise` or something, which again is quite nice. But I have found that basically any other prompt produces a function that either does something completely wrong, or introduces a subtle and non-obvious bug. And this actually feels kind of dangerous – I've absolutely caught myself a few times just going "it'll probably be fine" and accepting whatever Copilot has spewed out.
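To make the "subtle and non-obvious bug" failure mode concrete, here's a hypothetical illustration (invented for this purpose, not an actual Copilot output): a clamp function whose suggested body looks plausible at a glance but swaps min and max.

```python
# Hypothetical example of a plausible-looking but subtly wrong completion.
# (Invented for illustration; not an actual Copilot output.)

def clamp(value, lo, hi):
    """Correct: constrain value to the closed interval [lo, hi]."""
    return max(lo, min(value, hi))

def clamp_suggested(value, lo, hi):
    """Subtly wrong: min and max are swapped, so every input
    collapses to the lower bound."""
    return min(lo, max(value, hi))

print(clamp(15, 0, 10))            # 10 (correct)
print(clamp_suggested(15, 0, 10))  # 0 (wrong, but easy to miss in review)
```

The two bodies differ by a single word swap, which is exactly the kind of thing that slips past a quick "it'll probably be fine" review.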
I'll remain sceptical of it as a system that sucks up and regurgitates vast lakes of open-source code and sends my every keystroke to Microsoft, among other things. But it's definitely something I'd pay for a local version of.
I'd be VERY surprised if the data from all these interactions doesn't result in a product valuable enough that people care less and less about Microsoft harvesting their data, because of the value produced in exchange.
By 2025, either these models will have hit a wall on diminishing returns and it will take a complete pivot to some other approach to continue to see notable gains, or the products will continue to have improved at compounding rates and access will have become business critical in any industry with a modicum of competition.
It's also really good at writing test cases. Once you've written a couple test cases manually it can take the "it should" string and auto-complete the body of the test with very high accuracy.
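That workflow looks something like the following sketch (pytest style; `slugify` and the test names are made-up examples): after a couple of hand-written cases, the descriptive name of the next test is often enough for the body to be completed.

```python
# Sketch of the "write the description, autocomplete the body" workflow.
# (pytest style; `slugify` is a made-up function for illustration.)

def slugify(text):
    return "-".join(text.lower().split())

def test_it_should_lowercase_the_input():
    assert slugify("Hello") == "hello"

def test_it_should_join_words_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# By this point, typing the next descriptive test name is usually
# enough for the assistant to fill in a plausible body:
def test_it_should_collapse_repeated_spaces():
    assert slugify("a   b") == "a-b"
```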
I largely agree with this. Occasionally it suggests a few lines that are more or less what I want, but rarely more than that.
One use case that I reach for relatively often is writing a comment to describe the one line of code I want when I can’t remember the specifics of the API I’m working with. E.g. # reset numpy random seed, and copilot pretty reliably gives me what I want.
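For that particular prompt, the one-liner it produces would be something like this (using NumPy's legacy global-seed API; the seed value is arbitrary):

```python
import numpy as np

# reset numpy random seed
np.random.seed(42)  # the one-liner the comment prompt yields

# Resetting the seed makes the sequence reproducible:
first = np.random.rand(3)
np.random.seed(42)
second = np.random.rand(3)
assert (first == second).all()
```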
I agree with your last point - with an added dash of being wary of getting dependent on sophisticated tools that have one vendor.
I'm not skeptical of the idea at all, however. I see it as the rough analogy of mathematicians using proof assistants. This is a coding assistant.
It's generally true that the vast, vast majority of code one writes is boilerplate or otherwise rote. The core logic and control flow that defines the meaningful bits of any complex program are nested in the middle of tons of error handling, logging, data munging, incidental book-keeping, and other very prosaic tasks.
It will be more than just handy to have a friendly intelligence assist with these tasks. And it doesn't require much "intelligence" (of the sort required for figuring out what the program should do in the first place and how to architect it).
Once the technology gets firmed up a bit, the productivity multiplier will be palpable and impossible to ignore. Programmers using these tools will complete tasks faster than those who don't, and the market will shift.
Further extensions of this technology lead to automatic generation of unit tests, automatic refactoring, etc. These all exist as very specialized tools that are explicitly coded, but it's clear to see that adding a bit of ML-driven intuition to them would greatly increase their scope and effectiveness.
This could be useful for languages like Go where simple repetition and loops are preferred over clever language features like object or array destructuring that can add cognitive or performance overhead.
I've been using Copilot for 5 months while building another AI productivity tool. It's changed my habits and I'm becoming a bit dependent on it for autocompletion. It feels so good just hitting TAB and moving on.
I know some developers that aren't embracing it the same way I do, making judgements without even trying it. "This is the future", I tell them, "it makes your life much easier", but there's resistance.
Prompt engineering is quite interesting too, and it may turn into a job skill later. While using Codex, I understood the importance of knowing how to ask a non-human for the right things. It's a bit like talking to Alexa in the early days, in the sense that I couldn't talk to Alexa like a human yet; I had to be specific, clear and intentional. I still see that people who are less experienced with a smart personal assistant struggle to get their commands done.
If you love this technology and would love to try it for Explaining Code in your browser, check out the extension ExplainDev (https://explain.dev). It works on GitHub, StackOverflow and documentation websites.
Disclaimer: I built ExplainDev, with the help of Copilot.
Looks interesting. I signed up and received an email with the chrome store link, but the email didn't include an access code (it mentions it, "...and the access key below.", but the only thing below is the store link). Is this a bug or am I missing something?
EDIT: A second email showed up about 30min later that did contain the code.
We tried using OpenAI/Davinci for SQL query authoring, but it quickly became obvious that we are still really far from something the business could find value in. The state of the art as described below is nowhere near where we would need it to be:
To be clear, we haven't tried this on actual source code (i.e. procedural concerns), so I feel like this is a slightly different battle.
The biggest challenge I see is that the queries we would need the most assistance with are the same ones that are the rarest to come by in terms of training data. They are also incredibly specific in their edge cases, many times requiring subjective evaluation criteria to produce an acceptable outcome (e.g. a recursive query vs. 5k lines of unrolled garbage).
While others here have touched on the idea that Codex has changed their coding habits, what I find interesting is that Codex has changed how I write code altogether. For example, I had to connect a database to an API a little while ago. Obviously I had the option to use an ORM as one would normally. But instead, I just wrote out all the SQL commands and wrapper functions in one big file. Since it was all tedious and predictable, Codex helped me to write it in just a few minutes, and I didn't need to muck around with some complex ORM. These are the tradeoffs I'm personally excited about.
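A sketch of that style (using Python's built-in sqlite3; the schema, table, and function names are invented for illustration): one big file of plain SQL plus thin wrapper functions, the kind of tedious, predictable code an assistant can churn out quickly.

```python
import sqlite3

# Plain SQL plus thin wrapper functions, no ORM.
# (sqlite3 from the stdlib; schema and names invented for illustration.)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

def create_user(name, email):
    cur = conn.execute(
        "INSERT INTO users (name, email) VALUES (?, ?)", (name, email)
    )
    conn.commit()
    return cur.lastrowid

def get_user(user_id):
    return conn.execute(
        "SELECT id, name, email FROM users WHERE id = ?", (user_id,)
    ).fetchone()

user_id = create_user("Ada", "ada@example.com")
print(get_user(user_id))  # (1, 'Ada', 'ada@example.com')
```

Each wrapper follows the same shape, so after the first one or two, the rest practically complete themselves.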
Until the Copilot product is accessible to the public on the same terms as the free software on which it is based, it is another example of corporate exploitation of the commons, or, in other words, open theft.
If it separately becomes a long term trend for private companies to use neural net regurgitation to allow them to use free software without complying with the GPL, free software must be completely abandoned.
Copilot is wonderful, when you use it appropriately. It autocompletes and has a lot of good knowledge of the code.
I had an interview that usually lasts an hour, where people usually don't get through all the code. I did it in 15 minutes or so, mostly because I knew what needed to happen and the code autocompleted quickly as I started writing.
It really helps you focus on what you want to do, without having to think about syntax or the correct variable name.
You still need to be a good developer, but it helps you greatly to do your work.
I wonder what the copyright status of things written by Copilot is? Since a human didn't write some of the code produced, does that mean that those portions aren't copyrightable?
Related article "If Software is My Copilot, Who Programmed My Software?"
I don't wanna jinx it because it's certainly a useful tool but I wonder what will happen when Copilot is widely available and CS students start handing in their programming assignments done by Copilot (and only Copilot).
Easy to check: just put in the first few words, and if Copilot's autocompletion is identical to the remaining solution, you've just caught a cheater.
Or use names in the assignment that are banned to prevent Copilot from working. For example, Q_rsqrt is banned because Copilot had a tendency to just copy-paste the original Quake source, including comments, verbatim.
Just started using it with Clojure and Scheme. Nowhere near as useful as with popular languages. My thoughts may change as I use it more, but I'll say it's barely better than without right now.
karmicthreat | 4 years ago
So GitHub Copilot is pretty neat. I entered this code:
Then when I put in result_dataFrame['yearOverYearChange'], Copilot gave me the completion. So it's a fancy context-aware autocomplete, and a big productivity booster on code entry.
sireat | 4 years ago
However, it is shockingly good for filling out Python snippets – i.e. smarter autocomplete when teaching.
Popular libraries like Pandas, Beautiful Soup, and Flask are perfect for this.
About 80% of the time it will fill out the code exactly the way I would want. About 10% of the time it will be something you want to correct or nudge.
Then about 10% of the time it will be a howler or an anti-pattern.
Then you simply explain to students why it is not so great to, say, insert something at the beginning of a Python list.
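The list-insertion case mentioned above is the classic one: `list.insert(0, x)` shifts every existing element, so it is O(n) per call, while `collections.deque.appendleft` is O(1).

```python
from collections import deque

# Anti-pattern: inserting at the front of a list is O(n) per call,
# because every existing element shifts one slot to the right.
items = []
for i in range(5):
    items.insert(0, i)

# Better: a deque supports O(1) appends at either end.
dq = deque()
for i in range(5):
    dq.appendleft(i)

assert items == [4, 3, 2, 1, 0]
assert list(dq) == items  # same result, without the quadratic cost
```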
Edit: Copilot is also great for filling out comments when teaching – it can generate Captain Obvious "what" comments and also some nice "why"-type comments as well.
kromem | 4 years ago
We're only a year into this thing existing.
Zerverus | 4 years ago
Take the English strings.en.json, copy it to strings.th.json, open them side by side, delete the English text, and watch Copilot fill in the Thai translation.
abecedarius | 4 years ago
(Haven't used Copilot yet because it didn't have Emacs support.)
ExtraE | 4 years ago
Also: what’s involved in writing a browser extension? What was the experience like?
superasn | 4 years ago
https://web.archive.org/web/20220315101151/https://www.wired...
P.S. Just came to know our government has banned the archive.is domain. Wth, GoI :/
yunohn | 4 years ago
https://community.cloudflare.com/t/archive-is-not-accessible...
bob1029 | 4 years ago
https://yale-lily.github.io/spider
https://arxiv.org/abs/2109.05093
https://github.com/ElementAI/picard
seibelj | 4 years ago
My experience with basically everything that is marketed as AI.
m00dy | 4 years ago
Here are my notes:
1) I suggest every developer try it at least once.
2) It will increase productivity for sure.
3) The bugs caused by Copilot will trigger a new nerve in your brain. So it is dangerous, but danger is good.
andybak | 4 years ago
On the whole, the things that Copilot "steals" are the things that probably shouldn't be legally protected in the first place.
lolinder | 4 years ago
I'm curious to know more about this story. A job interview? Did the interviewer know you used Copilot and not care?
pabs3 | 4 years ago
https://sfconservancy.org/blog/2022/feb/03/github-copilot-co...