ayush--s | 4 years ago
1. A lot of people on this thread are concerned about licensing issues with GPL etc. I'm sure GitHub will restrict the beta until it figures that stuff out.
2. I wonder if our corrections to the code suggested by the model will eventually be fed back into the model, and whether that will lead to differential pricing - if I let it see my code, I get charged less.
3. I believe a mini-GPT-3 model is where it's at. GPT-3 (and similar) models look to be too big to run locally. I've been using TabNine for the past year or so and it gives me anywhere between a 5-10% productivity boost. But one of the main reasons it works so well is that it also trains on my repo. TabNine is based on GPT-2, from what I've heard.
4. Prediction: Microsoft is probably going to milk GPT-3. Expect a bumpy ride.
5. In all likelihood, this will be a great tool for making developers productive rather than taking their jobs - at least for roles that involve more than rote coding.
6. Eventually, all tasks with enough data around them will see automation using AI.