* Trained on 17GB of code from the top 10,000 most popular Debian packages. The source files were deduplicated using a process similar to the OpenWebText preprocessing (basically a locality-sensitive hash to detect near-duplicates).
* I used the [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) code for training. Training took about 1 month on 4x RTX8000 GPUs.
* You can download the trained model here: https://moyix.net/~moyix/csrc_final.zip and the dataset/BPE vocab here: https://moyix.net/~moyix/csrc_dataset_large.json.gz https://moyix.net/~moyix/csrc_vocab_large.zip
Happy to answer any questions!
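For anyone curious what that kind of near-duplicate detection roughly looks like, here is a minimal sketch in C (illustrative only, not the actual preprocessing code; the shingle size, number of hash functions, and threshold are made up): it builds a MinHash signature over 5-character shingles of each file, and the fraction of matching signature slots estimates the Jaccard similarity between two files.
```
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_HASHES 64   /* signature length (made up for this sketch) */
#define SHINGLE    5    /* shingle size in characters (made up)       */

/* FNV-1a hash of one shingle, mixed with a per-slot seed. */
static uint64_t hash_shingle(const char *s, size_t len, uint64_t seed) {
    uint64_t h = 1469598103934665603ULL ^ seed;
    for (size_t i = 0; i < len; i++) {
        h ^= (unsigned char)s[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* MinHash signature: the minimum hash value seen for each seed. */
static void minhash(const char *text, uint64_t sig[NUM_HASHES]) {
    size_t n = strlen(text);
    for (int k = 0; k < NUM_HASHES; k++)
        sig[k] = UINT64_MAX;
    for (size_t i = 0; i + SHINGLE <= n; i++)
        for (int k = 0; k < NUM_HASHES; k++) {
            uint64_t h = hash_shingle(text + i, SHINGLE,
                                      (uint64_t)k * 0x9E3779B97F4A7C15ULL);
            if (h < sig[k])
                sig[k] = h;
        }
}

/* The fraction of matching slots estimates the Jaccard similarity
   of the two files' shingle sets. */
static double similarity(const uint64_t a[NUM_HASHES], const uint64_t b[NUM_HASHES]) {
    int same = 0;
    for (int k = 0; k < NUM_HASHES; k++)
        if (a[k] == b[k])
            same++;
    return (double)same / NUM_HASHES;
}

int main(void) {
    const char *file_a = "int array[6];\narray[5] = 1;\n";
    const char *file_b = "int array[6];\narray[5] = 2;\n";  /* near-duplicate of file_a */
    uint64_t sig_a[NUM_HASHES], sig_b[NUM_HASHES];

    minhash(file_a, sig_a);
    minhash(file_b, sig_b);
    printf("estimated similarity: %.2f\n", similarity(sig_a, sig_b));
    /* a dedup pass would keep only one file of any pair above some threshold, e.g. 0.8 */
    return 0;
}
```
A real pipeline would additionally bucket the signatures with locality-sensitive hashing so that only likely near-duplicates are ever compared pairwise.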
If you want to look at some more fake samples, here are 256 generated snippets. These ones start at the beginning of the file each time, so you should get a better sense of context:
https://moyix.net/~moyix/unconditional_samples.txt
I got a sample with a function that assigned the same expression to three variables, and then declared a void function with documentation stating "returns true on success, false otherwise". Apparently that code was written by a human, which makes me doubt either the correctness of the website or the quality of the code it was fed.
The first code it showed me had getXXX() methods returning void, each of which contained nothing but a printf of the same string variable with no apparent connection to XXX, along with invalid format strings. Surely code this nonsensical has to be generated. Yet when I clicked "GPT2", it said I was wrong.
This made me worried, so I went and spot-checked 5-6. Using the "cheat sheet" I was always able to guess correctly, so I think the site is working fine.
The list of packages the real snippets are drawn from is here (maybe if you want to avoid using them... ;) ):
https://moyix.net/~moyix/sample_pkgnames.txt
Note that the GPT samples are prompted with 128 characters randomly selected from those same packages, so you will see GPT2-generated code that mentions the package name etc. However, these packages were not used for training.
This looks like overfitting to me. Some of the GPT samples were definitely real code, or largely real code. One looked like something from Xorg; another looked like it came straight from the COLLADA SDK. It's really hard to define what "truly new code" is if it's just the same code copy-pasted in a different order. Blah blah, Ship of Theseus, etc.
The generated snippets are prompted with 128 characters from real code (but not code from the training data), so they can often pick up on the name of the project etc.
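As a rough sketch of how such a prompt could be extracted (illustrative only, not the actual sampling code; the file handling and offset choice here are assumptions), one could simply grab 128 characters from a random position in a real source file and feed that to the model:
```
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define PROMPT_LEN 128

/* Illustrative only: pull PROMPT_LEN characters from a random offset in a
   source file to use as a generation prompt, roughly as described above. */
int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <source-file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    if (size <= PROMPT_LEN) { fclose(f); return 1; }   /* file too small */

    srand((unsigned)time(NULL));
    long start = rand() % (size - PROMPT_LEN);          /* random window start */
    fseek(f, start, SEEK_SET);

    char prompt[PROMPT_LEN + 1] = {0};
    if (fread(prompt, 1, PROMPT_LEN, f) != PROMPT_LEN) { fclose(f); return 1; }
    fclose(f);

    printf("%s\n", prompt);   /* this text would be fed to the model as the prompt */
    return 0;
}
```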
I got some code related to the VICE emulator. It looked pretty real, referring to concepts that make sense in the context of a C64 emulator, but the results said it was GPT, not real code. It even had the correct GPL license matching that project. It seems the GPT model has learned quite a bit about the real projects it was fed as input.
There was some code about TIFF headers, and it was apparently GPT2-generated.
TIFF is a real thing, so some human was involved in some part of that code; it has just been garbled up by GPT2... In other words, the training set shows quite visibly in the result.
Would be nice if the back button worked so you could see what you guessed wrong. This is a good example of POST being used unnecessarily; serving each sample from an idempotent GET URL would let the back button work.
For the ones that were just part of a header file, listing a bunch of instance variables and function names, it seems impossible. For actual code it is possible, though still quite difficult; I spent too long looking for some logical inconsistency that gave it away.
This was so much harder than I thought it was going to be. I would get a few right, then be absolutely sure of the next one and be wrong. After a while I felt like I was noticing aesthetic differences between the GPT and real code rather than distinguishing between the two based on their content. Very interesting...
I wonder how likely it is that code invariants found in the training set are preserved in GPT-2/3's output. In other words, if I train GPT-2 on C source produced by Csmith (a program generator which, I believe, produces C programs without undefined behaviour), would the programs produced by GPT-2/3 also be free of undefined behaviour?
I understand that GPT-2/3 is just a very smart parrot that has no semantic knowledge of what it is outputting. For example, take a very dumb Markov chain that was "trained" on the following input:
a.c
```
int array[6];
array[5] = 1;
```
b.c
```
int array[4];
array[3] = 2;
```
I guess a Markov chain could theoretically produce the following code:
out.c
```
int array[4];
array[5] = 1;
```
which is undefined behaviour, even though it was produced from two programs in which no undefined behaviour is present. A better question would be: how can we guarantee that certain program invariants (like the absence of undefined behaviour) are preserved in the produced code? Or, if there are no guarantees, can we calculate a probability? (Sorry, not an expert on machine learning, just excited about a potentially new way to fuzz programs. Technically, one could instrument C programs with sanitizers and compute backward static slices from the sanitizer instrumentation to the beginning of the program, and you get a sample of a program without undefined behaviour... so there is already the potential for the training set to be expanded beyond what Csmith can provide.)
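A minimal sketch of the sanitizer idea (assuming nothing beyond standard GCC/Clang sanitizer flags, not a full slicing pipeline): wrap the recombined out.c in a complete program, and a sanitizer build makes the invariant violation observable, which is the kind of oracle one could use to filter generated samples.
```
#include <stdio.h>

int array[4];

int main(void) {
    /* The recombined statement from out.c above: writing one element past
       the end of a 4-element array is undefined behaviour. Compiling with
       e.g. `gcc -fsanitize=address,undefined` reports it at run time
       (and some compilers warn about the constant index at compile time). */
    array[5] = 1;
    printf("%d\n", array[3]);
    return 0;
}
```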
This was a fun exercise; I definitely think this could be difficult to suss out for greener devs, or even more experienced ones. It'd be hilarious to have this model power a live screensaver in lieu of actually being busy at times.
I was confused by most of the examples I got because they started in the middle of a block comment. That seems clearly wrong, but was it an artefact of the presentation rather than the generation?
It's easy after a while... all the terribly written code is human-made and the clean, tidy code is GPT2. I, for one, welcome our new programming overlords.
There was one sample where the GPT-written code had nonsensical grammar in the comments, and another where it had an overloaded > operator with 3 parameters (is that even valid C++?).
This is actually quite impressive. Try reading the comments in the code. The comments often make perfect sense in the local context even if it’s GPT-2 gibberish.
The real examples have worse comments at times.
The only flaw is that it shows fake code most of the time so you can game it that way.
A lot of the real code seems superficially nonsensical as well!
Functions with lots of arguments while the body consists of "return true;"
I guess it shows what AI often tells us about ourselves: that what we do makes much less sense than we think it does, and is thus easy to fake.
How is it possible to churn out so much music, so many books, or so much software? Well, because most creative works are either not very original or quite procedural or random.
And this kind of work could indeed be automated (or examined to see whether it needs to be done in the first place).
I found this impressively hard at first glance. It just goes to show how difficult it is to get context in an unfamiliar codebase. I think with any amount of knowledge of anything allegedly involved (or, you know, a compiler), these examples would fall apart, but it's still an achievement.
I'm also pretty sure there are formatting, commenting, and in-string-text "tells" that reliably indicate whether something is GPT2. Maybe I should try training an AI to figure that out...
I was always able to correctly identify GPT2, but on a few occasions I misidentified human-written code as being written by GPT2, usually when the code was poorly written or the comments were unclear.
GPT2's code looks like correct code at a glance but when you try to understand what it's doing, that's when you understand that it could not have been written by a human.
It's similar to the articles produced by GPT3; they have the right form but no substance.
I ran into some value setters which simply logged their parameters but did not store them. Hah! Obviously GPT2, not understanding that a setter should, well, set.
Wrong, of course. Now maybe the human concerned was far down some inheritance tree and simply wanted to document misuse of a deliberately limited class, but felt assert()ing would have been too punitive. Or maybe it was indeed "right form but no substance", just authored by an SWE.
It's fascinating that the main reason this is hard is that some of the human code is bad enough that it's hard to believe it's not GPT-2 output. (The first time this happened for me, I had to look it up to convince myself it's really human code.)
It reminds me of how GPT-3 is good at producing a certain sort of bad writing.
My guess as to why this happens: we humans have abilities of logical reasoning, clarity, and purposefulness that GPT doesn't have. When we use those abilities, we produce writing and code that GPT can't match. If we don't, though, our output isn't much better than GPT's.
I got 4/4 GPT-2 guesses right. It is impressive, but the "tell" I've found so far is poor structure in the logic of how something is arranged. For example: a bunch of `if` statements in sequence without any `else` clauses, some directly opposing prior clauses. Another example was repeating the same operation a few times on individual lines of code, which most human programmers would write in a simpler manner (a rough sketch of this pattern follows below).
It's harder to do with some of the smaller excerpts though, and I'm sure there are probably examples of terrible human programmers who write worse code than GPT-2.
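Purely as an illustration (this is invented, not an actual sample from the site), here is the kind of structure that comment describes: sequential if statements with no else, a condition that directly contradicts the previous one, and the same operation repeated across several lines.
```
#include <stdio.h>

/* Hypothetical GPT-style "tell": no else clauses, a condition that
   directly opposes the one before it, and a needlessly repeated operation. */
static int configure(int mode) {
    int flags = 0;
    if (mode == 1)
        flags = 1;
    if (mode != 1)       /* directly opposes the previous condition... */
        flags = 1;       /* ...yet does the same thing anyway */
    flags = flags | 2;   /* the same operation repeated line by line, */
    flags = flags | 2;   /* where a human would write it once (or not at all) */
    flags = flags | 2;
    return flags;
}

int main(void) {
    printf("%d\n", configure(1));
    return 0;
}
```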
If only. Humans leave dead comments all the time, and I was wrong when I guessed "GPT" wrote a sample based on that. Being confusing isn't a reliable tell either, unless it's a syntax error.
This is difficult... because these models are just regurgitating after training on real code. Fun little site but I hope nobody reads too much into this.
pabs3|5 years ago
https://codesearch.debian.net/
ivraatiems|5 years ago
Did people find it to be as challenging when you showed it to them as some of us are here? Did you expect that level of complexity?
minimaxir|5 years ago
It's possible training for a month may be too much.
sdflhasjd|5 years ago
Sadly, I can't go back and see it again.