ProlificInquiry's comments

ProlificInquiry | 3 years ago | on: Ownership of AI-Generated Code Hotly Disputed

It's far worse than that: to attribute a particular output to an input, you don't just need the input data, you need the gradient updates that data caused during the training run. A couple hundred gigabytes of input tokens times 175 billion parameters equals... impossibility.
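A back-of-envelope sketch of that multiplication (the token count and fp32 gradient size here are illustrative assumptions, not figures from the thread):

```python
# Rough storage cost of keeping one full gradient snapshot per training token.
# All numbers are illustrative assumptions for a GPT-3-scale model.
PARAMS = 175e9          # 175 billion parameters
TOKENS = 300e9          # assumed training-token count (order of magnitude)
BYTES_PER_FLOAT = 4     # fp32 gradient entry

per_token_bytes = PARAMS * BYTES_PER_FLOAT   # one gradient snapshot
total_bytes = per_token_bytes * TOKENS       # all snapshots

print(f"per token: {per_token_bytes / 1e9:.0f} GB")
print(f"total: {total_bytes / 1e21:.0f} ZB")
```

Even one snapshot is ~700 GB, and the full archive lands in the hundreds of zettabytes, far beyond all storage ever manufactured.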

ProlificInquiry | 3 years ago | on: Ownership of AI-Generated Code Hotly Disputed

This quote seems to fundamentally misunderstand what transformers are doing. Technically I suppose you could save every gradient update from every input token and do some weighted averaging to show which inputs affected a particular output the most, but storing all those gradient updates would be unimaginably space-consuming. "Feasible" is doing a lot of work there.

It's very hard for people to get away from the idea that GPT is "copying" something, but that's not what it's doing. The reality is, to get the exact artifact which produced the code in question, you need "Call me Ishmael" from Moby Dick just as much as the Linux kernel source.

ProlificInquiry | 4 years ago | on: Want to be an actuary? Odds are, you’ll fail the test

It was extremely misleading for the author to list questions from Exam P as if they were representative of the exams at large. I'm an actuary, and I passed that exam with a 10 (the maximum grade) after a night's review of my probability class notes.

Then I failed my next exam twice. The first couple exams are very simple, and then it ramps up considerably.

As others have mentioned, the hard part of these exams is not the individual material, which is never far beyond undergrad level, but the sheer quantity of it. It's also very unmotivated in some cases, so it's hard to piece together a full picture of the material that you're trying to learn.
