Once the model is trained, it doesn't need to keep the training data around.
Presumably if you're training a robot to use a new/different tool, you'll want the ability to train on site. If you buy an IHOP restaurant, the pancake robot in the kitchen ought to be repurposable as a hamburger robot for your cheeseburger business. Omelette-scrambling robots could be trained to mix small batches of cookie dough. Etc etc. Toyota is already working on a framework for this.
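Retraining on site usually wouldn't mean training from scratch; the usual trick is transfer learning: freeze the pretrained backbone and fit only a small task head on the new data. A toy sketch of that idea (every name, shape, and number here is made up for illustration, not from any real robot stack):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pretrained "pancake" backbone: a frozen
# feature extractor whose weights are NOT updated during retraining.
W_backbone = rng.normal(size=(16, 8))

def features(x):
    # Frozen forward pass through the pretrained layers.
    return np.tanh(x @ W_backbone)

# Small on-site dataset for the new task (toy inputs and labels).
X = rng.normal(size=(200, 16))
y = (X.sum(axis=1) > 0).astype(float)

# Only this small task head is trained on site.
w_head = np.zeros(8)

for _ in range(500):  # plain logistic-regression gradient descent
    p = 1.0 / (1.0 + np.exp(-(features(X) @ w_head)))
    grad = features(X).T @ (p - y) / len(y)
    w_head -= 0.5 * grad  # update the head only; backbone stays frozen

p = 1.0 / (1.0 + np.exp(-(features(X) @ w_head)))
acc = float(((p > 0.5) == y).mean())
print(f"training accuracy after head-only fine-tune: {acc:.2f}")
```

The point is just that the on-site update touches a few parameters and a small batch of new examples, not the original training corpus.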
These are the trained weights and biases; the training data itself is unknown in size but could be terabytes. I've no idea how to even guess at its size, but it doesn't all need to be in RAM at the same time.
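It doesn't need to be in RAM all at once because training loops stream the data in minibatches: only one batch is ever materialised at a time, however large the dataset. A minimal sketch (synthetic stand-in data; the sizes are arbitrary):

```python
import numpy as np

def stream_batches(n_examples, batch_size, rng):
    """Yield one minibatch at a time; at most batch_size examples
    are ever held in memory, regardless of total dataset size."""
    for start in range(0, n_examples, batch_size):
        size = min(batch_size, n_examples - start)
        # In practice each batch would be read from disk or network;
        # here we synthesise it on the fly as a stand-in.
        yield rng.normal(size=(size, 4))

rng = np.random.default_rng(0)
total = 0   # examples seen across the whole epoch
peak = 0    # largest number of examples resident at once
for batch in stream_batches(n_examples=1000, batch_size=64, rng=rng):
    total += batch.shape[0]
    peak = max(peak, batch.shape[0])

print(total, peak)  # all 1000 examples seen, never more than 64 in memory
```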
ben_w|2 years ago
GPT-3 is about 175 billion parameters (though I have no idea how many bits per parameter OpenAI uses at inference-time), and is apparently trained on 45 TB of data[0]
[0] Caution: citation was first hit on google, YMMV — https://www.springboard.com/blog/data-science/machine-learni...
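Whatever precision they actually run at, the back-of-envelope weight size is just parameters × bytes-per-parameter, which comes out orders of magnitude smaller than 45 TB of training data:

```python
PARAMS = 175e9  # GPT-3 parameter count

# Bytes per parameter at some common numeric precisions. As noted
# above, OpenAI's actual inference precision is unknown.
precisions = {"fp32": 4, "fp16": 2, "int8": 1}

for name, nbytes in precisions.items():
    gb = PARAMS * nbytes / 1e9
    print(f"{name}: ~{gb:,.0f} GB just for the weights")
```

So even at full fp32 the weights are ~700 GB, versus 45 TB of training text.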