
NanoGPT: The simplest, fastest repository for training medium-sized GPTs

114 points | ulrischa | 1 year ago | github.com

21 comments

[+] paradite|1 year ago|reply
It takes a bit of effort to set up and train a proper GPT-2 model, especially if you are not familiar with GPU drivers and Python environments.

Also, don't try to train GPT-2 on your own machine, as it takes days even with a good gaming GPU.

If you are interested in trying it out but don't have the right GPU or OS, you can check out the guide I wrote on how I did it on Azure with a T4 GPU instance:

https://16x.engineer/2023/12/29/nanoGPT-azure-T4-ubuntu-guid...
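For anyone weighing the DIY route, the nanoGPT README's GPT-2 reproduction boils down to roughly the following (commands paraphrased from the repo; verify against the current README, and note that the OpenWebText prep alone needs substantial disk space and time):

```shell
# Install dependencies (per the nanoGPT README)
pip install torch numpy transformers datasets tiktoken wandb tqdm

# Tokenize OpenWebText into train.bin / val.bin (large download, slow)
python data/openwebtext/prepare.py

# Train GPT-2 (124M) on an 8-GPU node; expect days of wall-clock time
torchrun --standalone --nproc_per_node=8 train.py config/train_gpt2.py
```

On a single smaller GPU like a T4, the same `train.py` runs without `torchrun`, but the wall-clock time stretches accordingly, which is why the guide above leans on a cloud instance.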

[+] alok-g|1 year ago|reply
Thanks a lot for sharing this.

How much does the net cost come out to for a run? (Am not a startup.)

Also, any smaller datasets you would recommend that still demonstrate some useful capability?

Thanks.

[+] CapsAdmin|1 year ago|reply
Something from previous discussions about the whole

"Rather than paying $50k up front for 8 x A100's, you can just rent some GPU's for $1.2k to train the whole thing in 4 days"

that felt off to me is that it completely ignores the compute time spent exploring new ideas, failing, tweaking the training data, etc.
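For what it's worth, the quoted $1.2k figure is consistent with straightforward rental arithmetic. The per-GPU-hour rate below is an illustrative assumption, not a quoted price:

```python
# Back-of-the-envelope check of the "$1.2k in 4 days" figure quoted above.
# The $1.50/A100-hour rental rate is an assumed round number for illustration.
gpus = 8
hours = 4 * 24               # 4 days of wall-clock time
rate_per_gpu_hour = 1.50     # assumed A100 rental rate, USD

cost = gpus * hours * rate_per_gpu_hour
print(cost)  # 1152.0 -- in the ballpark of the quoted $1.2k
```

The point stands either way: this only prices one clean end-to-end run, not the failed experiments and data tweaks around it.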

[+] mikeqq2024|1 year ago|reply
Link for the previous discussion? Which model, dataset, training strategy?
[+] VagabundoP|1 year ago|reply
Would any GPU - even an old one - work for training these models?

I have a bunch of little home apps/ideas for some specific training I'd like to do, but don't currently have any recent GPU of note. Just a few old ones in my attic somewhere.

EDIT: ah someone has posted a blog post in the comments with some info

EDIT2: to be clearer, I was talking about fine-tuning. I assume that's quicker/less intensive. I really need to check out some online courses, I think.

[+] mromanuk|1 year ago|reply
Yes, fine-tuning is fast, cheap, and "easy"; you can do it on a not-so-expensive GPU without issues.
[+] mromanuk|1 year ago|reply
Keep in mind that instead of training a GPT-2 model from scratch, you can opt for fine-tuning. Fine-tuning allows you to take a pre-trained model and adapt it to your specific task with much less compute and time. It leverages the existing knowledge of the model, resulting in better performance and faster development cycles.
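nanoGPT supports this directly via the `init_from` option, which loads OpenAI GPT-2 checkpoints before training. A minimal config sketch, modeled on the repo's `config/finetune_shakespeare.py` (field names may differ slightly across repo versions, so check the shipped config):

```python
# Hypothetical nanoGPT fine-tuning config, passed as: python train.py config/finetune_my_task.py
# Based on config/finetune_shakespeare.py; verify field names against your checkout.

out_dir = 'out-my-task'
dataset = 'shakespeare'        # any dataset prepared into train.bin / val.bin
init_from = 'gpt2'             # start from the pretrained 124M GPT-2 checkpoint

# Small-scale settings: fine-tuning needs far fewer steps than pretraining
batch_size = 1
gradient_accumulation_steps = 32
max_iters = 20

# A low, constant learning rate to gently adapt the pretrained weights
learning_rate = 3e-5
decay_lr = False
```

The key design choice is `init_from = 'gpt2'` instead of `'scratch'`: all the expensive pretraining is inherited, and only a short, low-learning-rate run on your own data remains.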
[+] serverlord|1 year ago|reply
We need a simpler way for GPTs to get mass adoption, and I don't think the GPT Store from OpenAI will do that.

We need a unique interface with a standardized process to help people make use-case-specific GPTs.

Let's see what the future holds.

[+] paradite|1 year ago|reply
I actually built a purpose-made UI (desktop app) for coding with ChatGPT, because I found that the chat interface is not ideal for daily coding tasks.

Curious what you think about it: https://prompt.16x.engineer/