
ChatGPT-J: The Privacy-First, Self-Hosted Chatbot Built on GPT-J's Powerful AI

54 points | jarrell_mark | 3 years ago | colab.research.google.com

9 comments


two_handfuls | 3 years ago

"Privacy-First", but also working in a colab notebook - meaning running on someone else's machine? That doesn't seem very private.

was_a_dev | 3 years ago

Download the notebook and run locally?

johntash | 3 years ago

Can this be run locally without beefy GPUs by any chance?

quesomaster9000 | 3 years ago

ggml (https://github.com/ggerganov/ggml) has a GPT-J example; the 6B-parameter model runs happily on a CPU with 16 GB of RAM and 8 cores, at a couple of words per second, no GPUs necessary.

    gptj_model_load: ggml ctx size = 13334.86 MB
    gptj_model_load: memory_size =  1792.00 MB, n_mem = 57344
    gptj_model_load: model size = 11542.79 MB / num tensors = 285
    main: number of tokens in prompt = 12

    main: mem per token = 16179460 bytes
    main:     load time =  7463.20 ms
    main:   sample time =     3.24 ms
    main:  predict time =  4887.26 ms / 232.73 ms per token
    main:    total time = 13203.91 ms
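For anyone who wants to reproduce this, a rough sketch of the build-and-run steps, following the pattern in the ggml README at the time (treat the exact script and flag names as assumptions if the repo has since moved on):

    # Build the gpt-j example from the ggml repo
    git clone https://github.com/ggerganov/ggml
    cd ggml && mkdir build && cd build
    cmake .. && make -j4 gpt-j

    # Fetch a pre-converted 6B ggml model (helper script ships with the repo)
    ../examples/gpt-j/download-ggml-model.sh 6B

    # Run inference on the CPU: -m selects the model file, -p the prompt
    ./bin/gpt-j -m models/gpt-j-6B/ggml-model.bin -p "This is an example"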

jerpint | 3 years ago

There have been CPU implementations of LLaMA (7B parameters, comparable in size) with very impressive performance.

ops | 3 years ago

I haven't used this yet, but I am currently running GPT-J on my Mac Studio, so I suspect so.

jarrell_mark | 3 years ago

It should work with about 12 GB of GPU RAM.

I got it to load on a GTX 1070 with 8 GB of GPU RAM, but then it crashed before it could generate a response.

It needs less RAM than regular GPT-J because the weights are converted to 8-bit.
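For anyone who wants to try the same thing outside the notebook, here is a minimal sketch of loading GPT-J with 8-bit weights, assuming the Hugging Face transformers + bitsandbytes route (the linked notebook may handle its quantization differently):

    # Minimal sketch: GPT-J with int8 weights via transformers + bitsandbytes
    # (assumes: pip install transformers accelerate bitsandbytes)
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = AutoModelForCausalLM.from_pretrained(
        "EleutherAI/gpt-j-6B",
        device_map="auto",   # spread layers across available GPU memory
        load_in_8bit=True,   # quantize weights to 8-bit at load time
    )

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))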