> Chat with GPT is an open-source, unofficial ChatGPT app with extra features and more ways to customize your experience. It connects ChatGPT with ElevenLabs to give ChatGPT a realistic human voice.
Looks like only the GUI aspects of the UI are self-hosted, while the text and speech aspects (and the bulk of the computation and IP) are provided by two SaaS services.
Self-hosted (and some degree of open) ML models are what a lot of people might want, so we should probably be careful when saying "self-hosted" right now, so as not to disappoint people or confuse the discussion about what we want.
It's somewhat ambiguous language - "self-hosted ChatGPT UI" could lead many to believe it's completely self-hosted.
However, sophisticated readers familiar with ChatGPT will know that the model and weights haven't been released, and that absent a leak, hack, or release by OpenAI, a completely self-hosted ChatGPT solution is impossible. Eventually we'll almost certainly see a completely self-hosted ChatGPT equivalent (similar to DALL-E vs. Stable Diffusion), but that's another thread for another time.
Based on my native-speaker parsing of English, "Self-hosted ChatGPT UI" is accurate, and I'm not sure how else I would write it to disambiguate between a self-hosted UI and a completely self-hosted ChatGPT with a UI.
It's a self-hosted UI for ChatGPT right now, but my primary goal is to build a good open source chat interface that can be adapted to open source chat models as they become available.
Integrating with Alpaca, LLaMA, ChatGLM, OpenChatBox, and whatever comes next should be straightforward once people figure out reliable and fast methods to run the models locally.
I have tried this and many, many other ChatGPT frontends. I recently did a search for "chatgpt" on GitHub and filtered for frontends, but I was a bit disappointed with the results. Most of them seemed to be pretty similar and didn't offer anything new or unique.
I'm really interested in finding a frontend with LangChain integration that can switch between chat mode and doc mode or something along those lines. It would be great to have a more versatile tool for communication and collaboration.
Do any of you have any recommendations or know of any projects that fit this description?
It's a shame that the screencast has no sound. I was curious about what it would sound like. I could try it myself via the Netlify app, but I don't feel very comfortable sharing my API key somewhere...
I posted a screencast on Reddit earlier in the development process with audio demonstrating the text-to-speech feature. The UI has changed a bit since then, but you can hear what the voices sound like: https://old.reddit.com/r/OpenAI/comments/11k19en/i_made_an_a...
The ChatGPT API can be a lot more useful when you use it in context: select a chunk of text on any web page, right-click, and choose summarize/translate/ELI5. Or execute your own custom prompt.
I'm building a Chrome extension called SublimeGPT [1] to do exactly that. Right now, you can log in to your existing ChatGPT account, go to any page, and open a chat overlay. The next version will have the context options.

[1] https://sublimegpt.com
Is this allowed under OpenAI's ToS? I just don't want to connect my account and then get it banned.
Edit: It seems like it is just using the API instead of the web interface, and thus charging my account each time. I originally thought it was injecting into the free web interface. But is changing the system prompt going to get me banned?
Changing the system prompt is not going to get you banned; it's something OpenAI encourages people to do when making API calls to gpt-3.5-turbo. [0]

[0]: https://platform.openai.com/docs/guides/chat/introduction
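For what it's worth, setting your own system prompt is just the first entry in the messages array of a chat completions call. Here's a minimal sketch using only the Python standard library; the endpoint and payload shape follow OpenAI's chat API docs, but the helper names, the example prompts, and the OPENAI_API_KEY environment variable convention are my own choices:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(system_prompt: str, user_message: str) -> dict:
    """Assemble a chat completions payload with a custom system prompt."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

def send_chat_request(payload: dict) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Only fires if you actually have a key set; otherwise the helpers are inert.
if __name__ == "__main__" and "OPENAI_API_KEY" in os.environ:
    payload = build_chat_request(
        "You are a pirate. Answer everything in pirate speak.",
        "How do I hard-boil an egg?",
    )
    print(send_chat_request(payload))
```

You're only billed per token for what you send and receive; the system message itself is just more tokens in the request.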
Thanks for sharing. It's really quick with responses, at least compared to a couple of other frontend projects for ChatGPT/OpenAI API clients I've used in the past few days.
What I think I need is something like this, but in bookmarklet form: I click it, it prompt()s me for the prompt and displays the output in a textarea so I can quickly paste it. Come to think of it, it should be possible to put the output straight into the clipboard, right?
The use case of course would be email/forum communication.
The problem is that you have to make a UI to embed the API key into the code, because pasting it into a URL-encoded script is bound to be a pain.
What I think would be cool is taking input automatically from highlighted text in any app, falling back to my clipboard, and then outputting to my clipboard.
That way it works in any app automatically.
Seamless system-wide clipboard reading is a big ask though, so ideally you'd want a self-hosted model running via something like llama.cpp.
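The clipboard half of that pipeline is easy to sketch. Here's a rough illustration assuming macOS (where pbpaste/pbcopy exist); the helper names and the instruction text are made up, and the actual model call is left as a placeholder:

```python
import shutil
import subprocess

def read_clipboard() -> str:
    """Grab clipboard text via pbpaste (macOS). On Linux you might
    swap in: xclip -selection clipboard -o"""
    return subprocess.run(["pbpaste"], capture_output=True, text=True).stdout

def write_clipboard(text: str) -> None:
    """Put text back on the clipboard via pbcopy (macOS)."""
    subprocess.run(["pbcopy"], input=text, text=True)

def make_prompt(selection: str,
                instruction: str = "Rewrite this as a polite forum reply:") -> str:
    """Pure helper: wrap the grabbed text in an instruction for the model."""
    return f"{instruction}\n\n{selection}"

# Guarded so this does nothing on systems without pbpaste.
if __name__ == "__main__" and shutil.which("pbpaste"):
    prompt = make_prompt(read_clipboard() or "(clipboard was empty)")
    # ...send `prompt` to whatever model you're running here, then put
    # the reply back on the clipboard in place of this placeholder:
    write_clipboard(prompt)
```

Grabbing the highlighted selection in arbitrary apps (rather than the clipboard) is the genuinely hard part, since it needs OS accessibility APIs rather than a shell command.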
Apologies if this is so unrelated as to be off-topic, but I'm new to this and so my mental model is incomplete at best and completely wrong at worst. My question is:
How would one create a "domain expert" version of this? The idea would be to feed the model a bunch of specialized, domain-specific content, and then use an app like this as the UX for that.
Either you can try it with a longer system prompt, or wait until OpenAI releases a fine-tuning API for the gpt-3.5-turbo model. System prompts aren't designed to be very long, so fine-tuning is definitely what you'd be looking for. But fine-tuning is currently only offered for the older models, so that route is outdated at this point.
I guess you could also tack on an extra layer before the actual API call, and make your own system that injects key bits of info from a more specific data set into the prompt. But given OpenAI's rate of new releases, it might be a safe bet to wait the couple of weeks until they update the fine-tuning API.
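That extra layer can start out as something as crude as keyword overlap between the question and your domain snippets, with the best matches prepended to the prompt. Everything below (function names, scoring scheme, prompt wording) is a hypothetical sketch of that idea, not anything OpenAI provides:

```python
def score(snippet: str, question: str) -> int:
    """Crude relevance score: number of words shared with the question."""
    return len(set(question.lower().split()) & set(snippet.lower().split()))

def augment_prompt(question: str, corpus: list[str], top_k: int = 2) -> str:
    """Prepend the top_k most question-relevant snippets to the prompt."""
    ranked = sorted(corpus, key=lambda s: score(s, question), reverse=True)
    context = "\n".join(ranked[:top_k])
    return (
        "Use the following background material if it is relevant.\n"
        f"{context}\n\n"
        f"Question: {question}"
    )
```

In practice you'd swap the word-overlap score for embedding similarity, but the shape of the layer (retrieve, then stuff the prompt) stays the same, and the token limit caps how much context you can inject per call.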
You really want it integrated with an OpenAI API clone rather than directly integrated. Otherwise, interoperability will suffer greatly as new and improved models are released.
I like it. The chat.openai.com frontend is very slow and frequently breaks, so I would consider using this.
Have you considered adding different TTS providers? It doesn't get better than ElevenLabs right now, but they are also much more expensive than, for example, the Azure neural voices.
True, and the free version does it a lot, almost on purpose.
The paid version is a lot faster and doesn't break as often, but it still breaks (e.g. for the last two days, the chat list in the sidebar disappeared and it showed a message saying "don't worry, your chats will show up eventually").
Do you know if people get charged for prompts on the original ChatGPT site now that the API is out? Or is it still free for users of the original site?
I can't wait to test this! As others have mentioned, the "free" chat frontend is slow, and the "Plus" one is not much better. Also, at $20/month, based on my usage, it's actually more expensive than using the API.
The last hurdle: as ChatGPT is not GDPR compliant, it would be really interesting/useful to find a way to "hide" the queries from OpenAI and prevent the use of your input in future training: basically, a self-hosted, non-leaking ChatGPT.
Assuming the model was available, how big are the models and what kind of hardware is necessary to run the instance?
Edit: I think the title was changed. Dang, can you please show revision history? Otherwise I can't discuss properly.