item 39584901

Ask HN: Is anyone considering cancelling their ChatGPT subscription?

56 points| osigurdson | 2 years ago | reply

I like the functionality and use it a lot but it has become so annoyingly unstable lately. Literally, every time I try to use it, it fails in one way or another. Perhaps my $20 / month isn't that interesting to them in the grand pursuit of AGI but I want to use the service, not merely fund research.

69 comments

[+] throwitaway222|2 years ago|reply
I don't know why anyone pays for 4 when 3.5 is already a billion times better than what we had 2 years ago. 3.5-turbo, at least from an API perspective, is an extremely cost-effective way to add more intelligent decision making to your applications and backend processes. We're going to use GPT-3.5-turbo to help us decide if a specific thing is probably "this" or "that" or "one of the following"... Far easier to use it that way than rolling our own crappy bag-of-words neural network that uses word2vec. A tiny bit slower, but worth it.
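That "this or that" pattern is roughly a constrained-label classifier. A minimal sketch, with the actual chat-completion call stubbed out so the surrounding logic is runnable without a key (the labels and `call_model` here are hypothetical, not from the comment):

```python
# Sketch: using a chat model as a lightweight classifier.
# call_model is a stub; swap in a real chat-completion call
# (e.g. an OpenAI client request with model="gpt-3.5-turbo").

LABELS = ["bug_report", "feature_request", "other"]

def build_prompt(text: str) -> str:
    """Ask the model to answer with exactly one allowed label."""
    return (
        "Classify the following text as exactly one of: "
        + ", ".join(LABELS)
        + ". Reply with only the label.\n\nText: "
        + text
    )

def call_model(prompt: str) -> str:
    # Stub standing in for the real API call.
    return "bug_report"

def classify(text: str) -> str:
    raw = call_model(build_prompt(text)).strip().lower()
    # Fall back to "other" if the model answers off-script.
    return raw if raw in LABELS else "other"

print(classify("The app crashes when I click save."))  # → bug_report
```

The off-script fallback matters in practice: even with "reply with only the label" in the prompt, the model occasionally adds extra words, so the caller should never trust the raw string blindly.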
[+] jasonjmcghee|2 years ago|reply
I use ChatGPT 4 daily (usually one of my custom GPTs). 3.5 is next to useless for anything I'm reaching to ChatGPT for.

For pipeline / LLM project stuff, sometimes I can use 3.5, but again, most interesting things I take on require 4. 3.5 is so unreliable, and terrible at detailed / specific prompts.

Fine tuning 3.5 helps massively though. Especially for things like using it as a general classifier.

[+] fragmede|2 years ago|reply
4 is still better than 3.5. Enough to justify the (low) cost. Most recently, 3.5 couldn't identify that February 29th is a valid date this year, but gpt-4 does.
[+] muzani|2 years ago|reply
Also, we're not just paying for GPT-4. The paid version comes with image support, so you can take photos of log dumps or diagrams. It also does some image manipulation stuff for me and passes me the code.

There's also DALL-E 3, but it's more of a bonus than something I'd pay for. PDF is cool too, but it's not the only service out there.

I'd likely have unsubscribed if it were only 4, because the API version is better.

[+] Gooblebrai|2 years ago|reply
The image feature is game changing compared to 3.5
[+] weird-eye-issue|2 years ago|reply
For some things it makes a huge difference. We use several different models in production depending on the exact use case
[+] hackerlight|2 years ago|reply
3.5 wastes my time with hallucinated code. 4 wastes less time. A lot less.
[+] cloudking|2 years ago|reply
No, the hours it saves per month are worth a lot more than $20. It really depends on your use case; personally, I find it superior to any other model I've tried for coding, debugging, and troubleshooting server issues.
[+] osigurdson|2 years ago|reply
I agree, but it barely works anymore for me. That is the main issue I have.
[+] kreijstal|2 years ago|reply
Funny that you posted this; I canceled it an hour ago. If I need it, I'll use the GPT-4 API. Why did I cancel? It can't think anymore, it is arrogantly overconfident, it can't detect its own mistakes anymore, and it has goldfish memory. For the record, it was brilliant and the best LLM ever, but now it feels like GPT-2 levels of quality. It all started 2 weeks ago, when ChatGPT got silently updated (https://chat.openai.com/share/512002b1-ceb3-48b5-9a29-d44b63...). In the beginning of that chat it is decent, but towards the end you can see certain gibberish (when the update happened). After creating a new chat, the quality seriously went down. I am looking for replacements; not sure where everyone went. What is everyone using nowadays?
[+] muzani|2 years ago|reply
I like how it flipped from being easily gaslit to arrogantly overconfident lol
[+] atleastoptimal|2 years ago|reply
The fact that their now 1.5k token system prompt is forced into every response makes it not worth it, even though it's "unlimited". API makes a lot more sense for most purposes.
[+] osigurdson|2 years ago|reply
What does the 1.5K system prompt include? I'm surprised they don't use fine tuning for this aspect.
[+] jorisboris|2 years ago|reply
Does the API come without the system prompt? And do you still need the subscription to use the API?
[+] JojoFatsani|2 years ago|reply
Get an API key and an interface like MacGPT. I spend like $4 a month on tokens now.
[+] eveb|2 years ago|reply
I paid for a subscription today, cancelled it today, and was refunded. It was freezing non-stop, it kept adding color to my black-and-white line art and then told me it had no control over that, and then it told me I had run a single prompt too many times. I said that was because it kept getting it wrong, and that it would do it again, but it slowed me down every time it refused my commands.

The final straw was when it shut down and said I'd used too many requests and to come back near midnight the next day. So I'd waste a whole day of not being able to use it, because I'd be in bed by midnight. If I'm paying, I want to be able to use the thing for more than 20 images, and I don't want to have to argue with the chatbot about what I'm doing. It should have no say or thoughts about how many times I've run a prompt.
[+] K0IN|2 years ago|reply
We switched to LibreChat [0]. It's a great app if you don't use GPT-4 often enough to hit $20 worth of tokens, and it also supports plugins. If you use more than $20, then just stay with GPT Plus.

https://docs.librechat.ai/

[+] stavros|2 years ago|reply
We deployed LibreChat at $JOB; it's much better than paying $25/mo each for a hundred people. We spend around $200/mo instead of thousands.
[+] willnz|2 years ago|reply
Cancelled this month.

I started getting too many responses that were clearly incorrect. I'd point out the correction, to which the response was, "You are indeed correct [rephrases the answer correctly]."

Okay, great, but what about all the responses where I don't actually know the topic enough to know it's incorrect?

[+] fddrdplktrew|2 years ago|reply
Isn't ChatGPT 4 available (for free) in Bing Chat anyway?
[+] remixer-dec|2 years ago|reply
No. It is labeled as GPT-4, but last time I checked it was using GPT-3.5 under the hood. I asked about something recent that GPT-4 should know (can't remember what exactly, maybe about Stable Diffusion), asked it not to use web search, and it answered like GPT-3.5.
[+] jascination|2 years ago|reply
Yep, I use this very frequently (I use edge browser fwiw). Now I can be disappointed by gpt for free!
[+] DoingFedTime|2 years ago|reply
Yes. I asked it to write a 50-word description of some text I gave it; it wrote a single 10-word sentence. I told it that was wrong and to do it again, this time writing a 50-word description. It failed again. On the 5th try I did it myself. This is a basic example. I've been using ChatGPT for over a year and have loved it up until now, but it feels like it was completely lobotomized. Half the response is it repeating your question or prompt back to you.
[+] gtirloni|2 years ago|reply
I'm considering switching to Gemini when the mobile app is available in my country and the web app gets better.

ChatGPT lately has its confidence really high and hallucinates in the middle of really basic technical stuff. It will say what a command should be for something really mundane, like the `ip` command in Linux, and then sprinkle in options that don't exist... you start to lose confidence.

I think LLMs are better than search engines but if I have to fact check everything, I'll switch to another LLM or go back to a search engine.

I don't know what has changed in ChatGPT but it's worse lately.

[+] remixer-dec|2 years ago|reply
I use a fork of https://github.com/Krivich/GPT-Over-API. I edited it to support recent models and added cost estimation that keeps track of all the money spent across requests. For most tasks GPT-3.5 is fine, but for more complex or recent-data-related tasks, GPT-4 performs better. Why this over some fancy web UI? Well, this can be hosted locally and will not steal your OpenAI key, and it allows you to set max tokens and select history items on every request.
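That kind of running cost estimation is simple to sketch: multiply token counts by per-1K-token prices and accumulate. The prices below are placeholders, not current OpenAI pricing; check the provider's pricing page before relying on them:

```python
# Sketch: running cost tracker for chat API usage.
# Prices are USD per 1K tokens and are placeholder values.

PRICES = {
    "gpt-3.5-turbo": {"prompt": 0.0005, "completion": 0.0015},
    "gpt-4": {"prompt": 0.03, "completion": 0.06},
}

class CostTracker:
    def __init__(self):
        self.total_usd = 0.0

    def record(self, model: str, prompt_tokens: int, completion_tokens: int) -> float:
        """Add one request's cost to the running total and return it."""
        p = PRICES[model]
        cost = (prompt_tokens / 1000) * p["prompt"] \
             + (completion_tokens / 1000) * p["completion"]
        self.total_usd += cost
        return cost

tracker = CostTracker()
tracker.record("gpt-4", prompt_tokens=1200, completion_tokens=400)
print(round(tracker.total_usd, 4))  # → 0.06
```

The token counts come back in the API response's usage fields, so the tracker only needs to be fed those numbers after each request.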
[+] motoxpro|2 years ago|reply
I canceled mine and I now pay for Gemini. The only things I really use it for are brainstorming and coding, and it's better for me at both of those by a long shot.
[+] lmiller1990|2 years ago|reply
How are people using GPT-4 via the API? I signed up and loaded some money, but I still can't use GPT-4 via the API. Only 3.5-turbo.

I still find ChatGPT 4 well worth the money. The coding is way better than 3.5. I wonder what their system prompt is; I don't get results as good from the API (maybe I am doing it wrong).

[+] hugovie|2 years ago|reply
To avoid both the instability and the strict limits, you can use the ChatGPT API. By adding the API key to a client like MindMac[0], you get a pleasant UI with numerous additional features.

[0] https://mindmac.app

[+] culopatin|2 years ago|reply
Is the cost via api access included in the $20?
[+] longnguyen|2 years ago|reply
Many of my customers canceled their ChatGPT subscription and switched 100% to the API.

If you want to use GPTs or to generate images with DALL-E, then ChatGPT Plus is a no-brainer, I guess.

But if your focus is on GPT-4, then I highly recommend using the API instead.

There are multiple pros to using an API key:

- You pay for what you use. I've been using an API key exclusively, and most of the time it costs me around $5-$10 a month

- Your data is not used for training. This is important for a privacy-minded user like me. (You can disable this in ChatGPT too, but you will lose chat history.)

- No message limit. There are rate limits to prevent abuse, but generally you don't have the message caps that ChatGPT has

- You can choose previous GPT models or other open-source models

Depending on the application, you can also get these:

- Access to multiple AI services: OpenAI, Azure, or OpenRouter

- Local LLMs via Ollama etc.

- Build custom AI workflows

- Voice search & text-to-speech etc.

- Deeper integrations with other apps & services

There are also a few cons:

- No GPTs support yet

- If you use DALL-E a lot, then ChatGPT Plus is more affordable; generating images via the DALL-E API can be quite expensive

Edit: some tips when using an API key:

- You pay for tokens used (basically, how long your questions and the AI's answers are). The price per chat message isn't expensive, but you usually need to send the whole conversation to OpenAI with every message, which makes it expensive. Make sure to pick an app that allows you to limit the chat context.

- Don't use GPT-4 for tasks that don't require deeper reasoning capabilities. I find that GPT-3.5-turbo is still very good at simple tasks like grammar fixes, improving your writing, translations...

- You can even use local LLMs if your machine can run Ollama

- Use different system prompts for different tasks: for example, I have a special prompt for coding tasks and a different system prompt for writing tasks. It usually gives much better results.
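The context-limiting tip above can be sketched as a token-budget trim over the message history. This is a rough illustration; the chars/4 estimate is a crude stand-in for a real tokenizer, and the budget numbers are arbitrary:

```python
# Sketch: trim chat history to an approximate token budget before
# sending it to the API, keeping the system prompt and the most
# recent messages that fit.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token. A real app would
    # use the model's actual tokenizer.
    return max(1, len(text) // 4)

def trim_history(messages, budget_tokens=3000):
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(estimate_tokens(m["content"]) for m in system)
    kept = []
    for m in reversed(rest):  # walk newest-first
        cost = estimate_tokens(m["content"])
        if used + cost > budget_tokens:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))

history = [
    {"role": "system", "content": "You are a coding assistant."},
    {"role": "user", "content": "x" * 400},  # old, ~100 tokens
    {"role": "user", "content": "y" * 40},   # recent, ~10 tokens
]
trimmed = trim_history(history, budget_tokens=20)
# Only the system prompt and the newest message survive the budget.
```

Most API chat clients do some variant of this, since the alternative is resending an ever-growing conversation and paying for the same tokens on every turn.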

Shameless plug: I've been building a native ChatGPT app for Mac called BoltAI[0], give it a try

[0]: https://boltai.com

[+] osigurdson|2 years ago|reply
I think that image generation, while very impressive at first, has already become a kind of cheap parlor trick. Everyone can easily spot images created by ChatGPT, and it now just seems kind of cheesy.
[+] jasonjmcghee|2 years ago|reply
Unstable due to what? Network error? Or the model itself providing bad results?

If it's about model output, I highly recommend custom GPTs. Taking 15 minutes to an hour to play around with a custom prompt to get it to work how you want is incredibly worth it.

[+] cableshaft|2 years ago|reply
I cancelled it about four or five months ago. I've brought up ChatGPT 3.5 like three times since then. Definitely don't use it enough to be worth paying for it right now.