
ChatGPT Pro

813 points | meetpateltech | 1 year ago | openai.com | reply

1197 comments

[+] fudged71|1 year ago|reply
OpenAI is racing against two clocks: the commoditization clock (how quickly open-source alternatives catch up) and the monetization clock (their need to generate substantial revenue to justify their valuation).

The ultimate success of this strategy depends on what we might call the enterprise AI adoption curve - whether large organizations will prioritize the kind of integrated, reliable, and "safe" AI solutions OpenAI is positioning itself to provide over cheaper but potentially less polished alternatives.

This is strikingly similar to IBM's historical bet on enterprise computing - sacrificing the low-end market to focus on high-value enterprise customers who would pay premium prices for reliability and integration. The key question is whether AI will follow a similar maturation pattern or if the open-source nature of the technology will force a different evolutionary path.

[+] submeta|1 year ago|reply
I actually pay 166 Euros a month for Claude Teams. Five seats. And I only use one. For myself. Why do I pay so much? Because the normal paid version (20 USD a month) interrupts the chats after a dozen questions and wants me to wait a few hours until I can use it again. The Teams plan gives me way more questions.

But why do I pay that much? Because Claude in combination with the Projects feature, where I can upload two dozen or more files, PDFs, text, and give it a context, then ask questions in that specific context over a period of a week or longer, come back to it and continue the inquiry, all of this gives me superpowers. It feels like having a handful of researchers at my fingertips that I can brainstorm with, that I can ask to review the documents and come up with answers to my questions. All of this is unbelievably powerful.

I'd be OK with 40 or 50 USD a month for one user, but alas, Claude won't offer it. So I pay 166 Euros for five seats and use one. Because it saves me a ton of work.

[+] pentagrama|1 year ago|reply
The argument of more compute power for this plan can be true, but this is also a pricing tactic known as the decoy effect or anchoring. Here's how it works:

1. A company introduces a high-priced option (the "decoy"), often not intended to be the best value for most customers.

2. This premium option makes the other plans seem like better deals in comparison, nudging customers toward the one the company actually wants to sell.

In this case, for ChatGPT, that's:

Option A: Basic Plan - Free

Option B: Plus Plan - $20/month

Option C: Pro Plan - $200/month

Even if the company has no intention of selling the Pro Plan, its presence makes the Plus Plan seem more reasonably priced and valuable.

While not inherently unethical, the decoy effect can be seen as manipulative if it exploits customers’ biases or lacks transparency about the true value of each plan.

[+] Someone1234|1 year ago|reply
Why doesn't Pro include longer context windows?

I'm a Plus member, and the biggest limitation I run into by far is the maximum length of the context window. Context falls out of scope partway through a conversation, and I can't give it a large document that I can then interrogate.

So if I go from paying $20/month for 32,000 tokens, to $200/month for Pro, I expect something more akin to Enterprise's 128,000 tokens or MORE. But they don't even discuss the context window AT ALL.

For anyone else out there looking to build a competitor, I STRONGLY recommend you consider the context window as a major differentiator. Let me give you an example of a use case which ChatGPT simply cannot handle very well today: dump an XML file into it, then ask it questions about that file. You can attach files to ChatGPT, but it is basically pointless because it isn't able to view the entire file at once due to, again, the limited context window.
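
A rough pre-check along those lines, as a Python sketch: it assumes the tiktoken package, treats the 32k/128k figures quoted above as given, and uses a hypothetical file name.

    # Estimate whether a document fits in a model's context window before
    # attaching it. Limits below are the figures quoted in this comment.
    import tiktoken

    def count_tokens(path: str, encoding_name: str = "cl100k_base") -> int:
        """Return an approximate token count for a text file."""
        enc = tiktoken.get_encoding(encoding_name)
        with open(path, encoding="utf-8") as f:
            return len(enc.encode(f.read()))

    if __name__ == "__main__":
        tokens = count_tokens("data.xml")  # hypothetical file
        for plan, limit in [("Plus", 32_000), ("Enterprise", 128_000)]:
            verdict = "fits in" if tokens <= limit else "exceeds"
            print(f"{tokens} tokens {verdict} the {plan} window ({limit} tokens)")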

[+] A_D_E_P_T|1 year ago|reply
I just bought a pro subscription.

First impressions: The new o1-Pro model is an insanely good writer. Aside from favoring the long em-dash (—) which isn't on most keyboards, it has none of the quirks and tells of old GPT-4/4o/o1. It managed to totally fool every "AI writing detector" I ran it through.

It can handle unusually long prompts.

It appears to be very good at complex data analysis. I need to put it through its paces a bit more, though.

[+] Mordisquitos|1 year ago|reply
> Aside from favoring the long em-dash (—) which isn't on most keyboards

Interesting! I intentionally edit my keyboard layout to include the em-dash, as I enjoy using it out of sheer pomposity—I should undoubtedly delve into the extent to which my own comments have been used to train GPT models!

[+] Atotalnoob|1 year ago|reply
AI writing detectors are snake oil
[+] karaterobot|1 year ago|reply
I use the emdash a lot. Maybe too much. On MacOS, it's so easy to type—just press shift-option-minus—that I don't even think about it anymore!
[+] vessenes|1 year ago|reply
I noticed a writing style difference, too, and I prefer it. More concise. On the coding side, it's done very well on large (well as large as it can manage) codebase assessment, bug finding, etc. I will reach for it rather than o1-preview for sure.
[+] imgabe|1 year ago|reply
Writers love the em-dash though. It's a thing.
[+] dougb5|1 year ago|reply
It's encouraging to hear that it's a better writer, but I wonder if "quirks and tells" can only be seen in hindsight. o1-pro's quirks may only become apparent after enough people have flooded the internet with its output.
[+] heyjamesknight|1 year ago|reply
> Aside from favoring the long em-dash (—)

This is a huge improvement over previous GPT and Claude, which use the terrible "space, hyphen, space" construct. I always have to manually change them to em-dashes.

[+] layer8|1 year ago|reply
> which isn't on most keyboards

This shouldn’t really be a serious issue nowadays. On macOS it’s Option+Shift+'-', on Windows it’s Ctrl+Alt+Num- or (more cryptic) Alt+0151.

The Swiss army knife solution is to configure yourself a Compose key, and then it’s an easy mnemonic like for example Compose 3 - (and Compose 2 - for en dash).

[+] _cs2017_|1 year ago|reply
No internet access makes it very hard to benefit from o1 pro. Most of the complex questions I would ask require google search for research papers, language or library docs, etc. Not sure why o1 pro is banned from the internet, was it caught downloading too much porn or something?
[+] veidr|1 year ago|reply
Macs have always been able to type the em dash — the key combination is ⌥⇧- (Option-Shift-hyphen). I often use them in my own writing. (Hope it doesn't make somebody think I'm phoning it in with AI!)
[+] davidmurphy|1 year ago|reply
Anyone who read "The Mac is not a typewriter" — a fantastic book of the early computer age — likely uses em dashes.
[+] jwpapi|1 year ago|reply
Wait, how did you buy it? I'm just getting forwarded to the Teams plan I already have. Sitting in Germany; tried a US VPN as well.
[+] cableshaft|1 year ago|reply
Some autocorrect software automatically converts two hyphens in a row into an emdash. I know that's how it worked in Microsoft Word and just verified it's doing that with Google Docs. So it's not like it's hard to include an emdash in your writing.

Could be a tell for emails, though.

[+] galleywest200|1 year ago|reply
This is interesting, because at my job I have to manually edit registration addresses that use the long em-dash as our vendor only supports ASCII. I think Windows automatically converts two dashes to the long em-dash.
[+] aucisson_masque|1 year ago|reply
> It managed to totally fool every "AI writing detector" I ran it through.

For now. As AI power increases, AI-powered writing-detection tools also get better.

[+] pests|1 year ago|reply
> the long em-dash (—) which isn't on most keyboards

On Windows it's Windows Key + . to get the emoji picker; it's in the Symbols tab, or find it in recents.

[+] pjs_|1 year ago|reply
Long emdash is the way -- possible proof of AGI here
[+] rahimnathwani|1 year ago|reply
Would you mind sharing any favourite example chats?
[+] the_clarence|1 year ago|reply
You can use the emdash by writing dash twice -- it works in a surprising number of editors and rendering engines
[+] ed_elliott_asc|1 year ago|reply
Does it still hallucinate? This, for me, is key; if it does, it will be limited.
[+] az226|1 year ago|reply
What’s the context window?
[+] griomnib|1 year ago|reply
I consistently get significantly better performance from Anthropic at a literal order of magnitude less cost.

I am incredibly doubtful that this new GPT is 10x Claude unless it is embracing some breakthrough, secret, architecture nobody has heard of.

[+] minimaxir|1 year ago|reply
The main difficulty when pricing a monthly subscription for "unlimited" usage of a product is the 1% of power users whose extreme usage can kill any profit margin for the product as a whole.

Pricing ChatGPT Pro at $200/mo filters it to only power users/enterprise, and given the cost of the o1 API, it wouldn't surprise me if those power users burn through $200 worth of compute very, very quickly.
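
A back-of-envelope sketch of that burn rate, in Python; the per-token prices are assumptions based on o1's API list pricing at the time (roughly $15 per million input tokens, $60 per million output), and the usage pattern is hypothetical.

    # Rough estimate of API-equivalent compute cost for a heavy o1 user.
    INPUT_PER_M = 15.0    # USD per 1M input tokens (assumed)
    OUTPUT_PER_M = 60.0   # USD per 1M output tokens (assumed)

    def request_cost(input_tokens: int, output_tokens: int) -> float:
        return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

    # Hypothetical power user: 50 long requests a day, ~8k tokens in, ~4k out.
    daily = 50 * request_cost(8_000, 4_000)
    print(f"~${daily:.2f}/day, ~${daily * 30:.0f}/month")  # about $18/day, $540/month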

[+] ta_1138|1 year ago|reply
There are many use cases for which the price can go even higher. Consider recent interactions I had with people working at an interview mill: multiple people in a boiler room interviewing for companies all day long, with a computer set up so that our audio was piped to o1. They had a reasonable prompt to remove many chatbot-isms and make the answers seem human-like: we were 100% interviewing the o1 model. The operator said basically nothing, in both technical and behavioral interviews.

A company making money off of this kind of scheme would be happy to pay $200 a seat for an unlimited license. And I would not be surprised if there were many other very profitable use cases that make $200 per month seem like a bargain.

[+] blobbers|1 year ago|reply
My friend found 2 chimney sweep businesses. One charges $569, the other charges $150.

Plot twist: the same guy runs both. They do the same thing and the same crew shows up.

[+] vhayda|1 year ago|reply
Yesterday, I spent 4.5 hours crafting a very complex Google Sheets formula (think LAMBDA, MAP, LET, etc.), 82 lines of it. If I had known it would take that long, I would have just done it via Apps Script. But it was 50% kinda working, so I kept giving the model the output, and it provided updated formulas back and forth for 4.5 hours. Say my time is $100/hr - that's $450. So even if the new ChatGPT Pro mode isn't any smarter but is 50% faster, that's $225 saved in time alone. It would probably get that formula right in 10 minutes with a few back-and-forth messages, instead of 4.5 hours. Plus, I used about $62 worth of API credits in their not-so-great Playground. I see similar situations of extreme ROI every few days, let alone all the other uses. I'd pay $500/mo, but beyond that, I'd probably just stick with the Playground & API.
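
The break-even arithmetic here generalizes; a tiny Python sketch, treating the hourly rate and hours saved above as example inputs rather than facts.

    # Break-even sketch for a $200/month plan, using the commenter's figures
    # as illustrative inputs (hourly rate and hours saved are assumptions).
    PLAN_COST = 200.0

    def value_of_time_saved(hourly_rate: float, hours_saved: float) -> float:
        return hourly_rate * hours_saved

    rate = 100.0                                # $/hour, from the comment
    saved = value_of_time_saved(rate, 2.25)     # half of the 4.5h session
    print(f"Time saved is worth ${saved:.0f}; plan pays for itself: {saved >= PLAN_COST}")
    print(f"Break-even at ${rate:.0f}/h: {PLAN_COST / rate:.1f} hours saved per month")
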
[+] jsheard|1 year ago|reply
Expect more of this as they scramble to course-correct from losing billions every year to hitting their 2029 target for profitability. That money's gotta come from somewhere.

> Price hikes for the premium ChatGPT have long been rumored. By 2029, OpenAI expects it’ll charge $44 per month for ChatGPT Plus, according to reporting by The New York Times.

I suspect a big part of why Sora still isn't available is because they couldn't afford to offer it on their existing plans, maybe it'll be exclusive to this new $200 tier.

[+] boringg|1 year ago|reply
That CAPEX spend and those generous salaries have to get paid somehow ...
[+] shadowmanif|1 year ago|reply
Totally agree with Sora.

Runway is $35 a month to generate 10 second clips and you really get very few generations for that. $95 a month for unlimited 10 second clips.

I love art and experimental film. I was really excited for Sora, but it will need what feels like unlimited generation to explore what it can do. That is going to cost an arm and a leg in compute.

Something about video especially seems like it will need to be run locally to really work. Pay a monthly fee for a model that you can run as much as you want on your own compute.

[+] aiono|1 year ago|reply
Can you link to the source where they state that they want to be profitable in 2029? I am curious.
[+] doctorpangloss|1 year ago|reply
ChatGPT as a standalone service is profitable. But that’s not saying much.
[+] distalx|1 year ago|reply
Didn't they initially offer a professional plan at $42/mo?
[+] fragmede|1 year ago|reply
Sora isn't available because of the deep fake potential.
[+] EternalFury|1 year ago|reply
I give o1 a URL and I ask it to comment on how well the corresponding web page markets a service to an audience I define in clear detail.

o1 generates a couple of pages of comments before admitting it didn’t access the web page and entirely based its analysis on the definition of the audience.

[+] motoxpro|1 year ago|reply
If one makes $150 an hour, it only needs to save them about an hour and twenty minutes a month to break even. To me, it's just a non-deterministic calculator for words.

If it gets things wrong, don't use it for those things. If you can't find things that it gets right, then it's not useful to you. That doesn't mean those cases don't exist.

[+] kilroy123|1 year ago|reply
I do wonder what effect this will have on furthering the divide between the "rich West" and the rest of the world.

If everyone in the West has powerful AI and agents to automate everything, simply because we can afford it, but the rest of the world doesn't have access, what will that mean for everyone left behind?

[+] kaiwen1|1 year ago|reply
I know a guy who owned a tropical resort on an island where competition was sprouting up all around him. He was losing money trying to keep up with the quality offered by his neighbors. His solution was to charge a lot more for an experience that was really no better, and often worse, than the resorts next door. This didn't work.
[+] EcommerceFlow|1 year ago|reply
After a few hours of $200 Pro usage, it's completely worth it. Having no limit on o1 usage is a game changer; I felt so restricted before. The amount of intelligence at the palm of my hand, UNLIMITED, feels a bit scary.
[+] flkiwi|1 year ago|reply
A lot of these tools aren't going to have this kind of value (for me) until they are operating autonomously at some level. For example, "looking at" my inbox and prepping a bundle of proposed responses for items I've been sitting on, drafting an agenda for a meeting scheduled for tomorrow, prepping a draft LOI based on a transcript of a Teams chat and my meeting notes, etc. Forcing me to initiate everything is (uncomfortably) like forcing me to micromanage a junior employee who isn't up to standards: it interrupts the complex work the AI tool cannot do for the lower value work it can.

I'm not saying I expect these tools to be at this level right now. I'm saying that level is where I will start to see these tools as anything more than an expensive and sometimes impressive gimmick. (And, for the record, Copilot's current integration into Office applications doesn't even meet that low bar.)

[+] leosanchez|1 year ago|reply
I lived on a $200 monthly salary for 1.6 years. I guess AI will slowly be priced out of reach for third-world countries.
[+] rafram|1 year ago|reply
Any AI product sold for a price that's affordable on a third-world salary is being heavily subsidized. These models are insanely expensive to train, guzzle electricity to the point that tech companies are investing in their own power plants to keep them running, and are developed by highly sought-after engineers being paid millions of dollars a year. $20/month was always bound to be an intro offer unless they figured out some way to reduce the cost of running the model by an order of magnitude.
[+] paxys|1 year ago|reply
We've been conditioned to pay $10/mo for an endless stream of glorified CRUD apps, but it is very common for specialized software to cost orders of magnitude more. Think Bloomberg Terminal, Cadence, Maya, lots of CAD software (like SOLIDWORKS), higher tiers of Adobe, etc., all running in the thousands of dollars per user. And companies happily pay for them because of the value they add. ChatGPT isn't any different.
[+] beepbooptheory|1 year ago|reply
Tangent. Does anybody have good tips for working at a company that is totally bought in on all this stuff, such that the codebase is a complete wreck? I am on a very small team, and I am just a worker, not a manager or anything. It has become increasingly clear that most if not all of my coworkers rely heavily on this stuff. I spend hours trying to give the benefit of the doubt to huge amounts of inherited code, only to realize there is actually no human bottom to it. Things are merged quickly, with very little review, because, it seems, the reviewers can't really have their own opinion about stuff anymore. The idea of "idiomatic" or even "understandable" code seems foreign at this place. I asked why we don't use more structural directives in our Angular frontend, and people didn't know what I was talking about!

I don't want the discourse, or tips on better prompts. Just tips for interacting with the heavier AI-heads, to maybe encourage/inspire curiosity and care in the actual code, rather than the magic ChatGPT outputs. Or even just to talk about what they did with their PR. Not for some ethical reason, but just to make my/our jobs easier. Because it's so hard to maintain this code now; it is truly a nightmare for me every day, seeing what has been added and what now needs to be fixed. Realizing nobody actually has this stuff in their heads, it's all just Jira ticket > prompt > mission accomplished!

I am tired of complaining about AI in principle. Whatever, AGI is here, "we too are stochastic parrots", "my productivity has tripled", etc etc. Ok yes, you can have that, I don't care. But can we like actually start doing work now? I just want to do whatever I can, in my limited formal capacity, to steer the company to be just a tiny bit more sustainable and maybe even enjoyable. I just don't know how to like... start talking about the problem I guess, without everyone getting super defensive and doubling down on it. I just miss when I could talk to people about documentation, strategy, rationale..

[+] questinthrow|1 year ago|reply
Question, what stops openai from downgrading existing models so that you're pushed up the subscription tiers to ever more expensive models? I'd imagine they're currently losing a ton of money supplying everyone with decent models with a ton of compute behind them because they want us to become addicted to using them right? The fact that classic free web searching is becoming diluted by low quality AI content will make us rely on these LLMs almost exclusively in a few years or so. Am I seeing this wrong?