tekacs|19 days ago
It would be great if the HN title could be changed to something more like 'OpenAI requiring ID verification for access to gpt-5.3-codex'?
> Thank you all for reporting this issue. Here's what's going on.
> This rerouting is related to our efforts to protect against cyber abuse. The gpt-5.3-codex model is our most cyber-capable reasoning model to date. It can be used as an effective tool for cyber defense applications, but it can also be exploited for malicious purposes, and we take safety seriously. When our systems detect potential cyber activity, they reroute to a different, less-capable reasoning model. We're continuing to tune these detection mechanisms. It is important for us to get this right, especially as we prepare to make gpt-5.3-codex available to API users.
> Refer to this article for additional information. You can go to chatgpt.com/cyber to verify and regain gpt-5.3-codex access. We plan to add notifications in all of our Codex surfaces (TUI, extension, app, etc.) to make users aware that they are being rerouted due to these checks and provide a link to our “Trusted Access for Cyber” flow.
> We also plan to add a dedicated button in our /feedback flow for reporting false positive classifications. In the meantime, please use the "Bug" option to report issues of this type. Filing bugs in the GitHub issue tracker is not necessary for these issues.
sdwr|18 days ago
Same idea as shadow banning, ban waves, and generic errors for sensitive actions
Dylan16807|19 days ago
But right now I want to focus on what one of the more recent comments pointed out. "Cyber-capable"? "Cyber activity"? What the hell is that? Use real words.
nerdsniper|18 days ago
> The term “cyber capability” means a device or computer program, including any combination of software, firmware, or hardware, designed to create an effect in or through cyberspace.
So apparently, OpenAI's response is written by and for an audience of lawyers and government wonks, which differs greatly from the actual user base, who tend to be technical experts rather than policy nerds. Echoes of SOC 2 being written by accountants but advertised as if it's an audit of computer security.
revolvingthrow|18 days ago
"This rerouting is related to our efforts to protect our profit margins. The $current_top_model is our most expensive model to date. It can be used as an effective tool to get semi-useful results, but it can also be exploited for using a lot of tokens which costs us money, and we take profitability seriously. When our systems detect potential excessive token generation, they reroute to a different, less-capable reasoning model. We’re continuing to tune these detection mechanisms.
In the meantime, please buy a second $200/mo subscription."
kingstnap|18 days ago
User: "There is a bug in foo(); it's not validating auth correctly"
OpenAI: User detected engaging in cyber activity - access restricted.
And the rest is history.
red-iron-pine|18 days ago
"foreign intelligence is using codex to write novel exploits from scratch, that work"
cactusplant7374|18 days ago
What is their rationale for hiding it? OpenAI was deceptive. Paying customers did not realize they were being rerouted. Zero transparency.
Your suggested title doesn't represent what actually happened.
avaer|19 days ago
Pulling a switcheroo on the user behind the scenes, whatever the justification, is another issue, and I think the more interesting one.
It's a stepping stone to "we will reconfigure your AI to do whatever we want whenever we want, because security/think of the children".