hinterlands|7 months ago
I don't think this makes OpenAI special. It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
Look at it this way: the flip side of "incredibly bottoms-up" from this article is that there are people who feel rudderless because there is no roadmap or a thing carved out for them to own. Similarly, the flip side of "strong bias to action" and "changes direction on a dime" is that everything is chaotic and there's no consistent vision from the executives.
This cracked me up a bit, though: "As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things. It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!
harmonic18374|7 months ago
Given how vengeful Altman can reportedly be, this goes double for OpenAI. This guy even says they scour social media!
Whether subconsciously or not, one purpose of this post is probably to help this guy’s own personal network along; to try and put his weirdly short 14-month stint in the best possible light. I think it all makes him look like a mark, which is desirable for employers, so I guess it is working.
m00x|7 months ago
44520297|7 months ago
Every, and I mean every, technology company scours social media. Amazon has a team that monitors social media posts to make sure employees, their spouses, their friends don’t leak info, for example.
unknown|7 months ago
[deleted]
rrrrrrrrrrrryan|7 months ago
I worked for a few years at a company that made software for casinos, and this was absolutely not the case there. Casinos absolutely have fully shameless villains at the helm.
stickfigure|7 months ago
lucianbr|7 months ago
Do these people have even minimal self-awareness?
darkmarmot|7 months ago
Bratmon|7 months ago
Much more common for OpenAI, because you lose all your vested equity if you talk negatively about OpenAI after leaving.
rvz|7 months ago
There is a reason why there was cult-like behaviour on X amongst the employees supporting bringing back Sam as CEO when he was kicked out by the OpenAI board of directors at the time.

"OpenAI is nothing without its people"

All of "AGI" (which in practice meant the Lamborghinis, penthouses, villas and mansions for the employees) was on the line and on hold if that equity went to 0, or if they were denied the chance to sell it because they openly criticized OpenAI after they left.
fragmede|7 months ago
tedsanders|7 months ago
torginus|7 months ago
Considering the high stakes, money, and undoubtedly the egos involved, the writer might have acquired a few bruises along the way, or might have lost out in some political infighting (remember how they mentioned that multiple Codex prototypes were built; it must've sucked to see someone else's version chosen instead of your own).

Another possible explanation is that the writer simply had enough: enough money to last a lifetime, a family just started, a mark already made on the world, and no longer compelled (or able) to keep up with methed-up fresh college grads.
matco11|7 months ago
Well it depends on people’s mindset. It’s like doing a hackathon and not winning. Most people still leave inspired by what they have seen other people building, and can’t wait to do it again.
…but of course not everybody likes to go to hackathons
sensanaty|7 months ago
We're talking about Sam Altman here, right, the dude behind Worldcoin? A literal Bond-villainesque biological data harvesting scheme?
ben_w|7 months ago
I'd be more worried about the guy who tweeted “If this works, I’m treating myself to a volcano lair. It’s time.” and more recently wore a custom T-shirt that implies he's like Vito Corleone.
teiferer|7 months ago
> I returned early from my paternity leave to help participate in the Codex launch.
10 years from now, the significance of having participated in that launch will be ridiculously small (unless you tell yourself it was a pivotal moment of your life, even if it objectively wasn't), whereas those first weeks with your newborn will never come back. Kudos to your partner though.
baggachipz|7 months ago
usaar333|7 months ago
Aurornis|7 months ago
The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.
I was at a company that turned into the most toxic place I had ever worked due to a CEO who decided to randomly get involved with projects, yell at people, and even fire some people on the spot.
Yet a lot of people wrote glowing stories about their time at the company on blogs or LinkedIn because it was beneficial for their future job search.
> It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
For the posts that make HN I rarely see it that way. The recent trend is for passionate employees who really wanted to make a company work to lament how sad it was that the company or department was failing.
eddythompson80|7 months ago
Yeah I had to re-read the sentence.
The positive "Farewell" post is indeed the norm. Especially so from well known, top level people in a company.
bigiain|7 months ago
Sure, but this bit really makes me wonder if I'd like to see what the writer is prepared to do to other people to get to his payday:
"Nabeel Quereshi has an amazing post called Reflections on Palantir, where he ruminates on what made Palantir special. I wanted to do the same for OpenAI"
iLoveOncall|7 months ago
Spooky23|7 months ago
Usually the level 1 people are just motivated by power and money to an unhealthy degree. The worst are true believers in something. Even something seemingly mild.
Wilder7977|7 months ago
I don't think people who work on products that spy on people, create addiction or worse are as naïve as you portrayed them.
ben_w|7 months ago
FWIW, I have positive experiences about many of my former employers. Not all of them, but many of them.
yen223|7 months ago
Timwi|7 months ago
The operative word is “trying”. You can “try” to do the right thing but find yourself restricted by various constraints. If an employee actually did the right thing (e.g. publish the weights of all their models, or shed light on how they were trained and on what), they get fired. If the CEO or similarly high-ranking exec actually did the right thing, the company would lose out on profits. So, rationalization is all they can do. “I'm trying to do the right thing, but.” “People don't see the big picture because they're not CEOs and don't understand the constraints.”
newswasboring|7 months ago
This is a great insight. But if we think a bit deeper about why that happens, I land on this: there is nobody forcing anyone to do the right thing. Our governments and laws are geared more towards preventing people from doing the wrong thing, which of course can only be identified once someone has done the wrong thing and we can see the consequences and prove that it was indeed the wrong thing. Sometimes we fail to do even that.
saghm|7 months ago
unknown|7 months ago
[deleted]
curious_cat_163|7 months ago
I liked my jobs and bosses!
tptacek|7 months ago
TeMPOraL|7 months ago
> Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!
Those are the easy cases, and correspondingly, you don't see much of those - or at least few are paying attention to companies talking like that. This is distinct from saying "X is going to directly benefit the society, and we're merely charging for it as fair compensation of our efforts, much like a baker charges you for the bread" or variants of it.
This is much closer to what most tech companies try to argue, and the distinction seems to escape a lot of otherwise seemingly sharp people. In threads like this, I surprisingly often end up defending tech companies against such strawmen - because come on, if we want positive change, then making up a simpler but baseless problem, calling it out, and declaring victory isn't helping to improve anything (but it sure does drive engagement online, making advertisers happy; a big part of why the press does this too on a routine basis).
And yes, this applies to this specific case of OpenAI as well. They're not claiming "LLMs are going to indirectly benefit the society because we're going to get rich off them, and then use that money to fund lots of nice things". They're just saying, "here, look at ChatGPT, we believe you'll find it useful, and we want to keep doing R&D in this direction, because we think it'll directly benefit society". They may be wrong about it, or they may even knowingly lie about those benefits - but this is not trickle-down economics v2.0, SaaS edition.
vlovich123|7 months ago
I mean, that's a leap. There could be a Bond villain who sets up incentives such that the people who rationalize the way he wants are the ones who get promoted / have their voices amplified. Just because individual workers generally seem like they're trying to do the best thing doesn't mean the organization isn't set up specifically and intentionally to make certain kinds of "shady" decisions.
energy123|7 months ago