> ...it's not just about saving costs – it's about saving the planet
There's something that doesn't sit right with me about this statement, and I'm not sure what it is. Are you sure you didn't just join for the money? (edit: cool problems, too)
Probably because "making the world a better place" has been used as a trope so much in the industry that it's made it into a TV show [1]. It's fine to be passionate about your job. It's fine to be paid well. You don't need to make us believe that you're Mother Teresa on top of it.
[1]: https://www.youtube.com/watch?v=B8C5sjjhsso
Reminds me of when I was younger and thought of companies like Google and Tesla as a force for good that will create and use technology to make people's lives better. Surely OpenAI and these LLM companies will change the world for the better, right? They wouldn't burn down our planet for short-term monetary gain, right?
I've learned over the years that I was naive and it's a coincidence if the tech giants make people's lives better. That's not their goal.
The greatness of human accomplishment has always been measured by size. The bigger, the better. Until now. Nanotech. Smart cars. Small is the new big. In the coming months, Hooli will deliver Nucleus, the most sophisticated compression software platform the world has ever seen. Because if we can make your audio and video files smaller, we can make cancer smaller. And hunger. And AIDS.
Right? Like what an incredibly naive thing to think, that BG is going to contain power consumption lmao. OpenAI is always going to run their hardware hot. If BG frees up compute, a new workload will just fill it.
Sure you might argue "well if they can do more with less they won't need as many data centers." But who is going to believe that a company that can squeeze more money from their investment won't grow?
Tangentially, I am looking forward to learning about the new innovations that come from this problem space. [Self-righteous] BG certainly is exceptional at presenting hard topics in an approachable and digestible manner. And now it seems he has unlimited funds to get creative.
Sam Altman is pro-extinctionist like most of the surveillance capitalist ghouls. He literally invests in mind uploading companies and believes only the rich deserve to "survive" the singularity he believes it is his job to bring about.
Sure, humans going extinct is good for the planet, I guess, but be up front about what you are really supporting.
The HBO “Silicon Valley” series’ version of “making the world a better place by” nonsense. The blog article has fallen for the marketing of OpenAI. OpenAI is making the world a worse place by inflating the cost of RAM and even getting rid of RAM chip providers from the consumer space. Not to mention all the wasted power on compute for all sorts of meaningless tasks. At least with something like Claude I am saving months if not years of engineering effort and resources in a few hours.
For me, it just sounds like a ChatGPT-generated sentence. In particular, it likes to write sentences like "it's not just about... - it's about ...", and they sound legit at first, but they don't really make much sense once you start to think about them.
If you’re going to hold datacenter operators to blame for the waste associated with non-optimized computation, then it would seem to follow that they get some credit for optimizing.
If you trust what the executives of OpenAI and Anthropic say about their respective projects, it's a die roll as to whether or not they will totally destroy the world. A theme of the last 5-10 years has been tech dropping the whitewashing of its reputation and embracing the idea that what they are doing is incredibly sociopathic and still somehow cool (to them, I guess). Guess not everyone got the memo.
Brendan, I'm a big fan of your book, and work.
I don't have a problem with you joining OpenAI; best of luck there!
However, I'm not sure your analysis is quite correct, in this case.
If OpenAI can mobilize X (giga)dollars to buy Y amounts of energy, your work there will not reduce X or Y, it will simply help them produce more "tokens" (or whatever "unit of AI") for a given amount of energy.
So in a sense you're helping make OpenAI tools better, more effective, but it's not helping reduce resource usage.
One day, if OpenAI becomes a real company (and a public one), the kind that takes money from customers and employs accountants and turns a profit, etc., there may be downward pressure on the "costs" side of the equation.
Also while the thirst for training may be insatiable, I could see the energy cost of "hey chat can you check the basketball score" coming down.
To answer a few people at once: I did mention compensation as a factor in the post, but I didn't elaborate details, so easy to miss. Comp is important of course, but so are the other factors. It feels like I can't go for a day without reading about the cost of AI datacenters in the news, and I can do something about it.
Again, many comments here say I only care about the money, and while comp is an important factor, I think that characterizes me as someone I'm not, and forgets what I've been doing for the past two decades. I've spent thousands of hours of my life writing textbooks for roughly minimum wage, because I want to help others like me (I came from nothing, with no access to tech meetups or conferences, and books were the gateway to a better job). I've published technologies as open source that have allowed others to make millions and are the basis for many startups. I'm also helping pioneer remote work and hoping to set a good example for others to follow (as I've published about before). So I think I'm well known for caring about a lot of things over the past couple of decades.
Read this Gregg.
I'm always the first to bring up your work and books in any comments related to you or your work here. But...
This is a company which seized the first opportunity to stop doing open research, cut open source contributions, and converted itself to a for-profit after years of fiscal benefits; it scrapped its ethics committee and removed all engineers who opposed any of this.
Don't come at us with the excuse that any of this work is being done for the good of something.
One should never project one's own expectations onto another, but I feel disappointed. It's watching the guy I saw grow from his first posts go to work for an evil machine of his own volition.
Do what you want. But that's what I feel about this disheartening news.
Hey Brendan - first time listener, first time caller.
Inferring the overall tone from the comments, I think the folks here are struggling with what sounds like a logical fallacy from someone who is certainly a logical thinker.
> how I could lead performance efforts and help save the planet.
The problem on the face of it being: Performance gains will not translate to less energy usage (and by extension less heat released into the atmosphere). Rather, performance gains will mean that more effective compute can be squeezed from the existing hardware.
If performance gains translate to better utilization of the hardware, it also follows that it will translate to more money for the company, allowing for the purchase of more GPUs. Ad infinitum.
My stance is that this is just businesses doing what they do. It's always required regulation to slow down the direct/indirect negative byproducts (petro companies being the most obvious example). I don't see how AI would inherently be different.
Is there another angle that I (we) am (are) missing where the performance efficiencies translate to net benefits for the planet?
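The rebound argument above (efficiency gains feeding demand growth) can be sketched with a toy constant-elasticity demand model. This is purely illustrative; the function name and every number are assumptions for the sake of the sketch, not anything stated in the thread:

```python
def total_energy(efficiency_gain, elasticity,
                 base_demand=1.0, base_energy_per_unit=1.0):
    """Total energy used after an efficiency improvement.

    efficiency_gain: factor by which energy per unit of compute falls
                     (2.0 = each token now costs half the energy).
    elasticity: price elasticity of demand for compute (absolute value),
                under a constant-elasticity demand curve.
    """
    energy_per_unit = base_energy_per_unit / efficiency_gain
    # Cheaper compute induces more demand: demand scales with the
    # efficiency gain raised to the elasticity.
    demand = base_demand * efficiency_gain ** elasticity
    return demand * energy_per_unit

# Inelastic demand (< 1): doubling efficiency still cuts total energy.
print(total_energy(2.0, 0.5))  # ~0.71 of the original energy
# Elastic demand (> 1): doubling efficiency *increases* total energy,
# which is the Jevons-paradox outcome the comments describe.
print(total_energy(2.0, 1.5))  # ~1.41 of the original energy
```

Whether the planet comes out ahead thus hinges entirely on whether demand for AI compute is elastic, which is the empirical question the thread is arguing over.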
It would be good if the performance improvements can be applied across the industry so everyone benefits. But it wouldn't be unbelievable for OpenAI to want to keep some of them secret to maintain an advantage over others.
> I stood on the street after my haircut and let sink in how big this was, how this technology has become an essential aide for so many, how I could lead performance efforts and help save the planet.
Brendan.
First of all, congratulations on your new job. However,
it would be easier to just tell everyone it is about the money, the compensation, and the stock options.
You're not joining a charity, or saving the planet; this company is about to unload itself on the public markets at an unfathomable $1TN valuation.
Don't insult your readers.
Thanks for taking the risk in this environment and posting about your experience from a personal standpoint. [environment: people will come at you from all angles with very passionate opinions]
Interesting. Out of curiosity, how long do you think OpenAI can survive as a company? Put another way, what would be your guesses for probability of failure on 1yr, 3yr, and 5yr horizons?
EDIT: possibly a corollary--does Mia pay money for chatgpt or use a free plan?
I use LLMs daily and they've become an essential part of my life. Really hoping your work can help make LLMs and AI more broadly cheaper and more abundant.
Would be fantastic if you can find a way to make optimizations you find available more openly. The whole ecosystem benefits when efficiency improvements are shared. Looking forward to seeing where this goes and don't let the negativity from some get to you
For people whose main computing devices are phones, this isn’t hard to believe at all.
Interacting outside of the tech bubble is eye opening. Conversely, the hair stylist might have mentioned the brand of a super popular scissor supplier/other equipment you’d have never heard of.
The AI industry, and SV tech generally, has a pattern of recruiting talent by flattering people's self-image as builders and discoverers, which makes it psychologically very difficult for those people to reckon honestly with downstream harm.
It’s so jarring to hear this reality-detached writing style coming from someone who’s otherwise a great systems thinker.
This is like my worst nightmare as a systems engineer: that years of navigating bureaucracy at a place like Intel slowly brainrots me into prioritizing politics and self-promotion over the technical truth.
I hope this is just PR reflex and not an actual loss of grounding.
imo the engineers who stay in their nuts-and-bolts lane - those are the ones at real risk of 'brainrot', who sometimes, at the ripe age of 50+, continue to misunderstand what motivates management and the executives, seemingly unable to model anyone's mind other than their own (or that of a very predictable entity, like a computer).
in fact, you could argue that politics is in some sense the biggest, most complex dynamic system of them all, and thus poses the greatest 'engineering' challenge. and it invariably involves promotion of oneself, or an idea, or a certain direction, with real trade-offs that have a positive impact on some people and a negative one on others.
As a big fan of yours: there are a lot of things that feel off in this post, and as others mentioned, it feels like you’re trying to convince yourself that you’re going to save the world, but everyone knows it’s something else.
> She was worried about a friend who was travelling in a far-away city, with little timezone overlap when they could chat, but she could talk to ChatGPT anytime about what the city was like and what tourist activities her friend might be doing, which helped her feel connected. She liked the memory feature too, saying it was like talking to a person who was living there.
This seems rather sad. Is this really what AI is for?
And we do not need gigawatts and gigawatts for this use case anyway. A small local model or batched inference of a small model should do just fine.
It's sad that we aren't all rich enough to have a personal assistant tending to us 24/7? I mean, it seems more useful than, say, cruise ships, but they get to exist.
It’s super dope, and you can have it talk to people for you in the local language when you go there. I’ve busted it out to explain what I’m thinking for me. Watching travel shows on TV or reading travel magazines is sadder.
I found it funny that the hairstylist provided a pretty dystopian reason to use ChatGPT... it seems that you are trying to please your new employer... Nevertheless, I respect performance work and I'm studying for something similar. I hope to land a job in HPC.
I'm fine with people never justifying their personal choices. It's their business. But if they do bother to justify them, then it's a show they put on for me. And reading this kind of explanation is like the showrunner taking me for a fool. The net result is that I lose all respect for the person.
Unless they put on a show for themselves and that's who they try to fool. Probably why nobody mentions money in these shows. They're self motivational.
That is normal on his blog. He is a brand that he has developed over many years, and he is constantly promoting that brand.
Yes, he has done a lot of good work in the past, but he has put as much effort into self-promotion and landed a series of interesting and well-paying gigs.
I can't blame him for that. It just makes me tired to watch.
Reminds me of the TechCrunch episode of Silicon Valley TV show. Everyone was there to make the big buck but all collectively pretended they were doing their work for the good of humankind.
Apparently, there's this guy who's really good at optimizing computer performance and makes a lot of money doing it. At the same time, he writes mediocre school essays that are actually a bit embarrassing. Guys, if you have the opportunity to land a very well-paid job, then do it. Take the money. Live your life. But please spare us the public self-castration.
Re: "it's about saving the planet" - according to the Jevons paradox [0], if the cost of AI decreases, its total usage will increase. Definitely not saving the planet in this scenario.
[0] https://en.wikipedia.org/wiki/Jevons_paradox
Hi Brendan. Thanks for the update. Ignore the haters.
WRT "AI saving the planet", obviously.
We need ungodly amounts of machine learning. Weather modeling, forecasting, resilience planning, risk mgmt, planning, etc.
To implement virtual power plants (aka P2P distributed grid), everything needs to get smart. Just this transformation alone is a generational project.
There's dozens more of "must have" stacks we need to tackle climate crisis. Replace industrial heat. Decarbonize agriculture. Build out geothermal. Find and stop methane leaks. Pretty much everything needs a makeover, really.
OpenAI is as good a place (for you) to start as any.
Happy hunting.
Brendan can do whatever he wants. He's that good. If anybody seriously needed to interview him 20+ times to figure that out, then the burden is now on them not to fuck it up.
He's summing interviews across all AI giants. But the ones about to IPO can interview someone almost infinitely many times, because everyone wants on the bandwagon.
According to most in the industry, the cheaper AI is, the more of it we will use. So to actually reduce the energy used by AI, you should try to make it as inefficient as possible.
Did the article intentionally start with a LLM cliche to filter out all the people who hate reading obviously generated content? I would say it worked.
I have been attempting to write a lot more with AI, but it's so gimmicky. It's always spitting out lines like this: "it's not just about x – it's about y", like in this post. I find it so frustrating that no matter the prompt I throw at it, it eventually repeats itself again after some time. Good technical and succinct writing is almost impossible to iterate on with AI for me.
I like how my eyes went over the first sentence, barely parsing it and already discarding the information, because it's obviously AI generated. It's like the circumstances we live in have added a new layer of perception to my brain to guard itself against the flood of useless information!
I really hope it's intentional. The author is a smart, accomplished person. He even published books. It's sad if this kind of person thinks it's okay to just outsource their writing to AI.
If it's in your power, make sure user prompts and llm responses are never read, never analyzed and never used for training - not anonymized, not derived, not at all.
I think OpenAI will IPO at 1T. I don’t want to say bubble, but it could be one of those super-hyped stocks that never goes anywhere after the IPO (e.g. Airbnb during Covid).
deng|22 days ago
Gavin Belson
wheelerwj|23 days ago
The problems are interesting and the pay is exceptional. Just fucking own it.
AnonHP|23 days ago
> There's so many interesting things to work on, things I have done before and things I haven't.
What are the things you haven’t done before, if you could mention them?
jonesetc|23 days ago
No, it never does. Those people somehow delude themselves into thinking it might, but...it might just work for us.
cyanydeez|22 days ago
You aren't going to stop the excesses.
matt_daemon|23 days ago
How could she not know?
Upvoter33|23 days ago
BG and eBPF are awesome but this article read like a midlife crisis to me.
georgemcbay|23 days ago
I guess I'm a dinosaur but I think emailing the friend to ask what they are actually up to would be even better than involving an LLM to imagine it.
Asynchronous human to human communication is a pretty solved problem.
mschild|23 days ago
Or, you know, Signal/Matrix/WhatsApp/{your_preferred_chat_app}. If you're already texting things, might as well do that.
selfawareMammal|23 days ago
I couldn't go on reading.
bspammer|23 days ago
> Do anything, do it at scale, and do it today
> It's not just GPUs, it's everything.
> I'm not the first, I'm just the latest.
throwa356262|23 days ago
This guy and Rob Pike should have a talk.
the_kLeZ|22 days ago
Thanks for reading. Please subscribe to my newsletter to keep up to date with my works and the latest news about AI.
blibble|22 days ago
just say it's for the money, people understand that
but this sort of post is simply gross
stonecharioteer|22 days ago
Something tells me that in a year we'll see a post about why you left OpenAI.
Sama won't listen to anyone. That's why. None of these CEOs are going to listen.
ojbyrne|23 days ago
I don't think that indicates that any one company interviewed him 20+ times.
puttycat|23 days ago
You're in for a surprise buddy.
testfrequency|22 days ago
To be clear, we are currently fucked as well, since people genuinely have this mindset that is disconnected from reality.