goldinfra|2 years ago
Software is always more important than hardware. All the big players have access to NVIDIA chips today and yet only OpenAI has ChatGPT, proving the point.
OpenAI probably wishes someone would create competition to NVIDIA and this is Sam Altman trying to make that happen himself, since no one else seems to have been able to pull it off so far.
A conflict of interest would be OpenAI buying Altman's chips at inflated prices or something like that.
But if he makes a bunch of money selling OpenAI chips and OpenAI gets better/cheaper chips, that seems like pure win-win and totally free of ethical conflict.
bartwr|2 years ago
And in the tech-specific case: As much as junior engineers would love to believe in "superior" solutions, tech decisions are seldom clear-cut. There are many trade-offs: cost, efficiency, memory use, throughput, latency, ease of use, cost of switching, and many more. You always have a pile of pros and cons. Sometimes, one is strong enough, but most of the time, it feels almost like guessing/intuition. And then the conflict of interest becomes especially concerning.
tsunamifury|2 years ago
A bunch of nerds just thought they could jump the gun here because they are inexperienced doofuses when it comes to corporate matters.
Nevermark|2 years ago
Unfortunately, it's far from that case.
Altman's hardware startup would only be free of ethical conflict if Altman was open about it, and the board approved at least two things (and probably more):
1. A formal plan to separate OpenAI CEO Altman from OpenAI hardware acquisition decisions.
2. A formal agreement with Altman and his new company, on how OpenAI's private information with hardware implications is firewalled and/or shared with Altman's new hardware concern.
Otherwise, Altman is going rogue, acting on private OpenAI information useful to a new hardware company looking for future business with OpenAI and OpenAI competitors.
CEO Altman has a fiduciary duty to act directly in OpenAI's interest, and not in some "hey this could be great for everyone" version.
Litmus test: If your legal partner/executive is doing things behind your back with large implications for you, they are almost certainly violating ethics in some way.
nostrademons|2 years ago
The issue is what happens when SamaChip's profit imperative forces them to maximize revenue. Because the best way to maximize revenue, when you're an independent chipmaker with a large R&D investment, is to sell to more companies. Which by definition are going to be OpenAI's competitors, and whose interests may not be aligned with OpenAI, but will have a financial line to SamaChip.
Gwern is highlighting an interesting contradiction in OpenAI's core charter. In order to ensure responsible, safe, humanity-benefiting AGI, OpenAI needs to have control over AGI's development; any for-profit entity that gets ahead of them probably will not have the same humanity-benefiting mission (actually, we know they won't: they will have a shareholder-benefiting mission by definition). But that means that, by their charter, they can't be "Open". Anything like the Developer Day, or the API, or a SamaChip that can sell to other startups, means that other parties will have the freedom to use it for their own interests.
Not saying whether this is good or bad - the tension between openness and vulnerability always exists, and personally I tend to come down on the side of openness. But IMHO OpenAI's mission was contradictory from the very beginning, and was more a recruiting tool to get bright idealistic AI researchers to work for them.
zamalek|2 years ago
Because OpenAI's mission statement is along the lines of providing AI to all. "All" is more than data centers and billion dollar valuation companies.
I strongly doubt I would be able to purchase one of said chips and have it in my house.
This GPU fiasco is all thanks to LLMs, especially transformers, which have been OpenAI's trajectory under Altman. I wouldn't be surprised if the breakdown in communication was over OpenAI becoming an LLM printer. Transformers are a solved problem; making a bigger one is hardly research, and definitely not a step toward OpenAI's mission statement.