artwr | 1 year ago
True, but it's hard to start something as big as OpenAI and not warrant a little scrutiny. At least, I think there is plenty of public interest here, in particular because of the chosen mission statement for the company.
> Ultimately, I ask myself, is my life better because Sam was born and did what he did? And the answer is 1,000 times "yes!" because the introduction of ChatGPT changed so much and enabled so much creation and learning for me personally.
Which is a very reasonable position, but does the fact that your life is better negate concerns that applications of ChatGPT may actually make other people's lives worse? And that the lack of transparency around conflicts of interest raises reasonable concerns about both judgement and the ability of the organization to deliver on its mission?
eigenvalue | 1 year ago
And I also don't feel like I am somehow owed a huge amount of transparency around the exact details of how Sam may or may not benefit financially from his association with OpenAI, or the legal agreements they had with departing staff. Even if he does benefit, is that really so horrible? They have a for-profit division now, so they are paying taxes. And the fortunes made from OpenAI stock will be taxed for sure. And the people who left are rich and got to work on a world-changing product.
Where is all the harm? It's really hard to point at any real harm from my standpoint. But the benefits and gains are palpable, and they are obvious to anyone without an agenda to push or an axe to grind.
pseudalopex | 1 year ago
People have lost jobs, and likely careers, to AI models trained on their works. You could assert that in the long run all individuals will be better off. You could assert that the benefits to others made the harms virtuous. You could assert they deserved it. I don't know how you could deny they were harmed. You could assert it was inevitable. But that would negate credit if it would negate blame. This is a distraction from the question of trust, however.