top | item 47151634

thunky | 4 days ago

> I don’t get what you’re doing here

You said that a CEO was out of bounds for framing employees as numbers on a spreadsheet. To me this suggests that you believe company owners should care about the humanity of their workers. And I'm saying they don't.


the_af|4 days ago

I think by "they", Forgeties79 means me, the_af.

I get the general point you're making. Indeed, Altman's take is capitalism taken to 11. There was plenty of that going on before AI and over the past few decades, but I don't think it was as extreme, or true of every company. There's definitely a conversation to be had about modern capitalism (and plenty of people studying it, too). However, not everything is a FAANG or tech startup. Some owners do care about their employees as more than numbers on a spreadsheet (not the whole "we're a family" bullshit speech, I mean the genuine stuff).

Imagine thinking of people as "resource-hogs before they reach peak smartness"!

What's new here, in my opinion, is people like Sam Altman behaving as if they didn't understand normal human behavior. You cannot simply compare an LLM to a growing human. You cannot say things like "grow a human over 20 years before they achieve smartness". What? That's not how human beings think about human beings, and Altman is detached from real human behavior here. He's saying out loud the thoughts he should keep to himself, a bit like a person with coprolalia. And it's ok for us to dislike him for this, even if he's just voicing the opinions of extreme techno-capitalism.

Sam Altman once joked (?) he wouldn't know how to raise his child without ChatGPT. Maybe he should ask ChatGPT how to behave more like a human? Or at least fake it?

Forgeties79|4 days ago

> Sam Altman once joked (?) he wouldn't know how to raise his child without ChatGPT. Maybe he should ask ChatGPT how to behave more like a human? Or at least fake it?

Not to mention that was at a time when all kinds of wild suggestions, like glue on pizza, were coming out of ChatGPT's sloppy outputs. There are so many little things that quickly become big things with kids, and exhausted parents should absolutely not use LLMs for sussing those things out.

I could easily see well-meaning parents looking for healthy snacks for their kids and accidentally feeding their baby fresh honey, for instance. Or asking how much water to give their infant and not realizing the answer is absolutely none, unless they are severely dehydrated from an illness or something.

There are a lot of hazards for kids under 1 in particular that make me incredibly nervous to ever suggest exhausted parents use LLMs to answer kid-related questions. Recommendations also change relatively frequently, so who knows if it's even drawing on the most recent best practices.