stingraycharles|3 days ago
Obviously mass surveillance is already happening. Obviously the line between "human kills other human" has been blurring for a long time already, e.g. remotely operated drones. Missiles are already remotely controlled, and they navigate, detect, and follow moving targets autonomously.
What’s the goal of people who think deleting their OpenAI account will make an impact?
maxbond|3 days ago
I left a comment describing how I am deleting my OpenAI account. I think there's a good chance someone at OpenAI sees it, even if only aggregated into a figure in a spreadsheet. Maybe a pull quote in a report.
You do your best at the margin, have faith it will count for something in aggregate, and accept that sometimes you're tilting at windmills. I know most of my breath is wasted, but I can't reliably tell which part.
mentalgear|3 days ago
* he warned of engagement-optimisation strategies, like social media, being used for chatbots / LLMs.
* also, he warned that "ads would be the last resort" for LLM companies.
He casually ignored both of his own warnings, as ChatGPT / OpenAI has now fully converted to Facebook's tactics of "move fast and break things" - even if the thing being broken is society itself. A complete turn away from the AI-for-science lab it was founded as, which explains why every real (founding) ML scientist left the company years ago.
While still being for-profit outfits, at least DeepMind and Anthropic are headed by actual scientists, not marketing guys.
designerarvid|3 days ago
/non-US and just guessing
stingraycharles|3 days ago
The genie is out of the bottle, this will happen anyway. The question is who will be the steward.
ndriscoll|3 days ago
Of course it's also a different question from whether we should allow mass surveillance against ourselves, which obviously we should not.
chronc2739|3 days ago
Says who? You?
Sorry, but you are just 1 person, 1 vote.
Unless you believe your vote outweighs other people's votes.
Today, 40% of Americans still approve of Trump and his actions. Another 10-20% probably don't care. Even after Iran's attack and the DoW x OAI collab.
Which leaves the "no AI in weapons" camp at less than 50%.
ozgung|3 days ago
Ethics is about knowing what is right or wrong and acting on it, not about how we feel about it.
podgorniy|3 days ago
--
Some people do that as a symbolic action. Some to keep to their own terms as much as they can. Some hope their actions will join others' actions and turn into a signal for decision makers. For others this action reduces their area of exposure. Others believe in something and simply follow their beliefs.
BTW, following your own set of beliefs is what you (what we all) are doing here. You believe that surveillance is already happening and nothing can be done about it, that a single action does not matter, that there are no reasons for action other than direct visible impact, etc. It seems you analyze others through your own set of beliefs, and those beliefs cannot explain the actions of others. This inability to explain others suggests that the whole model is flawed in some way. So what is the nature of your beliefs? Did you choose them, or were they presented to you without alternatives? What are the alternatives, then? Do these beliefs serve your interests, or someone else's?
syllogism|3 days ago
The point of the supply chain risk provisions is to denote, you know, supply chain risks. The intention is not to give the Pentagon a lever it can pull to force any company into any contract it wants.
Hegseth doesn't even pretend that Anthropic is actually a supply chain risk. The argument for designating them so is that _they won't do exactly what the government wants_.
People use the term "fascism" a lot and people have kind of tuned it out, but what do you call a government that deals itself the power to compel any company to accept any contract, and declare it a pariah on thin pretext if it objects?
By taking the deal under these conditions, OpenAI is accepting this. They're saying, "Well, sucks to be them, life goes on." They're consenting to the corruption and agreeing to profit from it. But they'll be next, and if the next company in line takes the same stand, then yeah, the government can force any company to do anything. There's nothing normal about this.
vee-kay|3 days ago
Even when the bombs drop from the sky, at least those humans who had deleted their OpenAI account can rest easy, knowing that they weren't the ones supporting the AI that will delete humanity.