hannasanarion's comments

hannasanarion | 26 days ago | on: An AI agent published a hit piece on me

But it doesn't look human. Read the text: it is full of pseudo-profound fluff, takes far too many words to make any point, and uses all the rhetorical devices that LLMs always spam: gratuitous lists, "it's not x, it's y" framing, etc etc. No human ever writes this way.

hannasanarion | 1 month ago | on: Data centers in space makes no sense

People keep saying this but it's simply untrue. AI inference is profitable. OpenAI and Anthropic have 40-60% gross margins. If they stopped training and building out future capacity they would already be raking in cash.

They're losing money now because they're making massive bets on future capacity needs. If those bets are wrong, they're going to be in very big trouble when demand levels off lower than expected. But that's not the same as demand being zero.

hannasanarion | 1 month ago | on: Are we all plagiarists now?

Read the paper dude. It's not an advertisement, it's an investigation. They performed an experiment including 29 human-written papers. One of them got a score of 11% likely to be AI; the rest got a score of 0% likely to be AI. The tool never labeled any human writing as AI with high confidence.

> That cannot be true as it would be easy for a human to write in the style of AI, if they choose to.

Is that the nightmare scenario that everybody in this thread is freaking out about?

Students who go to great effort to deliberately make it look like they are cheating: those are the ones you're afraid will be falsely accused of cheating?

We're on our way to dystopia because people who go out of their way to look suspicious on purpose, arouse suspicion?

hannasanarion | 1 month ago | on: Are we all plagiarists now?

It's not, but the fact that one sentence deserves a high score doesn't automatically mean the entire thing will be flagged as a false positive. Unless it's, like, two sentences in total.

hannasanarion | 1 month ago | on: Are we all plagiarists now?

Science fiction is as old as fiction. The Epic of Gilgamesh (c. 2000 BC) and the Ramayana (c. 500 BC) have sci-fi elements. There's nothing innovative or unique about stories that imagine a future instead of a past, present, or alternate reality.

Genres are too vague and generic to be ownable by anybody. Inspiration is not plagiarism.

hannasanarion | 1 month ago | on: Are we all plagiarists now?

> Turnitin is only ~90% effective:

No it isn't. Stop.

The cynical part of me says that the people who share this link with that summary are the cheaters trying to avoid getting caught, because they are patently abusing the numbers, presumably having not paid attention in math class.

The tests are 90% SENSITIVE. That means that of 100 AI cheaters, 10 won't be caught.

The paper you linked says the tests are 100% SPECIFIC. That means they will *never* flag a human-written paper as mostly AI.

hannasanarion | 1 month ago | on: Are we all plagiarists now?

> There is also the mix: if I write two pages and I used two sentences by AI (because I was tired and I couldn't find the right sentence), I may be flagged for using AI.

None of these tools are binary. They give a percentage score, a confidence score, or both.

If you include one AI sentence in a 100-sentence essay, your essay will be flagged as 1% AI and nobody will bat an eye.

hannasanarion | 1 month ago | on: Are we all plagiarists now?

No, read the paper. They're going to pass 10% of the students who cheated. The 90% figure is the sensitivity; the remaining 10% is the false negative rate, how many AI essays it says are human.

The false positive rate is 0. The tool *never* says human writing is AI.

hannasanarion | 1 month ago | on: Are we all plagiarists now?

That's not what 90% effective means. Tests don't work that way.

Tests can be wrong in two different ways: false positives and false negatives.

The 90% figure (which people keep rounding up from 86% for some reason, so I'll use 86% from now on) is the sensitivity, or the ability to not have false negatives. If there are 100 cheaters, the test will catch 86 of them, and 14 will get away with it.

The test's false positive rate, how often it says "AI" when there isn't any AI, is 0%, or equivalently, the test's "specificity" is 100%.
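The distinction can be sketched as plain confusion-matrix arithmetic. This is only an illustration using the numbers from this thread (86 of 100 AI essays flagged, 0 of 30 human essays flagged), not real detector data:

```python
# Counts taken from the thread's figures, for illustration only.
true_positives = 86    # AI-written essays correctly flagged as AI
false_negatives = 14   # AI-written essays the tool missed
false_positives = 0    # human-written essays wrongly flagged as AI
true_negatives = 30    # human-written essays correctly passed

# Sensitivity: of all AI essays, what fraction did the tool catch?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of all human essays, what fraction did it correctly pass?
specificity = true_negatives / (true_negatives + false_positives)

print(sensitivity)  # 0.86
print(specificity)  # 1.0
```

Note the two rates move independently: a test can miss 14% of cheaters while still never accusing an honest writer.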

> Turnitin correctly identified 28 of 30 samples in this category, or 93%. One sample was rated incorrectly as 11% AI-generated[8], and another sample was not able to be rated.

The worst that would have happened according to this test is that one student out of 30 would be suspected of AI generating a single sentence of their paper. None of the human authored essays were flagged as likely AI generated.

hannasanarion | 1 month ago | on: Iran Protest Map

The Gen Z revolution would have gone nowhere had the Nepalese Military not launched a coup, removed the existing government from power at gunpoint, and asked the protesters who to replace them with.

So, fine, there's a third condition: when the entire military mutinies, leaving the regime with no armed defenders.

hannasanarion | 1 month ago | on: Iran Protest Map

* or mass-scale mutinies from the military itself giving the revolutionaries access to artillery, tanks, planes, and ships while the regime has none.

hannasanarion | 2 months ago | on: Iran Protest Map

No rebellion or revolt has ever been successful without arms supplied by outside sponsors.

Random personal small arms that a bunch of people just happen to have at home are not enough to win a revolutionary war against a professional military.

Self defense pistols and hunting rifles don't win wars, artillery does.

hannasanarion | 2 months ago | on: No AI* Here – A Response to Mozilla's Next Chapter

That's literally exactly what they do. This is why you should consider reading beyond headlines from time to time.

> You give Mozilla the rights necessary to operate Firefox. This includes processing your data as we describe in the Firefox Privacy Notice. It also includes a nonexclusive, royalty-free, worldwide license for the purpose of doing as you request with the content you input in Firefox. This does not give Mozilla any ownership in that content.

> (from the attached FAQ) Mozilla doesn’t sell data about you (in the way that most people think about “selling data”), and we don’t buy data about you. Since we strive for transparency, and the LEGAL definition of “sale of data” is extremely broad in some places, we’ve had to step back from making the definitive statements you know and love. We still put a lot of work into making sure that the data that we share with our partners (which we need to do to make Firefox commercially viable) is stripped of any identifying information, or shared only in the aggregate, or is put through our privacy preserving technologies (like OHTTP).

hannasanarion | 2 months ago | on: AI and the ironies of automation – Part 2

Right, but the question then is, would it actually have been contracted out?

I've played RPGs, I know how this works: you either Google image search for a character you like and copy/paste and illegally print it, or you just leave that part of the sheet blank.

So it's analogous to the "make a one-off dashboard" type uses from that programming survey: the work that's being done with AI is work that otherwise wouldn't have been done at all.

hannasanarion | 2 months ago | on: No AI* Here – A Response to Mozilla's Next Chapter

This is an absurd take. The legal meaning of "selling" is extremely broad; courts have found such language to apply to transactions as simple as answering an HTTP request with an HTTP response. Their lawyers must have been begging them to remove that language for the liability it represents.

For all purposes actually relevant to privacy, the updated language is more specific and just as strong.

hannasanarion | 2 months ago | on: AI and the ironies of automation – Part 2

> All of these AI outputs are both polluting the commons where they pulled all their training data AND are alienating the creators of these cultural outputs via displacement of labor and payment

No dispute on the first part, but I really wish there were numbers available somehow to address the second. Maybe it's my cultural bubble, but it sure feels like the "AI Artpocalypse" isn't coming, in part because of AI backlash in general, but more specifically because people who are willing to pay money for art seem to strongly prefer that their money goes to an artist, not a GPU cluster operator.

I think a similar idea might be persisting in AI programming as well, even though it seems like such a perfect use case. Anthropic released an internal survey a few weeks ago showing that the vast majority, something like 90%, of their own workers' AI usage was spent explaining and learning about things that already exist, or doing little one-off side projects that otherwise wouldn't have happened at all because of the overhead: building little dashboards for a single dataset, stuff where the outcome isn't worth the effort of doing it yourself. For everything that actually matters and would be paid for, the premier AI coding company is using people to do it.

hannasanarion | 3 months ago | on: Steam Machine

That has been true for all media purchases since copyright was invented (the Statute of Anne, 1710).

You think you own the Silmarillion because you have a paper copy? Hah! No, you have a transferable license to read it.

Every hard copy movie you have starts with a big green FBI warning reminding you that having that disc does not mean you own the movie; it means you have a transferable license to play it for yourself and small groups on small screens.

Digital media with DRM allow content distributors to remove the "transferable" part of the license if they want, which often allows them to sell for cheaper, since they know that each sale represents only one person receiving the experience. The license comes with fewer rights (no transferability), so it can be priced lower.

hannasanarion | 3 months ago | on: Steam Machine

Xbox One/PS4 is when both sides standardized on BluRay.

When Xbox360 and PS3 came out, the format war was only just starting, and the consoles were on either side of it.

PS3 came with a BluRay drive and the games were delivered on BluRay.

Xbox360 came with software support for HDDVD, but the actual disc reader hardware was a DVD reader (famously, a large off-the-shelf part selected at the last minute that required a redesign of the cooling system to accommodate its size), and the HDDVD drive was an optional add-on that nobody bought.

The fact that every PS3 could read BluRay, but you needed a special extra to play HDDVD on Xbox 360 is arguably the main reason BluRay won the format war.
