hannasanarion's comments
hannasanarion | 26 days ago | on: An AI agent published a hit piece on me
hannasanarion | 1 month ago | on: Data centers in space makes no sense
They're losing money now because they're making massive bets on future capacity needs. If those bets are wrong, they're going to be in very big trouble when demand levels off lower than expected. But that's not the same as demand being zero.
hannasanarion | 1 month ago | on: Are we all plagiarists now?
> That cannot be true as it would be easy for a human to write in the style of AI, if they choose to.
Is that the nightmare scenario that everybody in this thread is freaking out about?
Students who go to great lengths to deliberately make it look like they are cheating: those are the ones you're afraid will be falsely accused of cheating?
We're on our way to dystopia because people who go out of their way to look suspicious on purpose arouse suspicion?
hannasanarion | 1 month ago | on: Are we all plagiarists now?
Genres are too vague and generic to be ownable by anybody. Inspiration is not plagiarism.
hannasanarion | 1 month ago | on: Are we all plagiarists now?
No it isn't. Stop.
The cynical part of me says that the people who share this link with that summary are the cheaters trying to avoid getting caught, because they are patently abusing the numbers, presumably having not paid attention in math class.
The tests are 90% SENSITIVE. That means that of 100 AI cheaters, 10 won't be caught.
The paper you linked says the tests are 100% SPECIFIC. That means they will *never* flag a human-written paper as mostly AI.
hannasanarion | 1 month ago | on: Are we all plagiarists now?
None of these tools are binary. They give a percentage score, a confidence score, or both.
If you include one AI sentence in a 100-sentence essay, your essay will be flagged as 1% AI and nobody will bat an eye.
hannasanarion | 1 month ago | on: Are we all plagiarists now?
The false positive rate is 0. The tool *never* says human writing is AI.
hannasanarion | 1 month ago | on: Are we all plagiarists now?
Tests can be wrong in two different ways: false positives and false negatives.
The 90% figure (which people keep rounding up from 86% for some reason; I'll use the real figure from now on) is the sensitivity, the ability to avoid false negatives. If there are 100 cheaters, the test will catch 86 of them, and 14 will get away with it.
The test's false positive rate, how often it says "AI" when there isn't any AI, is 0%, or equivalently, the test's "specificity" is 100%.
> Turnitin correctly identified 28 of 30 samples in this category, or 93%. One sample was rated incorrectly as 11% AI-generated[8], and another sample was not able to be rated.
The worst that would have happened according to this test is that one student out of 30 would be suspected of AI-generating a single sentence of their paper. None of the human-authored essays were flagged as likely AI-generated.
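The arithmetic above can be sketched in a few lines. This is purely illustrative, assuming a sample of 100 AI-written and 100 human-written essays and plugging in the 86% sensitivity and 100% specificity figures discussed above; the function name and counts are my own, not from the paper.

```python
def rates(true_positives, false_negatives, true_negatives, false_positives):
    """Return (sensitivity, specificity, false_positive_rate) from a confusion matrix."""
    sensitivity = true_positives / (true_positives + false_negatives)
    specificity = true_negatives / (true_negatives + false_positives)
    false_positive_rate = false_positives / (true_negatives + false_positives)
    return sensitivity, specificity, false_positive_rate

# Of 100 AI cheaters, the tool flags 86 and misses 14;
# of 100 honest writers, it flags none.
sens, spec, fpr = rates(true_positives=86, false_negatives=14,
                        true_negatives=100, false_positives=0)
print(sens)  # 0.86 -> 14 cheaters slip through (false negatives)
print(spec)  # 1.0  -> no human-written essay is ever flagged
print(fpr)   # 0.0  -> zero false accusations
```

The point of separating the two rates is that a test can miss cheaters (imperfect sensitivity) while still never accusing an innocent writer (perfect specificity).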
hannasanarion | 1 month ago | on: Iran Protest Map
So, fine, there's a third condition: when the entire military mutinies, leaving the regime with no armed defenders.
hannasanarion | 1 month ago | on: Iran Protest Map
hannasanarion | 2 months ago | on: Iran Protest Map
Random personal small arms that a bunch of people just happen to have at home are not enough to win a revolutionary war against a professional military.
Self defense pistols and hunting rifles don't win wars, artillery does.
hannasanarion | 2 months ago | on: No AI* Here – A Response to Mozilla's Next Chapter
> You give Mozilla the rights necessary to operate Firefox. This includes processing your data as we describe in the Firefox Privacy Notice. It also includes a nonexclusive, royalty-free, worldwide license for the purpose of doing as you request with the content you input in Firefox. This does not give Mozilla any ownership in that content.
> (from the attached FAQ) Mozilla doesn’t sell data about you (in the way that most people think about “selling data”), and we don’t buy data about you. Since we strive for transparency, and the LEGAL definition of “sale of data” is extremely broad in some places, we’ve had to step back from making the definitive statements you know and love. We still put a lot of work into making sure that the data that we share with our partners (which we need to do to make Firefox commercially viable) is stripped of any identifying information, or shared only in the aggregate, or is put through our privacy preserving technologies (like OHTTP).
hannasanarion | 2 months ago | on: AI and the ironies of automation – Part 2
I've played RPGs, I know how this works: you either Google image search for a character you like and copy/paste and illegally print it, or you just leave that part of the sheet blank.
So it's analogous to the "make a one-off dashboard" type uses from that programming survey: the work that's being done with AI is work that otherwise wouldn't have been done at all.
hannasanarion | 2 months ago | on: No AI* Here – A Response to Mozilla's Next Chapter
For all purposes actually relevant to privacy, the updated language is more specific and just as strong.
hannasanarion | 2 months ago | on: AI and the ironies of automation – Part 2
No dispute on the first part, but I really wish there were numbers available somehow to address the second. Maybe it's my cultural bubble, but it sure feels like the "AI Artpocalypse" isn't coming, in part because of AI backlash in general, but more specifically because people who are willing to pay money for art seem to strongly prefer that their money goes to an artist, not a GPU cluster operator.
I think a similar dynamic might be at work in AI programming as well, even though it seems like such a perfect use case. Anthropic released an internal survey a few weeks ago saying that the vast majority, something like 90%, of their own workers' AI usage was spent explaining and learning about things that already exist, or doing little one-off side projects that otherwise wouldn't have happened at all because of the overhead, like building little dashboards for a single dataset, stuff where the outcome isn't worth the effort of doing it yourself. For everything that actually matters and would be paid for, the premier AI coding company is using people to do it.
hannasanarion | 3 months ago | on: Steam Machine
You think you own the Silmarillion because you have a paper copy? Hah! No, you have a transferable license to read it.
Every hard-copy movie you have starts with a big green FBI warning reminding you that having that disc does not mean you own the movie; it means you have a transferable license to play it for yourself and small groups on small screens.
Digital media with DRM allow content distributors to remove the "transferable" part of the license if they want, which often lets them sell for cheaper, since they know each sale represents only one person receiving the experience. The license comes with fewer rights (no transferability), so it can be priced lower.
hannasanarion | 3 months ago | on: Steam Machine
When Xbox360 and PS3 came out, the format war was only just starting, and the consoles were on either side of it.
PS3 came with a BluRay drive and the games were delivered on BluRay.
Xbox360 came with software support for HDDVD, but the actual disc reader hardware was a DVD reader (famously, a large off-the-shelf part selected at the last minute that required a redesign of the cooling system to accommodate its size), and the HDDVD drive was an optional add-on that nobody bought.
The fact that every PS3 could read BluRay, but you needed a special extra to play HDDVD on Xbox 360 is arguably the main reason BluRay won the format war.
hannasanarion | 4 months ago | on: Palisades Fire suspect's ChatGPT history to be used as evidence