This was something that was bound to happen and is going to happen again unless we get more serious regulation around AI publishing.
> It lists the author as having a Masters Degree in Mycology from University of East Ontario. A search later revealed there is no "University of East Ontario."
I’m not an AI apologist by any stretch, but how is this different from some incompetent person criminally-negligently assembling and publishing a book without AI help? Shouldn’t existing legislation already cover this? I’d also assume that most books will be written with AI help in some way or another in the future.
Imaginary universities in Canada are a government-supported racket. It is not for nothing that the UN condemns Canada for modern slavery over its temporary foreign worker program, which runs off the back of fleece-you colleges.
There's more intelligence in this AI guide than there is in the majority of college diplomas coming out of Canada today.
What's good for the goose is good for the gander: the AI generated not only the contents but the author as well.
This has to be some kind of a new level of idiocy. I mean, use AI to make junk sci-fi stories, and generate fake authors all day. But going for a book about mushrooms deserves a special stupidity and evilness award.
This is why generative AI simply doesn't work for anything with actual consequences. As it is right now, it's a glorified autocomplete, and one that doesn't understand the context it's being used in at all.
That's fine for things like stock images or text on random product pages, but for things like this? Yeah, the very concept is just risky as hell.
I generally distrust Reddit threads, but it's entirely plausible to me. I was once gifted a cookbook on Amazon that was full of pre-LLM "sludge" of suspect internet-collected recipes along with a stock photo of an author with fake credentials (her author page is still up, although her books have been pulled and apparently she's a man now https://www.amazon.com/stores/author/B0716Q2Y4Z/about)
When I left a negative review pointing out that the author was a stock photo (entire content: "The author of this book is a fraud. There is no Tina B. Baker, she is a stock photo."), Amazon pulled the review saying it violated their guidelines.
The bit where they say Amazon demanded that they return the book by special delivery or face having their entire Amazon account terminated is the point at which it became clear to me that this is ragebait. Amazon simply do not work that way in the U.K.: returns are via a prepaid label that they provide, and they don't terminate your account if you fail to return a product, you just don't get refunded.
There have been quite a few articles corroborating the existence of books like this; the only part that needs believing is that someone actually used the books as advertised.
Yeah, while this story is absolutely plausible, plausible does not necessarily mean true. OP is a new account with suspiciously few details that would permit fact-checking, and I think I'd call this one ragebait until proven otherwise. (That does not mean that this couldn't happen or that there isn't a risk of it happening, but everyone here hopefully knew that before this thread. I'm saying it shouldn't adjust your beliefs much if at all.)
I would take any story from reddit with a huge grain of salt, but even if this case is not real there are real AI generated books that pretend to be real books, and mycology books in particular have had attention brought to them due to the dangers inherent to the misinformation they may contain.
Here's a more credible article about the general phenomenon (more so than an anonymous anecdote in "legal advice" Reddit, which is, by reputation, a place for amateur writers to hone their craft in a low-stakes environment).
- "Like past mushroom identification apps, the accuracy is poor, Claypool found in a new report for Public Citizen, a nonprofit consumer advocacy organization. But AI companies and app stores are offering these apps anyway, often without clear disclosures about how often the tools are wrong."
- "Apple, Google, OpenAI and Microsoft didn’t respond to requests for comment."
I see this happen a lot on TikTok and Facebook, where influencers post about using different plants for food or skincare. They often either show photos of a different, dangerous-to-eat plant, or they haven't actually tried it themselves and are promoting the consumption of a plant that can poison people. It's the Wild West out there on social networks.
This feels especially egregious because, while people who know just a bit about plants might do some dumb things, mushrooms are the one thing that anyone who knows a bit about plants knows not to screw with: it's easy to poison yourself, and the results are horrible.
My father was a moderately known evolutionary biologist and he advised his students encountering an unknown plant and curious whether it was edible to "try it and see". But this was for a flowering plant. Most mushrooms can't be dealt with that way.
If it's creative writing, it's for a worthy cause, since the books do exist: there are plenty of warnings all over the web about these AI-written books, and that's pretty outrageous right there. It's weird that something called "artificial intelligence" actually makes everything stupider, haha.
>My wife just received an email from the online retailer. She has been asked to "Not take any photographs or copies of the product in question due to copyright issues" and it states, "the product must be returned immediately by special delivery by [DATE]."
>There's some other statements as well about our account being terminated if we fail to return the product by the specific date. We've got a lot of movies and series that we have purchased over the years on this account, I wouldn't want to lose them.
This story is so fake it hurts. Reddit eating up ragebait is one thing, but posts like this don't belong on HN at all.
Does it really matter if it's fake? It's a plausible scenario, and worth discussing. Fiction books can raise interesting questions about science and technology, and the implications of AI are very much in HN's wheelhouse.
That's to be expected from genAI. I'm sure more of this is to follow, and these AI companies will have to come up with CYA-type disclaimers and checkoff boxes: "I understand these answers are likely wrong." Even better, they should mention that LLMs do not understand what they generate.
Edit: Wait, this is a book made up with LLMs. I think the author should be on the hook for publishing unless they added a disclaimer that their book has no grounding in reality.
I’ve become suspicious of any book being published after 2022 as a result of AI-generated content. Wherever possible I opt to find an older edition or a book that covers a subject that’s pre-AI, and hopefully containing an errata to boot.
To be fair, even the National Audubon Society Field Guide to North American Mushrooms tells you that if you follow it without knowing what you're doing, you're going to get poisoned.
It's wild to me that these sorts of stories provoke - on HN of all places - frothy calls for state intervention. Is it by design? Is this some dark campaign to stifle open source LLM development?
Some mushrooms are horribly poisonous. That's the nature into which we were born. It's not a consequence of policy, and won't be fixed by policy.
People need to learn to be careful about sources of information, with care in proportion to consequences. State intervention will preclude that evolutionary step, almost certainly without actually ameliorating the problem.
I’ve watched so many communities devolve because of rage bait, which is often AI generated. I doubt any community is immune. It seems to be an inherent weakness that properly targeted ragebait can subsume them.
I generally treat anything on Reddit as first and foremost motivated by a desire for fake Internet points rather than by a desire to share a real story or have a real conversation. I'd at least expect a reputable journalist to pick up on this story if there really was anything to it.
Regardless of the veracity of the claims in that post, there's not much new here aside from the fact that the distributor generated the content using AI rather than making it up themselves. Quackery and snake oil has always been a thing, and plenty of people have been seriously injured or died from misinformation about food safety or medicine.
The next time someone hesitates to seek professional medical attention for a problem because they got a blessing from the elders at their church and they think God will heal them as soon as they start having more faith, we can start talking about where we can really draw the line between personal responsibility and holding liars liable.
Well, the moral is: don't eat wild mushrooms, no matter the source (picked by yourself or by an "expert"). At least that's my rule. There are a ton of cultivated mushrooms and a ton of other tasty, safe foods to choose from; I don't give a fuck how exotic the wild mushroom's flavor would be.
I am of the opinion that you should only be hunting mushrooms if your family has been hunting mushrooms for generations (I can count 4+ in mine), or at least after spending a few years under some patronage. This is not something you should learn from books. Also, you can't just learn one mushroom; you need to know them all even if you only eat one.
This is directly akin to the people who throw themselves into stopped cars in Russia for insurance fraud purposes, only to be captured on dash-cam video and derisively immortalized on social media.
The other viable explanation is "but Google Maps told me to drive off the pier."
Dylan Beattie talked about this exact thing at NDC this year. https://youtu.be/By4Gb1RKZpU?t=1428. Timestamp included. The entire talk is very good though.
This is actually a problem with non-AI-generated mushroom books as well. Apparently publishers have been copying each other's misinformation for a long time.
It is also a problem with accurate images and descriptions. Some mushrooms are very easily mistaken for others: unless you're 100% sure you've picked this very species before in around the same place and time, you'd better leave it there or bring it to a mycologist for identification.
Even if no one did anything wrong, you might misidentify a mushroom in the field. Unless you're very experienced, this feels like a very risky and stupid thing to do.
This isn't an AI problem. This is a "Don't eat things growing in the woods" problem.
Amazing that we're at the point where people think "Don't eat things growing in the woods", something that would have doomed our species, is common sense.
Perhaps they asked ChatGPT and were told it was a great idea to eat wild mushrooms.
> This isn't an AI problem. This is a "Don't eat things growing in the woods" problem.
This is a misinformation problem, which you can't solve simply by saying "you should have been better informed". The whole problem of AIs accelerating the post-truth age is that reliable sources are becoming scarcer at an exponential rate.
samtheDamned|1 year ago
This has got to be criminal negligence.
GaggiX|1 year ago
Just to clarify, the reddit post is 99% creative writing, it's a fake story created by a new Reddit account.
SkyPuncher|1 year ago
You verify information by finding multiple, different sources.
wcedmisten|1 year ago
https://www.vox.com/24141648/ai-ebook-grift-mushroom-foragin...
https://fortune.com/2023/09/03/ai-written-mushroom-hunting-g...
perihelions|1 year ago
https://www.washingtonpost.com/technology/2024/03/18/ai-mush... ("Using AI to spot edible mushrooms could kill you")
thisisauserid|1 year ago
Not necessarily an AI issue.
jowdones|1 year ago
I'm not curious. Because curiosity killed the cat.
imglorp|1 year ago
With regional variations, any book about wild mushrooms is bound to be partly wrong.
mcphage|1 year ago
You had an opportunity to go with “the morel is:” and you whiffed :-(
kamaal|1 year ago
It's not a fact-searching exercise.