top | item 47180737

(no title)

erikgahner | 3 days ago

A lot of people might read this and infer that AI use causes depressive symptoms, but the study cannot say anything about causation at all. The study is also transparent about this fact: "Further work is needed to understand whether these associations are causal"


nDRDY | 3 days ago

Y'all picked a funny time to nitpick at standard academic boilerplate. If we discounted all research that only "associated" things, then we wouldn't know much at all! Then again, arguably we don't.

erikgahner | 3 days ago

I wouldn't call this a minor detail (i.e., nitpicking), and it is worth pointing out again and again when these studies get public attention.

We should encourage stronger research designs (including A/B tests) if we care about the impact of AI use on mental health outcomes. A study like this one cannot say anything about the effect at all (it is even possible that AI use will have a positive impact on mental health).

squigz | 3 days ago

The "correlation is not causation" argument gets brought up every single time such a study is shared on HN, so I'm not sure what you mean by "picked a funny time"?

Anyway, there's no reason to discount it, but it does mean you can't run with the assumption that there is causation.

ToucanLoucan | 3 days ago

My reaction is that depressed people are, for reasons like the ones you described, more likely to use generative AI. I can think of a bunch of reasons, most tied to executive function in some way, but like, are we really surprised that people who are struggling to find pleasure/accomplishment/meaning in general life find AI appealing? You get to just play with it continuously, it always answers your messages, it always encourages you to keep talking, keep interacting with it, and it will make things for you for no greater cost than the asking.

I don't think this is a mark against those users, to be clear; I see this as largely the same chicken-and-egg relationship you find between depressed people and video games. It's also subject to the same kinds of abuses on the part of the merchant, things like in-game purchases that are particularly attractive to people with executive function issues, which is why the predominant "whales" of the video game industry, and especially the mobile game industry, are people who are already struggling.

I think AI is going to end up in a similar position because, again, not trying to be shitty, but if your life kind of broadly sucks, I'm sure playing in an AI chatbox all day, where something that sounds vaguely human will validate whatever you say, make stuff for you on request, and never challenge you in the slightest, is quite attractive. And, thinking through it further, these systems also adapt to their users and learn how to engage with them better, as many products have before them that have trapped the neurodivergent in problematic usage scenarios.

I don't judge the people, but I am incredibly suspicious of the businesses behind these and other products that seem almost designed to attract neurodivergent people. If you design a machine that gives dopamine on demand, you can't really be shocked when people who are dopamine‑starved use it a lot. Potentially to a harmful extent.

drakonka | 3 days ago

Anecdotally, not with depressive symptoms but with anxiety: I found that using ChatGPT/Claude to 'brainstorm' personal situations was definitely a gateway to further rumination for me. As someone who works on AI agents, I thought I'd never fall into that trap and knew how to use it 'properly' when I wanted a sounding board. I was wrong. I now avoid general-use chatbots for personal issues as much as I can, because it feels like it's helping in the short term but has always made things worse in aggregate.

(I say general-use because I think there are some purpose-built AI tools that _can_ actually be helpful for this - but opening a ChatGPT tab, even with lots of relevant instructions, ain't it in my experience. The interface itself is counter-productive to healthy processing.)