
awesomeusername | 6 months ago

I'm probably in the minority here, but for me it's a foregone conclusion that it will become a better therapist, doctor, architect, etc.

Instead of the rich getting access to the best professionals, it will level the playing field. The average low-level lawyer or doctor is not great. How nice if everyone got top-level help.


zdragnar|6 months ago

It would still need to be regulated and licensed. I saw a story today [0] about a guy who tried to replace sodium chloride in his diet with sodium bromide because ChatGPT said he could, and poisoned himself.

With a regulated license, there is someone to hold accountable for wantonly dangerous advice, much like there is with humans.

[0] https://x.com/AnnalsofIMCC/status/1953531705802797070

II2II|6 months ago

There are two different issues here. One is tied to how authoritative we view a source, and the other is tied to the weaknesses of the person receiving advice.

With respect to the former, I firmly believe that existing LLMs should not be presented as a source of authoritative advice. Giving advice that is not authoritative is okay as long as the recipient realizes as much, in the sense that it is something people have to deal with outside of the technological realm anyhow. For example, if you ask a friend for help, you do so with the understanding that, as a friend, they are helping to the best of their ability. Yet you don't automatically assume they are right. They are either right because they do the footwork to ensure accuracy, or you check the accuracy of what they are telling you yourself. Likewise, you don't trust the advice of a stranger unless they are certified, and even that depends upon trust in the certifying body.

I think the problem with technology is that we assume it is a cure-all. While we may not automatically trust the results returned by a basic Google search, a search result with an authoritative-sounding name automatically seems more accurate than one that is a blog posting. (I'm not suggesting this is the only criterion people use. You are welcome to insert your own criteria in its place.) Our trust of LLMs, as they stand today, is even worse. Few people have developed criteria beyond: it is an LLM, so it must be trustworthy; or, it is an LLM, so it must not be trustworthy. And, to be fair, it is bloody difficult to develop criteria for the trustworthiness of LLMs (even arbitrary criteria) because they provide so few cues.

Then there's the bit about the person receiving the advice. There's not a huge amount we can do about that beyond encouraging people to regard the results from LLMs as stepping stones. That is to say, they should take the results and do research that will either confirm or refute them. But, of course, many people are lazy, and nobody has the expertise to analyze the output of an LLM outside of their personal experience and training.

nullc|6 months ago

You don't need a "regulated license" to hold someone accountable for harm they caused you.

The reality is that professional licensing in the US often works to shield its members from responsibility, though its primary function is just preventing competition.

oinfoalgo|6 months ago

I would suspect at some point we will get models that are licensed.

Not tomorrow, but I just can't imagine this not happening in the next 20 years.

fl0id|6 months ago

When has technological progress leveled the playing field? Basically never. At best it shifted the field, as when a machine manufacturer got rich alongside existing wealth. There is no reason for this to go differently with AI, and it's far from certain that it will become better at anything anytime soon. Cheaper, sure. But then people might see slight improvements from talking to an original Eliza/Markov bot, and nobody advocated using those as therapy.

jakelazaroff|6 months ago

Why is that a foregone conclusion?

quantummagic|6 months ago

Because meat isn't magic. Anything that can be computed inside your physical body can be calculated in an "artificially" constructed replica. Given enough time, we'll create that replica; there's no reason to think otherwise.

Mtinie|6 months ago

I agree with you that egalitarian care at low cost is becoming a real possibility.

I’m cynical enough to recognize the price will just go up even if the service overhead is pennies on the dollar.

guappa|6 months ago

I wish I were so naive… but since AI is entirely in the hands of people with money… why would that possibly happen?

sssilver|6 months ago

Wouldn’t the rich afford a much better trained, larger, and computationally more intensive model?

kolinko|6 months ago

With most tech we reach the law of diminishing returns. Sure, there is still variation, but very little:

- the best laptop/phone/TV in the world doesn't offer much more than the most affordable

- you can get a pen for free nowadays that is almost as good at writing as the most expensive pens in the world (before BIC, in the 1920s, pens were a luxury good reserved for Wall Street)

- toilets, washing machines, heating systems and beds in the poorest homes are not very far off from those in expensive homes (in the EU at least)

- flying/travel is similar

- computer games and entertainment, and software in general

The more we remove human work from the loop, the more democratised and scalable the technology becomes.

socalgal2|6 months ago

Does it matter? If mine is way better than what I had before, why does it matter that someone else's is better still? My sister's $130 Moto G is much better than whatever phone she could afford 10 years ago. Does it matter that it's not a $1,599 iPhone 16 Pro Max 1TB?

intended|6 months ago

Why will any of those things come to pass? I’m asking as someone who has used it extensively for such situations.

II2II|6 months ago

I've never been to a therapist for anything that can be described as a diagnosable condition, but I have spoken to one about stress management and things of that ilk. For "amusement" I discussed similar things with an LLM.

At a surface level, the LLM was far more accessible. I didn't have to schedule an appointment weeks in advance. Even with the free tier, I didn't have to worry about time limits per se. There were limits, to be sure, but I could easily think about a question or the LLM's response before responding. In my case, what mattered was turnaround time on my terms rather than an in-depth discussion. There was also less concern about being judged, both by another human and in a way that could get back to my employer, because, yeah, it was employment-related stress, and the only way I could afford human service was through insurance offered by my employer. While there are significant privacy concerns with LLMs as they stand today, you don't have that direct relationship between who is offering the service and the people in your life.

On a deeper level, I simply felt the advice was presented in a more useful form. The human discussions were framed by exercises to be completed between sessions. While the exercises were useful, the feedback was far from immediate, and the purpose of the exercises is best described as a delaying tactic: they provided a framework for deeper thought between discussions, because discussions were confined to times that were available to both parties. LLMs are more flexible. They are always available. Rather than big exercises to delay the conversation by a couple of weeks, there can be bite-sized exercises to enable the next step. On top of that, LLMs allow for an expanded scope of discussion. Remember, I'm talking about workplace stress in my particular case. An LLM doesn't care whether you are talking about how you personally handle stress, or about how you manage a workplace in order to reduce stress for yourself and others.

Now I'm not going to pretend that this sort of arrangement is useful in all cases. I certainly wouldn't trust it for a psychological or medical diagnosis, and I would trust it even less for prescribing medications. On the other hand, people who cannot afford access to traditional professional services are likely better served by LLMs. After all, there are plenty of people who will offer advice, ranging from well-meaning friends who may lack the scope to offer valid advice, to snake-oil salesmen who couldn't care less about outcomes as long as it contributes to their bottom line. Now, I'm not going to pretend that LLMs care about me. On the other hand, they don't care about squeezing me for everything I have either. While the former will never change, I'll admit that the latter may. But I don't foresee that in the immediate future, since I suspect the vendors of these models won't push for it until they have established their role in the marketplace.

nullc|6 months ago

Why do you think the lack of time limits is an advantage?

There is an amount of time spent gazing into your navel which is helpful. Less or more than that can be harmful.

You can absolutely make yourself mentally ill just by spending too much time worrying about how mentally ill you are.

And it's clear that there are a rather large number of people making themselves mentally ill using OpenAI's products right now.

Oh, and, as an aside, nothing stops OpenAI from giving or selling your chat transcripts to your employer. :P In fact, if your employer sues them, they'll very likely be obligated to hand the transcripts over, and you may have no standing to resist it.