awesomeusername | 6 months ago
Instead of the rich getting access to the best professionals, it will level the playing field. The average low-level lawyer, doctor, etc. is not great. How nice if everyone got top-level help.
zdragnar|6 months ago
With a regulated license, there is someone to hold accountable for wantonly dangerous advice, much like there is with humans.
[0] https://x.com/AnnalsofIMCC/status/1953531705802797070
II2II|6 months ago
With respect to the former, I firmly believe that the existing LLMs should not be presented as a source of authoritative advice. Giving advice that is not authoritative is okay as long as the recipient realizes that, in the sense that it is something people have to deal with outside the technological realm anyhow. For example, if you ask a friend for help, you do so with the understanding that, as a friend, they are helping to the best of their ability. Yet you don't automatically assume they are right. They are either right because they do the footwork for you to ensure accuracy, or you check the accuracy of what they are telling you yourself. Likewise, you don't trust the advice of a stranger unless they are certified, and even that depends on trust in the certifying body.
I think the problem with technology is that we assume it is a cure-all. While we may not automatically trust the results returned by a basic Google search, a search result coupled with an authoritative-sounding name automatically sounds more accurate than one that is a blog posting. (I'm not suggesting this is the only criterion people use. You are welcome to insert your own criteria in its place.) Our trust of LLMs, as they stand today, is even worse. Few people have developed criteria beyond: it is an LLM, so it must be trustworthy; or, it is an LLM, so it must not be trustworthy. And, to be fair, it is bloody difficult to develop criteria for the trustworthiness of LLMs (even arbitrary criteria) because they provide so few cues.
Then there's the bit about the person receiving the advice. There's not a huge amount we can do about that beyond encouraging people to regard the results from LLMs as stepping stones. That is to say, they should take the results and do research that will either confirm or refute them. But, of course, many people are lazy, and nobody has the expertise to analyze the output of an LLM outside of their personal experience and training.
nullc|6 months ago
The reality is that professional licensing in the US often works to shield its communities from responsibility, though its primary function is simply to prevent competition.
oinfoalgo|6 months ago
Not tomorrow, but I just can't imagine this not happening in the next 20 years.
Mtinie|6 months ago
I’m cynical enough to recognize the price will just go up even if the service overhead is pennies on the dollar.
kolinko|6 months ago
- the best laptop/phone/TV in the world doesn't offer much more than the most affordable
- you can get a pen for free nowadays that is almost as good at writing as the most expensive pens in the world (before BIC, in the 1920s, pens were a luxury good reserved for Wall Street)
- toilets, washing machines, heating systems and beds in the poorest homes are not very far off from those in expensive homes (in the EU, at least)
- flying/travel is similar
- computer games and entertainment, and software in general
The more we remove human work from the loop, the more democratised and scalable the technology becomes.
II2II|6 months ago
At a surface level, the LLM was far more accessible. I didn't have to schedule an appointment weeks in advance. Even with the free tier, I didn't have to worry about time limits per se. There were limits, to be sure, but I could easily think about a question or the LLM's response before responding. In my case, what mattered was turnaround time on my terms rather than an in-depth discussion. There was also less concern about being judged, both by another human and in a way that could get back to my employer, because, yeah, it was employment-related stress, and the only way I could afford human service was through insurance offered by my employer. While there are significant privacy concerns with LLMs as they stand today, you don't have that direct relationship between who is offering it and the people in your life.
On a deeper level, I simply felt the advice was presented in a more useful form. The human discussions were framed by exercises to be completed between sessions. While the exercises were useful, the feedback was far from immediate and the purpose of the exercises is best described as a delaying tactic: it provided a framework for deeper thought between discussions because discussions were confined to times that were available to both parties. LLMs are more flexible. They are always available. Rather than dealing with big exercises to delay the conversation by a couple of weeks, they can be bite sized exercises to enable the next step. On top of that, LLMs allow for an expanded scope of discussion. Remember, I'm talking about workplace stress in my particular case. An LLM doesn't care whether you are talking about how you personally handle stress, or about how you manage a workplace in order to reduce stress for yourself and others.
Now I'm not going to pretend that this sort of arrangement is useful in all cases. I certainly wouldn't trust it for a psychological or medical diagnosis, and I would trust it even less for prescribed medications. On the other hand, people who cannot afford access to traditional professional services are likely better served by LLMs. After all, there are plenty of people who will offer advice. Those people range from well-meaning friends who may lack the scope to offer valid advice, to snake-oil salesmen who couldn't care less about outcomes as long as it contributes to their bottom line. Now I'm not going to pretend that LLMs care about me. On the other hand, they don't care about squeezing me for everything I have either. While the former will never change, I'll admit that the latter may. But I don't foresee that in the immediate future, since I suspect the vendors of these models won't push for it until they have established their role in the marketplace.
nullc|6 months ago
There is an amount of time spent gazing into your navel which is helpful. Less or more than that can be harmful.
You can absolutely make yourself mentally ill just by spending too much time worrying about how mentally ill you are.
And it's clear that there are a rather large number of people making themselves mentally ill using OpenAI's products right now.
Oh, and, as an aside, nothing stops OpenAI from giving or selling your chat transcripts to your employer. :P In fact, if your employer sues them, they'll very likely be obligated to hand the transcripts over, and you may have no standing to resist it.