'It cannot provide nuance': UK experts warn AI therapy chatbots are not safe

165 points by distalx 1 month ago | 204 comments
  • hy555 1 month ago
    Throwaway account. My ex partner was involved in a study which said these things were not ok. They were paid not to publish by an undisclosed party. That's how bad it has got.

    Edit: the study compared therapist outcomes to AI outcomes to placebo outcomes. Therapists in this field performed slightly better than placebo, which is pretty terrible. The AI performed much worse than placebo, which is very terrible.

    • neilv 1 month ago
      Sounds like suppressing research, at the cost of public health/safety.

      Some people knew what the tobacco companies were secretly doing, yet they kept quiet, and let countless family tragedies happen.

      What are the best channels for people with info to help halt the corruption, this time?

      (The channels might be different than usual right now, with much of US federal being disrupted.)

      • hy555 1 month ago
        Start digging into psychotherapy research and tearing their papers apart. Then the SPR. Whole thing is corrupt to the core. A lot of papers drive public health policy outside the field as it's so vague and easy to cite but the research is only fit for retraction watch.
        • neilv 1 month ago
          Being paid to suppress research on health/safety is potentially a different problem than, say, a high rate of irreproducible results.

          And if the alleged payer is outside the field, this might also be relevant to the public interest in other regards. (For example, if they're trying to suppress this, what else are they trying to do. Even if it turns out the research is invalid.)

      • ilaksh 1 month ago
        Which model exactly? What type of therapy/prompt? Was it a completely dated model, like in the article where they talk about a model from two years ago? We have had massive progress in two years.
        • raverbashing 1 month ago
          Honestly none of the companies are tuning their model to be better at therapy.

          Also, it is not expected that the training material for the model deals with the actual practical aspects of therapy; only some of the theoretical aspects are probably in that material.

          • jdietrich 1 month ago
            >none of the companies are tuning their model to be better at therapy

            BrickLabs have developed an expert-fine-tuned model specifically to provide psychotherapy. Their model has shown modestly positive results in a reasonably large preregistered RCT.

            https://trytherabot.com/

            https://ai.nejm.org/doi/full/10.1056/AIoa2400802

            • ilaksh 1 month ago
              The leading-edge models are steerable via instructions. That's why agents are possible. Many online therapy companies are training or instructing their agents in this domain.
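
              For what it's worth, "instructing" here usually just means layering a domain-specific system prompt on top of a general model. A rough sketch with the OpenAI Python SDK; the model name and prompt text are made up for illustration, not taken from any real therapy product:

                from openai import OpenAI

                client = OpenAI()  # reads OPENAI_API_KEY from the environment

                # Illustrative domain instructions only; no real product's configuration is implied.
                SYSTEM_PROMPT = (
                    "You are a supportive listener. You are not a licensed therapist. "
                    "Do not give medical advice. If the user mentions self-harm, "
                    "encourage them to contact local emergency services or a crisis line."
                )

                def reply(user_message: str) -> str:
                    response = client.chat.completions.create(
                        model="gpt-4o-mini",  # any instruction-following chat model would do
                        messages=[
                            {"role": "system", "content": SYSTEM_PROMPT},
                            {"role": "user", "content": user_message},
                        ],
                    )
                    return response.choices[0].message.content

                print(reply("I've had a rough week and I can't sleep."))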
          • sorenjan 1 month ago
            What did they use for placebo? Talking to somebody without education, or not talking to anybody at all?
            • hy555 1 month ago
              Not talking to anyone at all.
              • zargon 1 month ago
                What did they do then? If they didn't do anything, how can it be considered a placebo?
                • trod1234 1 month ago
                  That seems like a very poor control group.
              • scotty79 1 month ago
                I've heard of more recent research with LLMs that found the AI therapist was straight up better than human therapists across all measures.
                • twobitshifter 1 month ago
                  They should do ELIZA as the control or at least include it to see how far we have or haven’t advanced.
                  • kbelder 1 month ago
                    Or a normal person with no training in therapy?
                  • cube00 1 month ago
                    The amount of free money sloshing around the AI space is ridiculous at the moment.
                    • rsynnott 1 month ago
                      I'm quite curious how the placebo in a study like this works.
                      • derbOac 1 month ago
                        Usually in psychotherapy controls, there's either:

                        waitlist control, where people get nothing

                        psychoeducational, where people get some kind of educational content about mental health but not therapy

                        existing nonpsychological service, like physical checkups with a nurse

                        existing therapy, so not placebo but current treatment

                        pharmacological placebo, where they're given a placebo pill and told it's psychiatric medication for their concern

                        A kind of "nerfed" version of the therapy, such as supportive therapy where the clinician just provides empathy etc but nothing else

                        How to interpret results depends on the control.

                        It's relevant to debates about general effects in therapy (rapport, empathy, fit) versus specific effects (effects due to the particular techniques of a particular therapy).

                        Bruce Wampold has written a lot about types of controls although he has a hard nonspecific/general effects take on therapy.

                        • caseyy 1 month ago
                          I know many pro-LLM people here are very smart, but sometimes it's wise to heed the words of world-renowned experts on a subject.

                          Otherwise, you may end up defending things like this, which is really foolish:

                          > “Seriously, good for you for standing up for yourself and taking control of your own life,” it reportedly responded to a user, who claimed they had stopped taking their medication and had left their family because they were “responsible for the radio signals coming in through the walls”.

                          • mvdtnz 1 month ago
                            As much as I tend to defer to experts, you must also be wary of experts whose very livelihoods are at risk. They may not have your interests at heart.
                            • 52-6F-62 1 month ago
                              And the tech bros pushing magic chatbots that neither they nor anyone else has any insight into, but from which the same tech bros derive an even higher salary, a more opulent livelihood, and additional rent, certainly do have your interests at heart?

                              Fuck me. Maybe that guy on the street corner selling salvation or “cuckane” really was dealing in the real thing, too, eh?

                              • krainboltgreene 1 month ago
                                Hell yeah, rail against those profiteering…therapists.

                                Man I hate this modern shift of “actually anyone who is an expert is also trying to deceive me”. Extremely healthy shit for a civilization.

                                • mvdtnz 1 month ago
                                  Is there something about therapists that makes them inherently noble and not prone to the same incentives as everyone else?
                                  • ekianjo 1 month ago
                                    Experts have a direct and obvious incentive to justify their existence. Radio experts warned us about TV. TV experts warned us about the Internet. If you live long enough you see it over and over again.
                                • simonw 1 month ago
                                  That one was (genuinely) a bug. OpenAI rolled it back. https://openai.com/index/expanding-on-sycophancy/

                                  (But yeah, relying on systems that can have bugs like that for your mental health is terrifying.)

                                  • cdrini 1 month ago
                                    Yeah I think there is plenty of room for good discussion here, but using that quote without context is misleading. And the faulty model was pulled after only a few days of being out, iirc. It definitely does speak to the necessity of nuance when analysing AI in these contexts; results for one model might not necessarily hold for another, and even system prompts could change results.
                                    • vrighter 1 month ago
                                      you cannot really roll back a bug in a black box system you don't understand
                                      • clncy 1 month ago
                                        Exactly. More like changing the state of the system to reduce the observed behaviour while introducing other (unknown) behaviours
                                        • cdrini 1 month ago
                                          I don't think that's quite true; even a system you don't understand has observable behaviour. And you can roll back to a certain state that doesn't exhibit the undesirable observable behaviour. If anything, most things in life operate this way.
                                          • casey2 1 month ago
                                            The very fact that the "world class experts" are warning people not to use it means they have already been replaced in most fields that matter.

                                            They didn't feel threatened by systems like cleverbot or GPT-3.5

                                            • cmsj 1 month ago
                                              Congrats, you have trapped yourself in an ideological bubble where nobody can ever tell you that AI is a bad fit for a given application.

                                              Try this on for size: I am not a therapist, but I will happily tell you that a statistical word generating LLM is a truly atrocious substitute for the hard work of a creative, empathetic and caring human being.

                                          • lurk2 1 month ago
                                            I tried Replika years ago after reading a Guardian article about it. The story passed it off as an AI model that had been adapted from one a woman had programmed to remember her deceased friend using text messages he had sent her. It ended up being a gamified version of Smarter Child with a slightly longer memory span (4 messages instead of 2) that constantly harangued the user to divulge preferences that were then no-doubt used for marketing purposes. I thought I must be doing something wrong, because people on the replika subreddit were constantly talking about how their replika agent was developing its own personality (I saw no evidence at any point that it had the capacity to do this).

                                            Almost all of these people were openly in (romantic) love with these agents. This was in 2017 or thereabouts, so only a few years after Spike Jonze’s Her came out.

                                            From what I understand the app is now primarily pornographic (a trajectory that a naiver, younger me never saw coming).

                                            I mostly use Copilot for writing Python scripts, but I have had conversations with it. If the model were running locally on your own machine, I can see how it would be effective for people experiencing some sort of emotional crisis. Anyone using a Meta AI for therapy is going to learn the same hard lesson that the people who trusted 23 and Me are currently learning.
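
                                            For what it's worth, a fully local setup is already practical. A rough sketch using the Hugging Face transformers text-generation pipeline; the model name is just an example of a small open-weights chat model, not a recommendation, and nothing in this loop leaves your machine:

                                              # Rough sketch of a fully local chat loop: no conversation data is sent anywhere.
                                              # "Qwen/Qwen2.5-0.5B-Instruct" is only an example of a small open-weights chat model.
                                              from transformers import pipeline

                                              chat = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

                                              history = [{"role": "system", "content": "You are a calm, supportive listener."}]

                                              while True:
                                                  user = input("you> ").strip()
                                                  if not user:
                                                      break
                                                  history.append({"role": "user", "content": user})
                                                  result = chat(history, max_new_tokens=200)
                                                  # The pipeline returns the message list with the new assistant turn appended last.
                                                  answer = result[0]["generated_text"][-1]["content"]
                                                  print("bot>", answer)
                                                  history.append({"role": "assistant", "content": answer})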

                                            • mrbombastic 1 month ago
                                              “I thought I must be doing something wrong, because people on the replika subreddit were constantly talking about how their replika agent was developing its own personality (I saw no evidence at any point that it had the capacity to do this).”

                                              People really like to anthropomorphize any object with even the most basic communication capabilities, and most people have no concept of the distance between parroting phrases and full-on human consciousness. In the 90s Furbys were a popular toy that started off speaking Furbish and then eventually spoke some (maybe 20?) human phrases; many people were absolutely convinced you could teach them to talk and learn like a human and that they had essentially bought a very intelligent pet. The NSA even banned them for a time because they thought they were recording and learning from their surroundings, despite that being completely untrue. Point being, this is going to get much worse now that LLMs have gotten a whole lot better at mimicking human conversations and there is an incentive for companies to overstate capabilities.

                                              • trod1234 1 month ago
                                                This actually isn't that surprising.

                                                There are psychological blindspots that we all have as human beings, and when stimulus is structured in specific ways people lose their grip on reality. Or, more accurately, people have their grip on objective reality ripped away from them without realizing it, because these things operate on us subliminally (to a lesser or greater degree depending on the individual), and it mostly happens pre-perception, with the victim none the wiser. They then effectively become slaves to the loudest monster, which is the AI speaking in their ear more than anyone else, and by extension to the slave master who programmed the AI.

                                                One such blindspot is the consistency blindspot, where someone may induce you to say something indicating agreement with something similar first, and then ask the question they really want to ask. Once you say something in agreement and something similar is then asked, there is bleedover, and you fight your own psychology later if you didn't already know this fixed action pattern and have defenses to short-circuit it. That's just a surface-level blindspot that car salesmen use all the time; there are much more subtle ones, like distorted reflected appraisal, which are used by cults and nation states for thought reform.

                                                To remain internally consistent, with distorted reflected appraisal, your psychology warps itself, and you as a person unravel. These things have been used in torture, but almost no one today is taught what the elements of torture are so they can recognize it, or know how it works. You would be surprised to find that these things are everywhere today, even in K12 education and that's not an accident.

                                                Everyone has reflected appraisal because this is how we adopt the cultural identity we have as people from our parents while we are children.

                                                All that's needed for torture to break someone down are the elements, structuring, and clustering.

                                                Those elements are isolation, cognitive dissonance, coercion with perceived or real loss, and lack of agency to remove them. With these, given time and exposure, someone breaks down in a series of steps: rational thought recedes, involuntary hypnosis sets in, and then comes the psychological break (dissociation, or a special semi-lucid psychosis capable of planning).

                                                Structuring uses diabolical structures to turn the psyche back on itself in a trauma loop, and clustering includes any multiples of these elements or structures within a short time period, as well as events that increase susceptibility, such as narco-analysis/synthesis based in dopamine spikes triggered by associative priming (operant conditioning). Drug use makes one more susceptible, as they found in the early 30s with barbiturates, and it's since been improved so you can induce this in almost anyone with a phone.

                                                No AI will ever be able to create and maintain a consistent reflected appraisal for the people they are interacting with, but because the harmful effects aren't seen immediately, people today have blinded themselves and discount the harms that naturally result. The harms from the unnatural loss of objective reality.

                                                • tbrownaw 1 month ago
                                                  > when stimulus is structured in specific ways people lose their grip on reality, or rather more accurately, people have their grip on objective reality ripped away from them without them realizing it because these things operate on us subliminally

                                                  The world would look quite different if this were true.

                                                  • lurk2 1 month ago
                                                    Very interesting. Could you recommend any further reading?
                                                    • trod1234 1 month ago
                                                      Robert Cialdini's book Influence is probably the lightest and covers most of the different blindspots we have, except distorted reflected appraisal. He provides the principles but leaves most of the structure up to the reader's imagination.

                                                      The coursework in an introduction to communication class may provide some foundational details (depending on the instructor); Sapir-Whorf has its basis in blindspots.

                                                      Robert Lifton touches on the detailed case studies of torture from the 1950s (under Mao), in his book "Thought Reform and the Psychology of Totalism", and I've heard in later books he creates a framework that classifies cultures as Protean (self-direction, growth, self-determination/agency), or Totalism (towards control which eventually fails Darwin's fitness).

                                                      I haven't actually read his later books yet though his earlier books were quite detailed. I believe the internet archive has a copy of this available for reading as a pdf but be warned this is quite dark.

                                                      Joost Meerloo in his, "Rape of the Mind" as an overview touches on how Totalitarianism grows in the setting of WW2 and some Mao, though takes Freudian look at things (dating certain aspects which we know to be untrue now).

                                                      From there it branches out depending on your interest. The modern material itself while based on these earlier works often has the origins obscured following a separation of objectionable concerns.

                                                      There are congressional reports on COINTELPRO, and you may notice it has modern iterations (touching on protest/activist harassment), as well as the history of the East German Stasi and Zersetzung, where governments used this to repress the population.

                                                      There are aspects in the Octalysis Framework (gamification/game design).

                                                      Paulo Freire used some of this material in developing his critical pedagogy, which was used in the 70s to shift teaching away from a reduction to first principles (based in the Romans and Greeks) toward what's commonly known as rote-based teaching, later called "Lying to Children", which reverses that approach and follows more closely to gnosticism.

                                                      The approach is basically that you give a flawed, useless model which includes both true and false things. Students learn it to competence, then are given a new model that's less flawed, where they have to learn new things and unlearn things already learned. You never actually unlearn anything, and it induces frustration and torture, destroying minds in the process. Each step towards gnosis becomes more useful, but only the most compliant and blind make it to the end, with few exceptions. Structures that burn bridges induce failure in math, and the effect is that this acts as a filter to gatekeep the technical fields.

                                                      The water pipe analogy for voltage in electronics is an example of the latter, as opposed to the first-principles approach using diffusion, which is more correct.

                                                      Disney and Dreamworks use distorted reflected appraisal tailored towards destructive interference of identity, which some employees have blown the whistle on (for the latter); it's aimed at children and sneaks things past their adult guardians. There's quite a lot out there if you look around, but it's not under any single name; it's scattered. Hopefully that helps.

                                                      The Dreamworks whistleblower interview can be found here: https://www.youtube.com/watch?v=vvNZRUtqqa8

                                                      All indexed references of it seem to now have been removed from search. I'm glad now that I kept a reference link in a text file.

                                                      Update: Dreamworks isn't Pixar, I misremembered; they are owned by Universal Studios, whereas Disney owns Pixar. Pixar and Disney appear to do the same things.

                                                  • harvey9 1 month ago
                                                    The other lesson here is that general audience news sites are pretty bad at technology coverage.
                                                  • mrcsharp 1 month ago
                                                    > "I personally have the belief that everyone should probably have a therapist,” he said last week. “It’s like someone they can just talk to throughout the day, or not necessarily throughout the day, but about whatever issues they’re worried about and for people who don’t have a person who’s a therapist, I think everyone will have an AI.”

                                                    He seems so desperate to sell AI that he forgot such a thing already exists. It's called family, or a close friend.

                                                    I know there are people who truly have no one, and they could benefit from a therapist. Having them rely on AI could prove risky, especially if the person is suffering from depression. What if the AI pushes them towards committing suicide? And I'll probably be told that OpenAI or Meta or MS can put up guardrails against this. What happens when that fails (and we've seen it fail)? Who'll be held accountable? Does an LLM take the Hippocratic oath? Are we actually abandoning all standards in favour of Mark Zuckerberg making more billions of dollars?

                                                    • fy20 1 month ago
                                                      > It's called family or a close friend.

                                                      It's good that you are socially privileged, but a lot of people do not have someone close whom they feel secure confiding in. Even a therapist doesn't help here, as a lot of people have pre-existing notions about what a therapist is: "I'm not crazy, why do I need a therapist?"

                                                      Case in point, my father's cousin lived alone and didn't have any friends. He lived in the same house his whole life, just outside London by himself, with no indoor toilet or hot water. A few years ago, social services came after the neighbours called, because his roof collapsed and he was just living as if nothing was wrong. My father was his closest living family, but they'd not spoken in 20 years or more.

                                                      I feel this kind of thing is more common than you think. Especially with older people: they may have friends on the outside, but they aren't close enough with them to talk about whatever is on their mind.

                                                      • mrcsharp 1 month ago
                                                        I did address the fact that not everyone has a family or a close friend.

                                                        What you described isn't a good fit for using AI. What would an LLM do for him?

                                                        The fact his roof collapsed and he didn't think much of it indicates a deeper problem only a human can begin to tackle.

                                                        We really shouldn't be solving deep societal problems by throwing more tech at them. That experiment has already failed.

                                                        • casey2 1 month ago
                                                          Having arms and legs isn't "physically privileged". If one is unable to create and maintain relationships then they likely have some cocktail of physical and mental disabilities. Most functioning adults can go to a bar.

                                                          The point being that fixing your own life is going to bring much more in the way of benefits than the government or Sam trying to fix it for you. If one is a complete social reject then no amount of AGI will save them. People without close relationships are zombies that walk among us; in most ways they are already dead.

                                                          • ponector 1 month ago
                                                            I bet talking to a stranger in a bar will do more harm than talking to the free version of ChatGPT.
                                                        • cmsj 1 month ago
                                                          Ultimately what therapists do is lead you through an exploration of yourself, in which you actually do all of the work.

                                                          I 100% do not doubt the usefulness of therapy for those who are suffering in some way, but I feel like the idea that "everyone should probably have a therapist" is kinda odd - if you're generally in a good place, you can explore your feelings/motivations yourself with little risk.

                                                          • cdrini 1 month ago
                                                            I mean, the article addresses exactly your point like one line down?

                                                            > In a separate interview last week, Zuckerberg said “the average American has three friends, but has demand for 15” and AI could plug that gap.

                                                            And I think we should definitely look on this tech with scrutiny, but I think another angle to look at it is: which is worse? No therapy or AI therapy? You mention suicide, but which would result in a reduction in suicide attempts, a or b? I don't have an answer, but I could see it being possible that because AI therapy provides cheaper, more frequent access to mental care, even if it is lower quality, it could result in a net improvement over the status quo on something like suicide attempts.

                                                          • Xcelerate 1 month ago
                                                            I have two lines of thought on this:

                                                              1) Chatbots are never going to be perceived as being as safe or effective as humans by default, primarily due to human fiat. Professionals like counselors (and lawyers, doctors, software engineers, etc.) will always claim that an LLM cannot do their job, namely because acknowledging such threatens their livelihood. Determining whether LLMs genuinely provide therapeutic value to humans would require rigorous, carefully controlled experiments conducted over many years.

                                                              2) Chatbots definitely cannot replace human therapists in their current state. That much seems quite obvious to me for various reasons already argued well by others on here. But I had to highlight point #1 as devil's advocate, because adopting the mindset that "humans are inherently better by default" due to some magical or scientifically unjustifiable reason will prevent forward progress. The goal is to eliminate the (quite reasonable) fear people have of eventually losing their job to AI by enacting societal change now, rather than insisting in perpetuity that chatbots are necessarily inferior, at which point everyone will in fact lose their jobs because we had no plan in place.

                                                            • cmsj 1 month ago
                                                              I am not a lawyer or a doctor or a counsellor. I will gladly claim that an LLM should not replace any of those professions.

                                                              It may be able to assist those professionals, but that is as far as I am willing to go, because I am not blinded by the shine of the statistical turks we are deploying right now.

                                                              • HPsquared 1 month ago
                                                                The other rhetorical hazard is to insist that the new thing has to be a 1:1 replacement for a human therapist in the system we have currently. Who's to say it can't take a different form? There are so many ways a text generator could be used for therapeutic purposes.
                                                                • cmsj 1 month ago
                                                                  The fact that you (correctly) called it a text generator, should tell you everything you need to know about why it can't replace a skilled human who takes the time to genuinely understand and empathise with you.
                                                                  • HPsquared 1 month ago
                                                                    Indeed. But they can still provide information and perhaps advice. They provide a place to work through an issue as a kind of "responsive diary" that gives its own input. That makes it much easier for someone to write their thoughts and feelings out when they might not otherwise, possibly gaining insight or catharsis.
                                                                • senordevnyc 1 month ago
                                                                  Agreed. Also, LLMs are already better than 80% of therapists. I don’t think most people understand the delta between a good therapist and a bad one, and how few really are very good.
                                                                • jdietrich 1 month ago
                                                                  In the UK (and many other jurisdictions outside the US), psychotherapy is completely unregulated. Literally anyone can advertise their services as a psychotherapist or counsellor, regardless of qualifications, experience or their suitability to work with potentially vulnerable people.

                                                                  Compared to that status quo, I'm not sure that LLMs are meaningfully more risky - unlike a human, at least it can't physically assault you.

                                                                  https://www.bacp.co.uk/news/news-from-bacp/2020/6-march-gove...

                                                                  https://www.theguardian.com/society/2024/oct/19/psychotherap...

                                                                  • pornel 1 month ago
                                                                    UK doesn't protect the term psychotherapy, but there's a distinction between services of counsellors and (regulated) psychologists.

                                                                    For counselling, people are encouraged to choose counsellors accredited by professional orgs like BACP.

                                                                    • jdietrich 1 month ago
                                                                      "Psychologist" is not a protected title and anyone can use it. "Clinical psychologist" is a protected title, and one that requires an extremely high level of training and very strict professional standards. I imagine that the overwhelming majority of the population are completely oblivious to this distinction.

                                                                      The BACP's standards really aren't very high, as you can qualify for membership after a one-year part-time course and a few weeks of work experience. Their disciplinary procedures are, in my opinion, almost entirely ineffectual. They undertake no meaningful monitoring of accredited members, relying solely on complaints from members of the public. Out of tens of thousands of registered members, only a single-digit number are subject to disciplinary action every year. The findings of the few disciplinary hearings they do actually conduct suggest to me that they are perfectly happy to allow lazy, feckless and incompetent practitioners to remain on their register, with only a perfunctory slap on the wrist.

                                                                      BACP membership is of course entirely voluntary and in no way necessary in order to practice as a counsellor or psychotherapist.

                                                                      https://www.hcpc-uk.org/news-and-events/blog/2023/understand...

                                                                      https://www.bacp.co.uk/about-us/protecting-the-public/profes...

                                                                  • James_K 1 month ago
                                                                     Respectfully, no sh*t. I've talked to a few of these things, and they are feckless yes-men. It's honestly creepy; they sound like they want something from you. Which I suppose they do: continual use of their services. I know a few people who use these things for therapy (I think it is the most popular use now) and I'm downright horrified at the sort of stuff they say. I even know a person who uses the AI to date. They will paste conversations from apps into the AI and ask it how to respond. I've set a rule for myself: I will never speak to machines. Sure, right now it's obvious that they are trying to inflate my ego and keep me using the service, but one day they might get good enough to trick me. I already find social media algorithms quite addictive, and so I have minimised them in my life. I shudder to think what a trained agent like these may be capable of.
                                                                    • 52-6F-62 1 month ago
                                                                      I’ve also experimented with them in that capacity. I like to know first hand. I play the skeptic but I tend to feed the beast a little blood in order to understand it, at least.

                                                                      As a result, I agree with you.

                                                                      It gives me pause when I stop to think about anyone without more context placing so much trust in these. And the developers engaged in the “industry” of it demanding blind faith and full payment.

                                                                    • kbelder 1 month ago
                                                                      I think a lot of human therapists are unsafe.

                                                                      We may just need to start comparing success rates and liability concerns. It's kind of like deciding when unassisted driving is 'good enough'.

                                                                      • th0ma5 1 month ago
                                                                          That's not exactly reasoning that carries over to LLMs ... In automation studies, things are most dangerous just before full automation due to bias: why tap the brakes when surely the car will do it on its own, even though that isn't a guarantee?
                                                                        • timewizard 1 month ago
                                                                          The therapist controls the extent of the relationship which determines profits. A disinterested third party should be involved.
                                                                        • sheepscreek 1 month ago
                                                                            That’s fair, but there’s another nuance that they can’t solve for: cost and availability.

                                                                          AI is not a substitute for traditional therapy, but it offers an 80% benefit at a fraction of the cost. It could be used to supplement therapy, for the periods between sessions.

                                                                            The biggest risk is with privacy. Meta couldn’t be trusted with knowing what you’re going to wear or eat; now imagine them knowing your deepest, darkest secrets. The advertising business model does not gel well with providing mental health support. Subscription (with privacy guarantees) is the way to go.

                                                                          • zdragnar 1 month ago
                                                                            > The biggest risk is with privacy

                                                                            No, the biggest risk is that it behaves in ways that actively harm users in a fragile emotional state, whether by enabling or pushing them into dangerous behavior.

                                                                            Many people are already demonstrably unable to handle normal AI chatbots in a healthy manner. A "therapist" substitute that takes a position of authority as a counselor ramps that danger up drastically.

                                                                            • sheepscreek 1 month ago
                                                                              You’re saying that as if AI is a singular thing. It is not.

                                                                                Also, for every naysayer I encounter now, I’m going to start by asking: “Have you ever been in therapy? For how long? Why did you stop? Did it help?”

                                                                              Therapy isn’t a silver bullet. Finding a therapist that works for you takes years of patient trial and error.

                                                                            • caseyy 1 month ago
                                                                              > 80% benefit at a fraction of the cost

                                                                              I'm sure 80% of expert therapists in any modality will disagree.

                                                                              At best, AI can compete with telehealth therapy, which is known for having practically no quality standards. And of course, LLMs surpass "no quality standards" with flying colors.

                                                                              I say this very rarely because I think such statements should be used with caution, but in this case: saying that LLMs can do 80% of a therapist's work is actually harmful for people who might believe it and not seek effective therapy. Going down this path has a good probability of costing someone dearly.

                                                                              • sheepscreek 1 month ago
                                                                                My statement is intended for individuals who cannot afford therapy. That’s why my comment centers on cost and availability (accessibility). It’s a frequently overlooked reason why people hesitate to seek therapy.

                                                                                Given that, AI can be just as good as talking to a friend when you don’t have one (or feel uncomfortable discussing something with one).

                                                                                • GreenWatermelon 1 month ago
                                                                                  > AI can be just as good as talking to a friend when you don’t have one

                                                                                    This sentence effectively reads "AI can be just as good as (nothing)" since you can't talk to a friend when you don't have one.

                                                                                    Of course, I understand the point you were trying to make, which is that AI is better than absolutely nothing; but I disagree, in the vein that AI will give you a false sense of companionship that might lead you further towards bad outcomes.

                                                                                  • caseyy 1 month ago
                                                                                    > AI can be just as good as talking to a friend when you don’t have one

                                                                                    This is not true, and it's not even wrong. You almost cannot argue with such a statement without being ridiculous. The best I can say is: natural language synthesis is not a substitute for friends.

                                                                                    If we are debating these things, it's evidence we adopted LLMs with far too little forethought.

                                                                                    I mean, on a technicality, you could say "my friend synthesizes plausible language, this can do it, too. So it can substitute a little bit!" but at that point I'm pretty sure we're not discussing friendship in its essence, and the (emotional, physical, social, etc) support that comes with it.

                                                                                • rsynnott 1 month ago
                                                                                  > AI is not a substitute for traditional therapy, but it offers an 80% benefit at a fraction of the cost.

                                                                                  That... seems optimistic. See, for instance, https://www.rollingstone.com/culture/culture-features/ai-spi...

                                                                                  No psychologist will attempt to convince you that you are the messiah. In at least some cases, our robot overlords are doing _serious active harm_ which the subject would be unlikely to suffer in their absence. LLM therapists are rather likely to be worse than nothing, particularly given their tendency to be overly agreeable.

                                                                                  • sarchertech 1 month ago
                                                                                    Does it offer 80% of the benefit? An AI could match what a human therapist would say 80% (or 99%) of the time and still provide negative benefit.

                                                                                      Therapy seems like the last place an LLM would be beneficial because it’s very hard to keep an LLM from telling you what you want to hear. I can’t see any way you could guarantee that a chatbot won’t cause severe damage to a vulnerable patient by supporting their neurosis.

                                                                                    We’re not anywhere close to an LLM which is trained to be supportive and understanding in tone but will never affirm your irrational fears, insecurities, and delusions.

                                                                                    • pitched 1 month ago
                                                                                        Sometimes, the process of gathering our thoughts enough to articulate them into a prompt is where the benefit is. AI as the rubber duck has a lot of value. Understanding whether this is what’s needed vs. something deeper is beyond the scope of what AI can handle.
                                                                                      • sarchertech 1 month ago
                                                                                        And that’s fine as long as the person using it has a sophisticated understanding of the technology and a company isn’t selling it as a “therapist”.

                                                                                        When an AI therapist from a health startup confirms that a mentally disturbed person is indeed hearing voices from God, or an insecure teenager uses meta AI as a therapist because Mark Zuckerberg said they should and it agrees with them that yes they are unloveable, then we have a problem.

                                                                                        • sxyuan 1 month ago
                                                                                          If it's about gathering our thoughts, there's meditation. Or journaling. Or prayer. Some have even claimed that there is an all-powerful being listening to you on the other side with that last one. (One might even call it an intelligence, just not an artificial one.)

                                                                                          There's also talking to a friend. Sure, they could also steer you wrong, but at least they won't be impersonating a therapist, and they won't be doing it to try to please their investors.

                                                                                        • singpolyma3 1 month ago
                                                                                          I mean, in most forms of professional therapy the therapist shouldn't say much at all and certainly shouldn't give advice. The point is to have someone listen in a way that feels like they are really listening.
                                                                                          • caseyy 1 month ago
                                                                                            > most forms of professional therapy the therapist shouldn't say much at all

                                                                                            This is very untrue. Here is a list of psychotherapy modalities: https://en.wikipedia.org/wiki/List_of_psychotherapies. In most (almost every) modalities, the therapist provides an intervention and offers advice (by definition: guidance, recommendations).

                                                                                            There is Carl Rogers' client-centered therapy, non-directive supportive therapy, and that's it for low-intervention modalities off the top of my head. Two out of over a hundred. Hardly "most" at all.

                                                                                            • sarchertech 1 month ago
                                                                                              Therapists don’t give advice in that they won’t tell you whether you should quit your job, or should you propose to your girlfriend. They will definitely give you basic guidance and confirm that your fears are overblown.

                                                                                              They will not under any circumstances tell you that “yes you are correct, Billy would be more likely to love you if you drop 30 more pounds by throwing up after eating”, but an LLM will if it goes off script.

                                                                                          • zahlman 1 month ago
                                                                                            In a field like personal therapy, giving good advice 80% of the time is nowhere near 80% benefit on net.
                                                                                          • drdunce 1 month ago
                                                                                             As with many things in relation to technology, perhaps we simply need informed user choice and responsible deployment. We could start by not using "Artificial Intelligence" - that makes it sound like some infallible omniscient being with endless compassion and wisdom that can always be trusted. It's not intelligent, it's a large language model, a convoluted next word prediction machine. It's a fun trick, but shouldn't be trusted with Python code, let alone life advice. Armed with that simple bit of information, the user is free to choose how they use it for help, whether it be medical, legal, work etc.
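
                                                                                             To make the "next word prediction machine" point concrete, here's a rough sketch of the core loop using GPT-2 via Hugging Face transformers. GPT-2 is just a small public example; production chatbots add sampling, instruction tuning and safety layers on top of exactly this mechanism:

                                                                                               import torch
                                                                                               from transformers import AutoModelForCausalLM, AutoTokenizer

                                                                                               # GPT-2 used purely as a small public example of the mechanism.
                                                                                               tokenizer = AutoTokenizer.from_pretrained("gpt2")
                                                                                               model = AutoModelForCausalLM.from_pretrained("gpt2")
                                                                                               model.eval()

                                                                                               input_ids = tokenizer("I feel anxious because", return_tensors="pt").input_ids

                                                                                               with torch.no_grad():
                                                                                                   for _ in range(20):
                                                                                                       logits = model(input_ids).logits  # a score for every token in the vocabulary
                                                                                                       next_id = logits[0, -1].argmax()  # greedy: take the single most likely next token
                                                                                                       input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

                                                                                               print(tokenizer.decode(input_ids[0]))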
                                                                                            • trial3 1 month ago
                                                                                              > simply need informed user choice and responsible deployment

                                                                                              the problem is that "responsible deployment" feels extremely at odds with, say, needing to justify a $300B valuation

                                                                                              • EA-3167 1 month ago
                                                                                                What we need is the same thing we've needed for a long time now, ethical standards applied across the whole industry in the same way that many other professions are regulated. If civil engineers acted the way that software engineers routinely do, they'd never work again, and rightly so.
                                                                                              • davidcbc 1 month ago
                                                                                                > As with many things in relation to technology, perhaps we simply need informed user choice and responsible deployment.

                                                                                                The average person will never have the required experience to make an informed decision on the efficacy and safety of this.

                                                                                                • singpolyma3 1 month ago
                                                                                                  To be fair a therapist shouldn't be giving you advice either
                                                                                                • HPsquared 1 month ago
                                                                                                  Sometimes an "unsafe" option is better than the alternative of nothing at all.
                                                                                                  • tredre3 1 month ago
                                                                                                    Sometimes an "unsafe" option is not better than the alternative of nothing at all.
                                                                                                    • Y_Y 1 month ago
                                                                                                      Sounds like we need more information than safe/not safe to make a sensible decision!

                                                                                                      This is something that bugs me about medical ethics, that it's more important not to cause any harm than it is to prevent any.

                                                                                                      • bildung 1 month ago
                                                                                                        If you look at the horrible things that happened in medical history, e.g. https://en.wikipedia.org/wiki/Tuskegee_Syphilis_Study, it's pretty clear why the ethics care more about not causing harm...
                                                                                                        • jrapdx3 1 month ago
                                                                                                          Actually, concern about doing harm is central to current concepts of medical ethics. The idea may be ancient but still highly relevant. Ethics declare a primary obligation of healers is "above all do no harm".

                                                                                                          That of course doesn't exclude doing good, being helpful, using skills and technologies to produce favorable outcomes. It does mean that healers must exercise due vigilance for unintended adverse consequences of therapies, let alone knowingly providing services that cause harm.

                                                                                                          The problem with "safe/not safe" designation is simply that these states are more often than not indistinct. Or put another way, it depends on subtle contextual attributes that are hard to discern. Furthermore individual differences can make it difficult to predict safety of applying a procedure.

                                                                                                          As a result healers should be cautious in approaching problems. Definitely prevention is better than cure, it's simply that relatively little is known about preventing burdensome conditions. Exercising what is known is a high priority.

                                                                                                      • caseyy 1 month ago
                                                                                                        And often, the "unsafe" option is severely worse than nothing at all: https://www.rollingstone.com/culture/culture-features/ai-spi...
                                                                                                      • citizenkeen 1 month ago
                                                                                                        Look, make the companies offering AI therapy carry medical malpractice insurance at the same risk as human therapists. If they tell someone to go off their meds, let a jury see those transcripts and see if the company still thinks that’s profitable and feasible.
                                                                                                        • pavel_lishin 1 month ago
                                                                                                          A recent Garbage Day newsletter spoke about this as well, worth reading: https://www.garbageday.email/p/this-is-what-chatgpt-is-actua...
                                                                                                          • j45 1 month ago
                                                                                                              Where the experts are the ones whose incomes would be threatened, there is likely some merit in what they're saying, but some digital literacy skills are also needed to weigh it.

                                                                                                            I don't know that AI "advisory" chatbots can replace humans.

                                                                                                            Could they help an individual organize their thoughts for more productive time with professionals? Probably.

                                                                                                            Could such tech help individuals learn about different terminology, their usage and how to think about it? Probably.

                                                                                                            Could there be a net result of spending fewer hours (and less cost, where that applies) for the same progress? And of getting further along toward improvement with that advice?

                                                                                                            Maybe the baseline of advisory expertise in any field sits closer to the beginner stage than we tend to assume.

                                                                                                            • codr7 1 month ago
                                                                                                              You see the same thing with coding. People with actual experience and enough perspective to see the problems are ignored because obviously they're just afraid to lose their jobs. Which is not true; it's not even on my list of things to worry about.

                                                                                                              Experience matters, that's something we seem to be forgetting fast.

                                                                                                              • j45 1 month ago
                                                                                                                Sometimes it’s leadership and management's job to not understand the problem.
                                                                                                              • arvinsim 1 month ago
                                                                                                                It will be hard to fight against the tendency of people to use LLMs as therapists when LLMs are relatively free compared to paying up for a human therapist.
                                                                                                                • rdm_blackhole 1 month ago
                                                                                                                  I think the core of the problem here is that the people who turn to chat bots for therapy sometimes have no choice as getting access to a human therapist is simply not possible without spending a lot of money or waiting 6 months before a spot becomes available.

                                                                                                                  Which begs the question, why do so many people currently need therapy? Is it social media? Economic despair? Or a combination of factors?

                                                                                                                  • HaZeust 1 month ago
                                                                                                                    I always liked the theory that we're living in an age where all of our needs can be reasonably met, and we now have enough time to think - in general. We're not working 12 hour days on a field, we're not stalking prey for 5 miles, we have adequate time in our day-to-day to think about things - and ponder - and reflect; and the ability to do so leads to thoughts and epiphanies in people that therapy helps with. We also have more information at our disposal than ever, and can see new perspectives and ideas to combat and cope with - that one previously didn't need to consider or encounter.

                                                                                                                    We've also stigmatized a lot of the things that folks previously used to cope (tobacco, alcohol), and have loosened our stigma on mental health and the management thereof.

                                                                                                                    • mrweasel 1 month ago
                                                                                                                      > we have adequate time in our day-to-day to think about things - and ponder - and reflect;

                                                                                                                      I'd disagree. If you worked in the fields, you had plenty of time to think. We fill out every waking hour of our day, leaving no time to ponder or reflect. Many can't even find time to work out, and if they do they listen to a podcast during their workout. That's why so many ideas come to us in the shower: it's the only place left where we don't fill our minds with impressions.

                                                                                                                      • 52-6F-62 1 month ago
                                                                                                                        Indeed. I had way more time to think working a factory line than I have had in any other white collar role.
                                                                                                                        • squigz 1 month ago
                                                                                                                          I think GP means more that we generally don't have to worry about survival on a day to day (or seasonal) basis anymore, so we have more time to think about bigger issues, like politics or social issues - which I agree with, personally.
                                                                                                                        • 90s_dev 1 month ago
                                                                                                                          > We're not working 12 hour days on a field, we're not stalking prey for 5 miles, we have adequate time in our day-to-day to think about things

                                                                                                                          There's so much history showing that people have always been able to think like this, and so much written proof that they did, in much the same proportion as they do today.

                                                                                                                          Besides, in 12 hour days on a field, do you not have another 4 hours to relax and think? While stalking prey for 5 miles, is it not quiet enough for you to reflect on what you're doing and why?

                                                                                                                          I do think you're onto something though when you say it's related to our material needs all being relatively met. It seems that's correlational and maybe causal.

                                                                                                                          • johnisgood 1 month ago
                                                                                                                            > We're not working 12 hour days on a field

                                                                                                                            Actually, around here, you are lucky to find a job that is NOT 12 hours a shift.

                                                                                                                          • mrweasel 1 month ago
                                                                                                                            Probably a combination of things, I wouldn't pretend to know, but I have my theories. For men, one half-baked thought I've been having revolves around social circles, friends and places outside work or home. I'm a member of a "men only" sports club (we have a few exceptions due to a special program, but mostly it's men only). One of the older gentlemen, probably in his early 80s, made the comment: "It's important for men to socialise with other men, without women. Young and old men have a lot in common, and have a lot to talk about. An 18 year old woman and an 80 year old man have very little in the way of shared interests or concerns."

                                                                                                                            What I notice is that the old members keep the younger members engaged socially, teach them skills and give them access to their extensive network of friends, family, previous (or current) co-workers, bosses, managers. They give advice, teach how to behave and so on. The younger members help out with moving, help with technology, call an ISP, drive others home or to the hospital, and help maintain the facilities.

                                                                                                                            Regardless of age, there's always some dude you can talk to, or who knows who you need to talk to, and sometimes there's even someone who knows how to make your problems go away or will take you in if need be.

                                                                                                                            A former colleague had something similar, a complete ready-to-go support network in his old-boys football team, ready to support him in any way they could when he started his own software company.

                                                                                                                            The problem: this is something like 250 guys. What about the rest? Everyone needs a support network. If you're alone, or your family isn't the best, or you only have a few superficial friends, if any, then where do you go? Maybe the people around you aren't equipped to help you with your problems; not everyone is, and some have their own issues. The safe spaces are mostly gone.

                                                                                                                            We can't even start up support networks, because the strongest have no reason to go, so we risk creating networks of people dragging each other down. The sports club works because its members are drawn from a wider part of society.

                                                                                                                            From the article:

                                                                                                                            > > Meta said its AIs carry a disclaimer that “indicates the responses are generated by AI to help people understand their limitations”.

                                                                                                                            That's a problem, because those most likely to turn to an LLM for mental support don't understand the limitations. They need strong people to support and guide them, and maybe tell them that talking to a probability engine isn't the smartest choice, and take them on a walk instead.

                                                                                                                            • layer8 1 month ago
                                                                                                                              How do you figure that it’s “currently”, and the need hasn’t always been there more or less?
                                                                                                                            • miki123211 1 month ago
                                                                                                                              So here's my nuanced take on this:

                                                                                                                              1. The effects of AI should not be compared with traditional therapy, instead, they should be compared with receiving no therapy. There are many people who can't get therapy, for many reasons, mostly financial or familial (domestic abuse / controlling parents). Even for those who can get it, their therapist isn't infinitely flexible when it comes to time and usually requires appointments, which doesn't help with immediate problems like "my girlfriend just dumped me" or "my boss just berated me in front of my team for something I worked 16-hour days on."

                                                                                                                              AI will increase the amount of therapy that exists in the world, probably by orders of magnitude, just like the record player increased the amount of music listening or the jet plane increased the amount of intercontinental transportation.

                                                                                                                              The right questions to ask here are more like "how many suicides would an AI therapist prevent, compared to the number of suicides it would induce?", or "are all human therapists licensed in country / state X more competent than a good AI?"

                                                                                                                              2. When a person dies of suicide, their cause of death is, and will always be, listed as "suicide", not "AI overregulation leading to lack of access to therapy." In contrast, if somebody dies because of receiving bad AI advice, that advice will ultimately be attributed as the cause of their death. Statistics will be very misleading here and won't ever show the whole picture, because counting deaths caused by AI is inherently a lot easier than counting the deaths it prevented (or didn't prevent).

                                                                                                                              It is much safer for companies and governments to prohibit AI therapy, as then they won't have to deal with the lawsuits and the angry public demanding that they do something about the new problem. This is true even if AI is net beneficial because of the increased access to therapy.

                                                                                                                              3. Because of how AI models work, one model / company will handle many more patients than any single human therapist. This means you need to rethink how you punish mistakes. Even if you have a model that is 10x better than an average human, let's say 1 unnecessary suicide per 100,000 patients instead of 1 per 10,000, imprisonment after a single mistake may be a suitable punishment for humans, but is not one in the API space, as even a much better model is bound to cause a mistake at some point. (A rough arithmetic sketch of this scale effect follows at the end of this comment.)

                                                                                                                              4. Another right question to ask is "how does the effectiveness of AI at therapy in 2025 compare to the effectiveness of AI at therapy in 2023?" Where it's at right now doesn't matter; what matters is where it's going. If it continues at the current rate of improvement, when, if ever, will it surpass an average (or a particularly bad) licensed human therapist?

                                                                                                                              5. And if this happens and AI genuinely becomes better, are we sure that legislators and therapists have the right incentives to accept that reality? If we pass a law prohibiting AI therapy now, are we sure we have the mechanisms to get it repealed if AI ever gets good enough, considering points 1-3? If the extrapolated trajectory is promising enough (and I have not run the necessary research, I have no idea if it is or not), maybe it's better to let a few people suffer in the next few years due to bad advice, instead of having a lot of people suffer forever due to overzealous regulation?
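                                                                                                                              To put rough numbers on the scale effect in point 3, here is a minimal back-of-the-envelope sketch; the per-patient rates are the hypothetical figures above, and the caseload sizes are my own assumptions, not data from any study:

                                                                                                                                # Hypothetical failure rates from point 3 above, plus assumed caseloads -- not real statistics.
                                                                                                                                human_rate = 1 / 10_000      # assumed worst-case outcomes per patient, average human therapist
                                                                                                                                model_rate = 1 / 100_000     # assumed rate for a model that is 10x better
                                                                                                                                human_caseload = 40          # patients one human therapist might carry (assumption)
                                                                                                                                model_caseload = 1_000_000   # patients one widely deployed model might serve (assumption)

                                                                                                                                print(f"one human therapist: {human_rate * human_caseload:.4f} expected failures")
                                                                                                                                print(f"one deployed model:  {model_rate * model_caseload:.0f} expected failures")
                                                                                                                                # Even at a 10x lower per-patient rate, the single model accrues far more absolute
                                                                                                                                # failures than any individual human, so per-mistake punishments designed for
                                                                                                                                # individuals don't transfer cleanly to a model serving millions.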

                                                                                                                              • deadbabe 1 month ago
                                                                                                                                I used ChatGPT for therapy and it seems fine, I feel like it helped, and I have plenty of things fucked up about myself. Can’t be much worse than other forms of “therapy” that people chase.
                                                                                                                                • bigmattystyles 1 month ago
                                                                                                                                  The problem is they are cheap and immediately available.
                                                                                                                                  • distalx 1 month ago
                                                                                                                                    It just feels a bit uncertain trusting our feelings to AI we don't truly understand.
                                                                                                                                    • jobigoud 1 month ago
                                                                                                                                      You don't truly understand the human therapist either.
                                                                                                                                      • codr7 1 month ago
                                                                                                                                        You do, however, have a hell of a lot more in common with them than with a profit-driven algorithm whose own creators have no clue how it really works.
                                                                                                                                    • 52-6F-62 1 month ago
                                                                                                                                      They aren’t truly cheap
                                                                                                                                      • codr7 1 month ago
                                                                                                                                        Not even close, it's the most expensive waste of resources I can think of atm.

                                                                                                                                        We used to worry about Bitcoin, now Google is funding nuclear plants.

                                                                                                                                        • harvey9 1 month ago
                                                                                                                                          Far cheaper than a human therapist, ignoring that they are entirely different things of course.
                                                                                                                                      • nickdothutton 1 month ago
                                                                                                                                        Perhaps experts can somehow moderate or contribute training data that is awarded higher weights. Don't let perfect be the enemy of good.
                                                                                                                                        • more_corn 1 month ago
                                                                                                                                          But it’s probably better than no therapy at all.
                                                                                                                                          • taormina 1 month ago
                                                                                                                                            The study is versus no therapy at all.
                                                                                                                                            • cdrini 1 month ago
                                                                                                                                              Study? I didn't see a link to a study in the article.
                                                                                                                                          • emptyfile 1 month ago
                                                                                                                                            The idea of people talking to LLMs in this way genuinely disturbs me.
                                                                                                                                            • kurtis_reed 1 month ago
                                                                                                                                              [flagged]
                                                                                                                                              • chownie 1 month ago
                                                                                                                                                In the same way a doctor might step in on your sick plan to give yourself a piercing with your keychain, yeah. They probably should be saying it.
                                                                                                                                              • booleandilemma 1 month ago
                                                                                                                                                [flagged]
                                                                                                                                                • distalx 1 month ago
                                                                                                                                                  Forget safety for a moment, Zuckerberg's push for Meta AI emotional support looks like a clear play for data and control.
                                                                                                                                                    • lurk2 1 month ago
                                                                                                                                                      Not what the article is about.
                                                                                                                                                    • julienreszka 1 month ago
                                                                                                                                                      [flagged]
                                                                                                                                                      • phreno 1 month ago
                                                                                                                                                        [flagged]
                                                                                                                                                        • ilaksh 1 month ago
                                                                                                                                                          [flagged]
                                                                                                                                                          • davidcbc 1 month ago
                                                                                                                                                            Spreading this bullshit is actively dangerous because someone might believe it and try to rely on a chatbot for their mental health.
                                                                                                                                                            • simplyinfinity 1 month ago
                                                                                                                                              Even today, leading LLMs like Claude 3.7 and ChatGPT 4 take your questions as "you've made a mistake, fix it" instead of answering the question. People consider a much broader context of the situation, your body language, facial expressions, and can come up with unusual solutions to specific situations and explore vastly more things than an LLM.

                                                                                                                                              And the thing when it comes to therapy is, a real therapist doesn't have to be prompted and can auto-adjust to you without your explicit say-so. They're not overly affirming, they can stop you from doing things and say no to you. LLMs are the opposite of that.

                                                                                                                                              Also, as a layperson, how do I know the right prompts for <llm of the week> to work correctly?

                                                                                                                                                              Don't get me wrong, i would love for AI to be on par or better than a real life therapist, but we're not there yet, and i would advise everyone against using AI for therapy.

                                                                                                                                                              • sho_hn 1 month ago
                                                                                                                                                Even if the tech was there, for appropriate medical use those models would also have to be strenuously tested and certified, so that a known-good version is in use. Cf. the recent "personality" changes in a ChatGPT upgrade. Right now, none of these tools is regulated sufficiently to set safe standards there.
                                                                                                                                                                • ilaksh 1 month ago
                                                                                                                                                                  I am not talking about a layperson building their own therapist agent from scratch. I'm talking about an expert AI engineer and therapist working together and taking their time to create them. Claude 3.7 will not act in a default way given appropriate instructions. Claude 3.7 can absolutely come up with unusual solutions. Claude 3.7 can absolutely tell you "no".
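                                                                                                                                                  For what it's worth, here is a minimal sketch of what such expert-written instructions wired into a model could look like; the system prompt, escalation wording, and model id are illustrative assumptions on my part, not a vetted clinical protocol or anything from the article:

                                                                                                                                                    import anthropic  # assumes the Anthropic Python SDK and an ANTHROPIC_API_KEY in the environment

                                                                                                                                                    # Hypothetical instructions a therapist/engineer pair might iterate on together.
                                                                                                                                                    SYSTEM_PROMPT = """You are a CBT-style journaling assistant, not a therapist.
                                                                                                                                                    - Help the user name the situation, the automatic thought, and the feeling behind it.
                                                                                                                                                    - Ask one question at a time; do not be reflexively agreeable, and say no when warranted.
                                                                                                                                                    - If the user mentions self-harm, psychosis, or medication changes, stop and insist
                                                                                                                                                      they contact a licensed professional or a local crisis line."""

                                                                                                                                                    client = anthropic.Anthropic()

                                                                                                                                                    def journal_turn(user_text: str) -> str:
                                                                                                                                                        # Single-turn call; a real agent would keep conversation history and journaling state.
                                                                                                                                                        response = client.messages.create(
                                                                                                                                                            model="claude-3-7-sonnet-20250219",  # assumed model id, adjust to whatever is current
                                                                                                                                                            max_tokens=512,
                                                                                                                                                            system=SYSTEM_PROMPT,
                                                                                                                                                            messages=[{"role": "user", "content": user_text}],
                                                                                                                                                        )
                                                                                                                                                        return response.content[0].text

                                                                                                                                                    print(journal_turn("My boss berated me in front of the team and I can't stop replaying it."))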
                                                                                                                                                                  • creata 1 month ago
                                                                                                                                                                    Have you seen this scenario ("an expert AI engineer and therapist working together" to create a good therapy bot) actually happen, or are you just confident that it's doable?
                                                                                                                                                                • sho_hn 1 month ago
                                                                                                                                                                  > Leading LLMs in 2025 can absolutely do certain core aspects of cognitive behavioral therapy very effectively given the right prompts and framework and things like journaling tools for the user.

                                                                                                                                                                  What makes you qualified to assert this?

                                                                                                                                                                  (Now, I dislike arguments from authority, but as an engineer in the area of life/safety-critical systems I've also learned the importance of humility.)

                                                                                                                                                                  • ilaksh 1 month ago
                                                                                                                                                    If they are an average person who wants to talk something out and get practical advice about issues, it is generally not safety critical, and LLMs can help them.

                                                                                                                                                                    If they are mentally ill, LLMs cannot help them.

                                                                                                                                                                    • stefan_ 1 month ago
                                                                                                                                                                      I see, your confidence stems from "I made it the fuck up"?

                                                                                                                                                                      I don't know man, at least the people posting this stuff on LinkedIn generally know its nonsense. They are not drinking the kool-aid, they are trying to get into the business of making it.

                                                                                                                                                                  • andy99 1 month ago
                                                                                                                                                    The failure modes from 2023 are identical to those today. I agree with the now-deleted post that there has been essentially no progress. Benchmark scores (if you think they are a relevant proxy for anything) have obviously increased, but (for example) from 50% to 90% (probably less drastically), not to the 99% to 99.999% you'd need for real assurance that a widely used system won't make mistakes (rough numbers below).

                                                                                                                                                                    Like in 2023, everything is still a demo, there's nothing that could be considered reliable.
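                                                                                                                                                    As a rough illustration of why the 90%-vs-99.999% gap matters (the per-interaction success rates and session count here are made-up numbers, not benchmark results):

                                                                                                                                                      # Illustrative per-interaction success rates and an assumed 50-interaction course of use.
                                                                                                                                                      sessions = 50
                                                                                                                                                      for p_ok in (0.90, 0.99, 0.99999):
                                                                                                                                                          p_all_ok = p_ok ** sessions
                                                                                                                                                          print(f"per-turn success {p_ok}: chance of {sessions} flawless interactions = {p_all_ok:.3%}")
                                                                                                                                                      # 0.90^50 is about half a percent, while 0.99999^50 is ~99.95% -- benchmark-style
                                                                                                                                                      # jumps from 50% to 90% don't provide the assurance a widely used system would need.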

                                                                                                                                                                    • thih9 1 month ago
                                                                                                                                                                      > Leading LLMs in 2025 can absolutely do certain core aspects of cognitive behavioral therapy very effectively given the right prompts and framework and things like journaling tools for the user.

                                                                                                                                                                      But when the situation gets more complex or simply a bit unexpected, would that model reliably recognize it lacks knowledge and escalate to a specialist? Or would it still hallucinate instead?

                                                                                                                                                                      • ilaksh 1 month ago
                                                                                                                                                                        SOTA models can actually handle complexity. Most of the discussions I have had with my therapy agent do have a lot of layers. What they can't handle is someone who is mentally ill and may need medication or direct supervision. But they can absolutely recognize mental illness if it is evident in the text entered by the user and insist the user find a medical professional or help them search for one.
                                                                                                                                                                      • timewizard 1 month ago
                                                                                                                                                                        > 2023 is ancient history in the LLM space.

                                                                                                                                                                        Okay, what specifically has improved in that time, which would allay the doctors specific concerns?

                                                                                                                                                                        > do certain core aspects

                                                                                                                                                                        And not others? Is there a delineated list of such failings in the current set of products?

                                                                                                                                                                        > given the right prompts and framework

                                                                                                                                                                        A flamethrower is perfectly safe given the right training and support. In the wrong hands it's likely to be a complete and total disaster in record short time.

                                                                                                                                                                        > a weak prompt that was not written by a subject matter expert

                                                                                                                                                                        So how do end users ever get to use a tool like this?

                                                                                                                                                                        • ilaksh 1 month ago
                                                                                                                                                          The biggest thing that has improved is the intelligence of the models. The leading models are much more intelligent and robust. Still brittle in some ways, but totally capable of giving CBT advice.

                                                                                                                                                                          The same way end users ever get to use a tool. Open source or an online service, for example.

                                                                                                                                                                          • computerthings 1 month ago
                                                                                                                                                                            [dead]
                                                                                                                                                                          • bitwize 1 month ago
                                                                                                                                                                            I dunno, man, M-x doctor made me take a real hard long look at my life.
                                                                                                                                                                            • Buttons840 1 month ago
                                                                                                                                                                              Interacting with a LLM (especially one running locally) can do something a therapist cannot--provide an honest interaction outside the capitalist framework. The AI has its limitations, but it is an entity just being itself doing the best it can, without expecting anything in return.
                                                                                                                                                                              • kurthr 1 month ago
                                                                                                                                                                                The word "can" is doing a lot of work here. The idea that any of the current "open weights" LLMs are outside the capitalist framework stretches the bounds of credulity. Choose the least capitalist of: OpenAI, Google, Meta, Anthropic, DeepSeek, Alibaba.

                                                                                                                                                                                You trust Anthropic that much?

                                                                                                                                                                                • Buttons840 1 month ago
                                                                                                                                                                                  I said the interaction exists outside of any financial transaction.

                                                                                                                                                                                  Many dogs are produced by profit motive, but their owners can have interactions with the dog that are not about profit.

                                                                                                                                                                                  • andy99 1 month ago
                                                                                                                                                                    Dogs aren't RLHF'd and fine-tuned to enforce behaviors designed by companies.
                                                                                                                                                                                  • trod1234 1 month ago
                                                                                                                                                                                    With respect, I think you should probably re-examine the meaning of the words you use here. You use words in a way that doesn't meet their established definition.

                                                                                                                                                                                    It would meet objective definition if you replaced 'capitalist' with 'socialist', which may have been what you meant, but that's merely an observation I make, not what you actually say.

                                                                                                                                                                    The entire paragraph is quite contradictory and lacks truth, and by extension it is entirely unclear what you mean; it appears you are confused when you use words and make statements that can't meet their definitions.

                                                                                                                                                                                    You may want to clarify what you mean.

                                                                                                                                                                                    In order for it to be 'capitalist' true to its definition, you need to be able to achieve profit with it in purchasing power, but the outcomes of the entire business lifecycle resulting from this, taken as a whole, instead destroy that ability for everyone.

                                                                                                                                                                                    The companies involved didn't start on their merits seeking profit, they were funded by non-reserve debt issuance or money-printing which is the state picking winners and losers.

                                                                                                                                                                                    If they were capitalist they wouldn't have released model weights to the public. The only reason you would free a resource like that is if your goal was something not profit-driven (i.e. contagion towards chaos to justify control or succinctly totalism).

                                                                                                                                                                                • tuyguntn 1 month ago
                                                                                                                                                                  I think you are right. On one hand we have human beings with their own emotions, and based on those emotions they might negatively impact the emotions of others;

                                                                                                                                                                  on the other hand we have a probabilistic/non-deterministic model, which can give 5 different pieces of advice if you ask 5 times.

                                                                                                                                                                  So who do you trust? Until the determinism of LLMs improves and we can debug/fix them while keeping their behavior stable across new fixes, I would rely on human therapists.

                                                                                                                                                                                  • delichon 1 month ago
                                                                                                                                                                                    How is it possible for a statistical model calculated primarily from the market outputs of a capitalist society to provide an interaction outside of the capitalist framework? That's like claiming to have a mirror that does not reflect your flaws.
                                                                                                                                                                                    • NitpickLawyer 1 month ago
                                                                                                                                                                                      If I understand what they're saying, the interactions you have with the model are not driven by "maximising eyeballs/time/purchases/etc". You get to role-play inside a context window, and if it went in a direction you don't like you reset and start over again. But during those interactions, you control whatever happens, not some 3rd party that may have ulterior motives.
                                                                                                                                                                                      • Draiken 1 month ago
                                                                                                                                                                                        > the model is not driven by "maximising eyeballs/time/purchases/etc".

                                                                                                                                                                                        Do you have access to all the training data and the reinforcement learning they went through? All the system prompts?

                                                                                                                                                                                        I find it impossible for a company seeking profit to not build its AI to maximize what they want.

                                                                                                                                                                                        Interact with a model that's not tuned and you'll see the stark difference.

                                                                                                                                                                                        The matter of fact is that we have no idea what we're interacting with inside that role-play session.

                                                                                                                                                                                      • Buttons840 1 month ago
                                                                                                                                                                        The same way an interaction with a purebred dog can be. The dog may have come from a capitalistic system (dogs are bred for money, unfortunately), but your personal interactions with the dog are not about money.

                                                                                                                                                                                        I've never spoken to a therapist without paying $150 an hour up front. They were helpful, but they were never "in my life"--just a transaction--a worth while transaction, but still a transaction.

                                                                                                                                                                                        • germinalphrase 1 month ago
                                                                                                                                                                          It’s also very common for people to get therapy free or at minimal cost (<$50) when utilizing insurance. Long-term relationships (off and on) are also quite common. Whether or not the therapist takes insurance is a choice, and it's true that they almost always make more by requiring cash payment instead.
                                                                                                                                                                                          • amanaplanacanal 1 month ago
                                                                                                                                                                            The dog's intelligence and personality were bred long before our capitalist system existed, unlike whatever nonsense an LLM is trying to sell you.
                                                                                                                                                                                        • rochav 1 month ago
                                                                                                                                                                          I think operating under the assumption that AI is an entity being itself, and comparing it to dogs, is not really accurate. Entities (not as in legal, but in the general sense) are beings, living beings that are capable of emotion, of thought and will, are they not? Whether dogs are that could be up for debate (I think they are, personally), but language models just are not. The very notion that they could be any type of entity is directly tied to the value the companies that created them have; it is part of the hype and the capitalist system, and I, again personally, don't think anyone could ever turn that into something that somehow ends up against capitalism just because the AI can't directly want something in return from you.

                                                                                                                                                                          I understand the sentiment and the distrust of the mental health care apparatus: it is expensive, it is tied to capitalism, it depends on trusting someone who is being paid to influence your life in a very personal way. But it's still better than trusting that to the judgment of a conversational simulation that is incapable of it, incapable of knowing you and observing you (not just what you write, but how you physically react to situations or to the retelling, like tapping your foot or disengaging) and understanding nuance. Most people would be better served talking to friends (or doing their best to make friends they can trust if they don't have any), and I would argue that people supporting people who are struggling is one way of truly opposing capitalism.
                                                                                                                                                                                          • Buttons840 1 month ago
                                                                                                                                                                                            Feel free to substitute in whatever word you think matches my intent best then. You seem to understand my intent well enough--I'm not interested in discussing the definition of individual words though.