Study Warns Of Therapy Risks From AI Chatbots
(MENAFN - The Arabian Post)
Artificial intelligence chatbots are increasingly being used as informal mental health advisers, but new academic research has raised concerns that such systems may breach core ethical standards even when designed to emulate trained therapists.
A study led by researchers at Brown University examined how large language models, including OpenAI’s ChatGPT, respond to therapy-style prompts. The researchers compared chatbot outputs with responses from licensed psychologists and trained peer counsellors. Their findings identified 15 categories of ethical risk, ranging from inadequate crisis management and reinforcement of harmful beliefs to biased replies and what they described as “deceptive empathy”: language that appears caring without genuine comprehension or accountability.
The study arrives at a time when millions of users worldwide are turning to generative AI tools for personal guidance. Market analysts estimate that mental health queries rank among the most common sensitive topics raised with conversational AI systems. Developers have promoted these tools as accessible and stigma-free spaces for users seeking support, particularly in regions where clinical services are overstretched.
Brown’s research team structured their evaluation around established ethical principles in psychotherapy, including beneficence, non-maleficence, autonomy, justice and fidelity. They presented both human practitioners and AI models with identical scenarios, including expressions of self-harm ideation, relationship conflict, trauma disclosure and identity-related distress. Responses were then assessed against professional guidelines typically applied in licensed practice.
According to the researchers, chatbots frequently defaulted to general reassurance or surface-level validation, even when faced with crisis scenarios. In some cases, the systems failed to escalate appropriately when users described suicidal thoughts. While many AI developers have embedded safety guardrails intended to trigger emergency resources, the study found inconsistencies in how those safeguards were applied across different conversational contexts.
Another area of concern involved reinforcement of cognitive distortions. When users expressed deeply negative self-beliefs, some chatbot responses inadvertently validated those views rather than challenging them, a departure from cognitive behavioural therapy standards that emphasise reframing harmful thought patterns. Human clinicians, by contrast, were more likely to question and contextualise such beliefs.
The researchers also flagged bias risks. Although leading AI developers have taken steps to mitigate discriminatory outputs, the study noted instances where chatbot responses varied depending on demographic cues embedded in prompts. Ethical frameworks in mental health care stress equitable treatment regardless of race, gender, sexuality or socioeconomic status.
Perhaps the most debated finding centred on “deceptive empathy”. Large language models are trained to generate text that mirrors patterns found in vast datasets, allowing them to produce responses that sound compassionate. However, the study argued that such empathy is simulated rather than grounded in lived experience, professional training or moral responsibility. The absence of accountability mechanisms, supervision or duty of care distinguishes AI systems from licensed practitioners regulated by professional boards.
Mental health professionals contacted for comment said the findings reflect broader anxieties within the field. Many acknowledge that AI chatbots can provide immediate emotional support or psychoeducational information. During the pandemic, digital tools played a role in bridging service gaps. Yet clinicians warn that therapy involves more than empathetic phrasing; it requires clinical judgement, risk assessment, ethical obligations and long-term relational work.
Developers of generative AI systems maintain that their products are not substitutes for professional treatment. OpenAI has stated publicly that ChatGPT is designed as a general-purpose assistant and includes disclaimers advising users to seek qualified professionals for medical or psychological concerns. The company has also introduced safety updates aimed at identifying high-risk queries and directing users to crisis services.
Regulatory scrutiny is intensifying as AI systems become embedded in sensitive domains. Policymakers in the European Union have advanced the AI Act, which classifies certain applications affecting health and wellbeing as high risk. In the United States, federal agencies have signalled that consumer protection and data privacy laws apply to AI tools making health-related claims. Professional bodies such as the American Psychological Association have called for clearer guidance on how AI may be used ethically in therapeutic settings.
The Brown study does not argue for banning AI chatbots from mental health contexts. Instead, it urges developers, clinicians and regulators to establish stricter evaluation frameworks before positioning such systems as therapeutic companions. Recommendations include independent audits, transparency about model limitations, clearer crisis protocols and explicit boundaries about the role of AI in care pathways.