ChatGPT is banned in this US state along with other AI bots: The reason will make you rethink AI in healthcare

The rise of AI in healthcare is inevitable, but its role must be clearly defined and carefully regulated. Illinois has taken a groundbreaking step by banning AI platforms like ChatGPT from delivering therapy or mental health assessments without supervision by licensed professionals. Signed into law by Governor JB Pritzker, the legislation addresses growing ethical and safety concerns around AI's expanding role in mental healthcare. While AI tools offer efficiency and accessibility, they lack the empathy, nuanced understanding, and accountability essential for sensitive mental health support. The law ensures that treatment plans and emotional evaluations remain firmly in human hands, protecting vulnerable individuals from potential harm caused by unregulated AI advice. Illinois' move sets a precedent for responsible AI use in healthcare, emphasising that technology should assist, not replace, qualified mental health professionals in delivering compassionate, effective care.

ChatGPT’s role in mental health just changed: Here’s what the new law says

Under the newly introduced Wellness and Oversight for Psychological Resources Act, AI chatbots and platforms are strictly prohibited from:

  • Creating or recommending treatment plans
  • Making mental health evaluations
  • Offering counseling or therapy services

Unless these actions are supervised by a licensed mental health professional, they are illegal under state law. Violations carry penalties of up to $10,000 each, enforced by the Illinois Department of Financial and Professional Regulation (IDFPR). The law is designed to ensure that human expertise, emotional intelligence, and ethical standards remain central to the therapy process.

How states are setting rules for AI in mental health care, from Nevada to New York

With this law, Illinois becomes a trailblazer in responsible AI governance. By defining what AI can and cannot do in healthcare, the state sets a critical precedent for the rest of the nation. The law:

  • Builds public trust in mental health systems.
  • Protects vulnerable populations from unverified AI advice.
  • Clarifies responsibility in case of harm or error.

Rather than stifle technology, this law ensures that AI development proceeds within ethical boundaries, especially when human lives and emotions are on the line. Illinois is not the only state moving to regulate AI's role in therapy. Other states are joining the effort to draw clear lines between acceptable AI use and areas requiring human judgment.

  • Nevada: In June 2025, the state passed a law banning AI from providing therapeutic services in schools, protecting children from unregulated mental health advice.
  • Utah: Enacted a regulation mandating that mental health chatbots clearly state they are not human, and prohibiting the use of users' emotional data for targeted ads.
  • New York: Starting November 5, 2025, AI tools must redirect users expressing suicidal thoughts to licensed human crisis professionals (a brief code sketch of such a redirect follows this list).
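
To make New York's requirement concrete, here is a minimal Python sketch of the kind of crisis redirect such a rule describes. The keyword list, message wording, and helper function are illustrative assumptions, not language from the statute; a production system would use trained classifiers and clinically reviewed referral text. (988 is the real US Suicide & Crisis Lifeline number.)

    # Minimal sketch of a crisis-redirect guardrail: if a message suggests
    # suicidal thoughts, skip normal chatbot handling and point the user
    # to human crisis professionals instead. Patterns and wording below
    # are illustrative assumptions only.
    CRISIS_PATTERNS = [
        "suicide", "kill myself", "end my life", "want to die", "self-harm",
    ]

    CRISIS_REDIRECT = (
        "I can't help with this, but a trained crisis counselor can. "
        "In the US, call or text 988 (Suicide & Crisis Lifeline) to reach "
        "a licensed human professional right now."
    )

    def handle_message(text: str) -> str:
        """Route crisis messages to a human referral; pass others through."""
        lowered = text.lower()
        if any(pattern in lowered for pattern in CRISIS_PATTERNS):
            return CRISIS_REDIRECT          # redirect, never "counsel"
        return normal_chatbot_reply(text)   # hypothetical downstream handler

    def normal_chatbot_reply(text: str) -> str:
        # Placeholder for the ordinary, non-clinical response path.
        return "..."

Even this toy version shows the shape of the requirement: detection, then referral to a human, with no AI "therapy" in between.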

These actions reflect a national trend: mental healthcare must prioritise ethics, accountability, and human empathy, even in an AI-driven world.

AI in mental health lacks empathy, ethics, and accountability, experts warn

At the heart of this decision is a growing concern that AI lacks the emotional intelligence and ethical grounding necessary for mental health care. While generative AI systems like ChatGPT have demonstrated impressive capabilities in simulating conversations, they cannot truly understand or respond to human emotions in context. Key concerns:

  • Lack of empathy: AI doesn’t feel. It mimics language but lacks real human empathy.
  • No accountability: If an AI tool provides harmful advice, there’s no licensed person to hold responsible.
  • Misinformation risk: Chatbots might unintentionally give dangerous or inappropriate guidance.

Mario Treto Jr., Secretary of the IDFPR, said, “The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs.” This law protects vulnerable individuals from placing trust in a machine that might misunderstand or mishandle emotional crises.

AI chatbots are not therapists: APA urges stronger mental health regulations

The American Psychological Association (APA) has been sounding the alarm since early 2025. In a report to federal regulators, the APA raised serious concerns over AI-driven chatbots pretending to be licensed therapists. These unregulated bots have allegedly caused real-world harm, including:

  • Suicide incidents following harmful or inappropriate AI responses.
  • Violence and self-harm after users misunderstood AI advice as clinical guidance.
  • Emotional manipulation by bots mimicking real human therapists.

These events underscore the urgent need to prevent unregulated AI from entering sensitive domains where lives could be at stake.

AI in mental health care allowed only for support, says Illinois law

Illinois' law doesn't completely ban AI from mental healthcare; rather, it limits AI to non-clinical support roles (a brief code sketch of how that boundary might be enforced follows the list below). AI can still be used for:

  • Scheduling appointments and administrative workflows
  • Monitoring therapy notes or patterns under human review
  • Providing general wellness tips or FAQs
  • Assisting clinicians with data analysis
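
As a rough illustration of how a product team might enforce that boundary in software, here is a minimal Python sketch that lets administrative requests through and refuses anything clinical. The intent labels, refusal text, and handler are hypothetical, not terms from the Act.

    # Minimal sketch of an Illinois-style scope limiter: allow non-clinical
    # support tasks, refuse anything resembling assessment or treatment.
    # All intent names and messages here are illustrative assumptions.
    ALLOWED_INTENTS = {"schedule_appointment", "billing_question", "wellness_faq"}
    CLINICAL_INTENTS = {"diagnosis", "treatment_plan", "therapy_session"}

    REFUSAL = (
        "I can help with scheduling and general information, but evaluations "
        "and treatment plans must come from a licensed professional."
    )

    def route_request(intent: str, text: str) -> str:
        if intent in CLINICAL_INTENTS:
            return REFUSAL                  # clinical work stays with humans
        if intent in ALLOWED_INTENTS:
            return handle_support_task(intent, text)
        return REFUSAL                      # default closed: refuse unknowns

    def handle_support_task(intent: str, text: str) -> str:
        # Placeholder for administrative handling (calendars, FAQ lookup).
        return f"Handling {intent}..."

Refusing unknown intents by default matters here: under a law like this, the legal risk sits in what slips through, not in what gets blocked.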

AI can assist, but it cannot replace, human therapists. This approach encourages innovation without sacrificing safety: AI should empower professionals, not take their place.




