Let’s break down the gray areas of using Artificial Intelligence that may keep you up at night.
Like many therapists, you’re thinking about using Artificial Intelligence (AI) tools in your practice or are already using them. And you’re wondering what you need to tell your clients about how you use it. Do you need their written consent? Is a disclosure enough? For that matter, what’s the difference between the two and why is the difference important?
Here’s what makes these questions so confusing: AI is uncharted (excuse the pun) territory, and it keeps redefining and expanding its role and where and how it fits in our work.
First, the Legal Reality Check
One of my colleagues recently said to me: “I thought informed consent was for legal stuff, and disclosure was for everything else. Since there are no AI laws, I just need a disclosure, right?”
She’s not wrong, but she’s not entirely right either.
There are no federal laws specifically governing AI use in psychotherapy, which means there is no precedent we can rely on for legal guidance. There are also no definitive licensing board mandates on AI. We’re writing the playbook as we go. That leaves the door wide open for use, but it also leaves it wide open for risk.
Without a playbook, how do we decide if we need a disclosure or a consent?
Though there are no federal laws, there may be state laws, so start with those if your state has them. As of this writing, New York, California, North Carolina, Minnesota, and Utah have laws and/or regulations in place, and many more states are developing legislation. It’s imperative that you know and apply the requirements in the state(s) where you are licensed and practicing. This blog is still relevant for you, and for the vast majority of clinicians who are flying blind.
Now on to the distinction between disclosures and consents and why they matter when using AI.
DISCLOSURE
Disclosures themselves may not be legally mandated, but they are usually included as part of an Informed Consent when one is required. Just because there’s no specific federal law regarding the use of AI doesn’t mean there’s no risk in using it, or no ethical obligation to disclose. Disclosures are about transparency, best practice, and professional ethics. And some state laws or regulations do affect your decision to present an Informed Consent or a Disclosure.
Disclosure:
- Definition: A document that informs the client about how AI will be used in their care and may affect their privacy or experience.
- Purpose: Transparency. It explains what AI tools are being used, how data will be handled, and the potential risks, benefits, and limitations of the technology.
- Example: “We use AI-based tools to assist with writing progress notes, symptom tracking, and appointment reminders. These tools do not replace therapist judgment.”
A disclosure is required for ethical and legal transparency, especially under HIPAA and professional codes of conduct.
INFORMED CONSENT
Informed consent is rooted in ethics and law, especially for treatment, diagnosis, the handling of protected health information, and situations where legal precedent requires client agreement. It’s a legal and ethical requirement that gives clients the ability to make an informed, voluntary decision about their care. It ensures the client understands and agrees to the use of AI in their treatment. In other words, it’s the legal shield that protects both the client and the therapist, and it’s what courts and licensing boards expect to see.
Informed consent:
- Definition: A document or section of a document where the client agrees to the use of AI tools after being fully informed.
- Purpose: Ensures the client understands and agrees to the use of AI in their treatment.
- Example: “I understand and consent to the use of AI technologies in the management of my therapy records.”
Consent is necessary if AI use goes beyond administrative tasks (e.g., if AI is used for analysis or therapeutic interventions).
The Gray Area That Can Keep Us Up at Night
The absence of AI-specific Federal regulations doesn’t mean you can do anything you want. Consider this:
- HIPAA still applies to how you handle client information
- Professional licensing standards about competence and boundary management still matter
- Professional Liability for clinical decisions remains your responsibility
- Ethical codes about informed consent, transparency and client welfare haven’t changed
So while you might not need legal informed consent for AI use, you still need ethical informed consent – or disclosure.
Based on the distinction between informed consent and disclosure just reviewed, let’s look at some examples:
| Scenario | What’s Recommended | Why |
|---|---|---|
| Using AI to develop worksheets | Optional disclosure | No client data or care involved |
| Using AI to help draft progress notes using PHI and a recording of the session | Consent | Recording involves sensitive data, which always carries a risk of breach |
| Using AI to help draft progress notes with no PHI or recording | Disclosure | No PHI or recording involved, so no risk of a breach, but transparency creates trust |
| Using AI to help make clinical decisions | Consent | Impacts care and carries potential risk |
So, until laws are passed and litigated, what we have been calling an AI Informed Consent may be an Informed Consent or an AI Disclosure. But even a Disclosure, as a best practice, should have all the elements of a consent.
Making the Call: Three Questions to Ask Yourself
When deciding between Disclosure and Informed Consent for AI use, ask:
1. Does this AI use affect my clinical decision-making?
- Using AI to format your notes? Probably disclosure.
- Using AI to identify treatment patterns or suggest interventions? Consent.
2. Am I putting client information at risk in new ways?
- AI tool that doesn’t store data and works locally? Disclosure might suffice.
- Cloud-based AI that processes PHI and learns from it? You need consent.
3. Would a reasonable client want to know about this and have a say?
- If you’d feel uncomfortable if a client discovered your AI use without being told, you need more than a disclosure.
The Disclosure Approach: When It’s Enough
Simple disclosure works when AI use is:
- Administrative only (formatting, spell-check-level assistance)
- Not storing or learning from client data
- Not influencing clinical decisions
- Low risk to privacy and treatment
Sample Disclosure Language: “I use AI tools to help with some administrative aspects of documentation and practice management. All clinical decisions and treatment planning remain entirely my responsibility, and I maintain full control over your confidential information.”
The Informed Consent Approach: When You Need the Extra Protection
Move toward informed consent when:
- Client data is processed by AI systems
- Treatment decisions are influenced by AI input
- Significant privacy risks exist
- Clinical judgment is enhanced or supplemented by AI
What Your AI Informed Consent Should Include:
- Specific AI tools you use and how
- What client information is processed
- Data storage and security measures
- How AI influences (or doesn’t influence) treatment
- Client’s right to opt out
- Risks and benefits
- Your professional oversight and responsibility
THE REALITY CHECK
Here’s what I tell my consultation clients: err on the side of more transparency, not less.
Yes, there are no specific Federal laws about AI in therapy yet. But there are laws about informed consent, patient privacy, client welfare, and professional competence. When you use AI in ways that could affect treatment or privacy, you’re operating in territory where existing laws and ethics still apply.
My recommendation? When there’s no law but there is a potential ethical concern, and no PHI is used or disclosed, a clear disclosure should suffice. Getting client acceptance, even if not legally required, is still best practice. Start with a robust disclosure for AI use, and move to informed consent when the use of AI has legal precedent or client PHI is involved.
Don’t stay silent. Even if it’s not legally required, a discussed and signed disclosure builds trust, honors client autonomy, and models ethical transparency.
THE BOTTOM LINE
We’re in the Wild West of AI ethics in therapy. There may not be specific Federal laws yet, but that doesn’t mean anything goes.
The safest approach? Be more transparent than you think you need to be. Clients appreciate honesty about your practice methods, and over-disclosure is rarely a problem. Under-disclosure, however, can destroy trust and create liability.
Remember: every shortcut you take in transparency is a gamble with your professional reputation and your client’s trust. In the absence of clear regulations, err on the side of your client’s right to know and your professional integrity. The Documentation Wizard AI Disclosure is a good place to start. It includes all the elements that are required for a consent. You can adapt it to how you use AI.
The future of AI in therapy is being written by practitioners like you, making ethical decisions, one client at a time until the law catches up with culture. Make choices you can defend and document everything. By keeping your client’s best interests at the center of every decision, you also protect yourself and your practice.
Discuss your consents and disclosures with your clients. They can handle more transparency than you think, and you can handle the ethical complexity better than you may fear.
Ready to cover your bases with confidence?
Our AI Disclosure + AI Safety Checklist gives you the language, structure, and peace of mind you need to use AI ethically and transparently in your practice. Whether you’re just starting with AI or refining your current approach, this downloadable set helps you stay compliant, build trust, and protect your clients.
Beth Rontal, LICSW, a private practice therapist and the Documentation Wizard®, is a nationally recognized consultant on mental health documentation. Her Misery and Mastery® trainings and accompanying forms are developed to meet strict Medicare requirements. Beth’s Documentation Wizard training program helps clinicians turn their clinical skill and intuition into a systematic review of treatment that helps them pass audits, protect income, maintain professional standards of care, reduce documentation anxiety, and increase self-confidence. Beth’s forms have been approved by two attorneys, a bioethicist, and a billing expert, and have been used all over the world. She honed her teaching skills over 11 years of supervising at an agency, logging thousands of hours training both seasoned professionals and interns. Her newest initiative, Membership Circle, is designed to empower psychotherapists to master documentation with expert guidance, efficient strategies, and a supportive community.