Can You Use AI for Mental Health Progress Notes?
Although AI has been around for years, it recently burst onto the scene like fireworks and is making a lot of noise. Some psychotherapists are already raving about how much time it saves them on note writing and how good a job it does. Despite reading several articles about it and taking a course to learn more, I’ve been slow to warm up. Part of it is that I have so many other documentation projects demanding my attention, and part of it is that I’m slow to embrace anything new. But mostly, I’m leery about using this technology for private and confidential information written about real live people. Though it’s promising, I’m cautious.
AI is promising because it can cut down the time it takes to write the mounds of documentation that can bury us, although there’s an upfront investment of time in setting up the chatbot. You have to “teach” it what to write. Then there’s the time it takes to review each chatbot-written note for content and correct the errors (there will be errors) before you can sign and lock the note. But once it’s set up, it can help you blow through notes.
What the Experts Say About Using AI for Mental Health Documentation
To help me better understand AI, I spoke with an attorney who specializes in mental health cases and a very high-level computer programmer about artificial intelligence as it pertains to mental health documentation. They are both wary of AI for the following reasons:
Just like the internet itself, AI can be used for good and for bad, and regulations are not yet in place to help prevent the bad. Though we’re told that the information we enter into the chatbot is destroyed, the computer programmer is concerned with the answers to these three security questions:
1) Is the information collected truly destroyed?
2) If it is, how much time goes by before this happens?
3) Could a hacker hack the program in the time it takes to create the note and destroy the original material?
Given the level of security necessary on my Documentation Wizard website to avoid hacking, I am aware of the lengths some creeps will go to steal personal information for their own nefarious use. The computer programmer I spoke with does not have faith that the information collected will be destroyed in a timely way or appropriately, if at all. He recommends waiting to use AI for confidential information until the industry is regulated and safeguards are in place.
The attorney I spoke with has an additional concern about authenticity. She told me about a lawyer in NY who submitted a brief to an appellate court that was generated entirely by ChatGPT. Apparently, the arguments the attorney presented were compelling, but the cases, citations, and quotes cited turned out to be “bogus.” The chatbot made them up, and the attorney did not take the time to verify the citations until opposing counsel reported being unable to find them.
This is a great place for a well-deserved lawyer joke but seriously, integrity in documentation is crucial. The attorney who used AI to write his brief was not familiar with its limitations when he jumped on the AI bandwagon. What would have been an obscure lawsuit has become a lightning rod for the tech and legal professions. The attorney and his law firm are deeply embarrassed by their mistake and pledged to never do it again. Their lawyers believe they should not be sanctioned because, “sanctions would serve no useful purpose” since they have learned their lesson. Perhaps we should be grateful to them for exposing such a serious issue.
Though psychotherapists tend to be an exceptionally honest and ethical group, the constant pressure of note writing, and the need for many therapists to carry a large caseload to pay the bills, has been known to have a negative influence on integrity. I’ve supervised over 50 clinicians and have seen just about everything, including plenty of faked treatment plans and progress notes. Artificial Intelligence makes it even easier to create “bogus” documentation. It will be essential that psychotherapists understand the technology well enough to make sure it meets ethical and privacy standards. Given the dangers this national case exposes, psychotherapists who use AI without regard for accuracy and confidentiality will likely endure serious consequences.
Risks of Using AI
Then there is the concern about losing the human touch that is essential not only to providing good psychotherapy but to maintaining a civilized society. According to an NPR article released on May 30, 2023, “leading experts warn of a risk of extinction from AI.” The headline is clickbait and a little over-dramatic, but the article makes good points about how we can protect ourselves from going overboard with AI. Leading experts in the field of Artificial Intelligence, including Geoffrey Hinton, the “godfather” of AI who recently left Google, signed a Statement on AI Risk. It’s worth the read.
How AI Can Benefit Mental Health Therapists
Despite the very real warnings, AI is not going away. Used properly, it can have an exceptionally positive impact on our lives and specifically on our ability to complete our burden of daily note writing. Plus, lots of chatbot programs are already popping up, so it’s a good idea to get yourself educated. With this in mind, I recommend taking Kym Tolson’s “how-to” course, Revolutionize Your Practice with AI. Also, I am in the process of developing an informed consent form for therapists who choose to incorporate this technology into their practice. So, stay tuned!
You still have to provide the chatbot with the right information. So, if you’d like to know more about what and how much to write in your documentation, check out my webinar, Misery or Mastery: Documenting Medical Necessity for Psychotherapists. Your notes can only be as good as what you tell the chatbot to write, and what you correct after reviewing what it has written.
Joan Fefferman says
this is one of the best AI articles I have read. TY
Beth Rontal says
Thank you! I’m glad it helps you put AI for psychotherapists into perspective.
Christopher Holly says
This is a great summary of the use of AI in documentation. Very interesting!
Beth Rontal says
Christopher, so glad it’s helpful. Nice to see your comment here!
Dea says
Thank you. This up-to-date information summarizing AI in documentation is very much appreciated.
Beth Rontal says
You’re very welcome!
Kym Tolson says
Thank you for sharing my course: Revolutionize Your Private Practice with AI! Thank you for writing this post and showing the pros and cons of AI in private practice.
Beth Rontal says
Kym, my pleasure. I like your course! You were the first to open my eyes to the value of AI despite my reservations.