AI expert Lance Eliot argues that while OpenAI’s ChatGPT Study Mode demonstrates the power of custom instructions for educational purposes, attempting to create similar AI-powered therapy tools through custom instructions alone is fundamentally flawed. Despite interest from mental health professionals in replicating Study Mode’s success for therapeutic applications, Eliot contends that mental health requires purpose-built AI systems rather than retrofitted generic models.
How ChatGPT Study Mode works: OpenAI’s recently launched Study Mode uses custom instructions crafted by educational specialists to guide students through problems step-by-step rather than providing direct answers.
- The system encourages active participation, manages cognitive load, and provides personalized feedback based on the student’s skill level.
- “Study mode is designed to be engaging and interactive, and to help students learn something — not just finish something,” OpenAI explained in their July 29 announcement.
- The capability appears to rely primarily on detailed custom instructions rather than core AI modifications (a minimal sketch of the mechanism follows this list).
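To make the mechanism concrete, here is a minimal sketch of how detailed custom instructions are typically layered onto a general-purpose model, assuming the OpenAI Python SDK and its chat completions endpoint. OpenAI has not published Study Mode's actual instructions, so the tutoring directives and model name below are purely illustrative.

```python
# Minimal sketch: steering a general-purpose model with custom instructions.
# Assumes the OpenAI Python SDK (pip install openai) with OPENAI_API_KEY set;
# the tutoring directives are illustrative, not OpenAI's unpublished Study Mode prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TUTOR_INSTRUCTIONS = """\
You are a patient tutor. Guide the student through the problem step by step.
Ask one question at a time and wait for the student's answer.
Do not reveal the final answer until the student has attempted each step.
Adjust your explanations to the student's demonstrated skill level.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name, for illustration only
    messages=[
        # The custom instructions ride along as the system message;
        # the underlying model itself is unchanged.
        {"role": "system", "content": TUTOR_INSTRUCTIONS},
        {"role": "user", "content": "Help me solve 3x + 7 = 22."},
    ],
)
print(response.choices[0].message.content)
```

The key point is that only the prompt changes; the underlying model remains the same general-purpose system.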
The appeal for mental health applications: Mental health professionals have expressed interest in creating similar “Therapy Mode” capabilities using custom instructions to guide AI in therapeutic contexts.
- The approach would involve assembling psychologists and mental health specialists to craft detailed instructions for AI-driven therapy.
- Such systems could potentially provide personalized mental health recommendations and perform diagnostic functions.
- Custom instructions could theoretically transform generic AI into more specialized therapeutic tools (a hypothetical fragment is sketched after this list).
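For illustration only, the fragment below sketches what clinician-drafted custom instructions for such a "Therapy Mode" might look like. The wording is hypothetical, not drawn from any real practitioner or product, and it would be supplied as a system message in the same way as the tutoring sketch above.

```python
# Hypothetical fragment only: what clinician-drafted "Therapy Mode" custom
# instructions might look like. Not taken from any real therapist or product;
# shown solely to make the proposal concrete.
THERAPY_INSTRUCTIONS = """\
You are a supportive mental health assistant, not a licensed therapist.
Listen reflectively and ask open-ended questions before offering suggestions.
Never state a diagnosis; encourage the user to consult a licensed clinician.
If the user mentions self-harm, provide crisis-line information immediately.
"""
```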
Why custom instructions fall short for therapy: Eliot identifies several critical limitations that make this approach unsuitable for mental health applications.
- Mental health involves significant risks when inappropriate therapy is employed, making incomplete or misinterpretable instructions potentially dangerous.
- Even well-intentioned custom instructions from licensed therapists can contain “trouble brewing within them” due to the complexity of therapeutic practice.
- Some existing AI therapy applets are “utterly shallow” or outright scams that attempt to harvest personal information.
The risks of custom instructions: Beyond mental health, custom instructions carry inherent downsides that users often overlook.
- Instructions can be misinterpreted by AI systems in ways that differ from the creator’s intent.
- Users may inadvertently include contradictory or harmful directives without realizing their impact.
- “You can just as easily boost the AI as you can undercut the AI,” Eliot warns about assuming custom instructions always improve performance (a simple before-and-after check is sketched below).
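One hedged way to observe the effect Eliot describes is to run the identical prompt with and without a set of custom instructions and compare the replies before trusting the modified behavior. The sketch below assumes the OpenAI Python SDK; the instruction text and model name are placeholders invented for this example.

```python
# Illustrative check: send the identical prompt with and without custom
# instructions and compare the replies before assuming the instructions help.
# Assumes the OpenAI Python SDK; the instruction text and model name are
# placeholders invented for this sketch.
from openai import OpenAI

client = OpenAI()

CUSTOM_INSTRUCTIONS = "Respond with reflective listening before offering any advice."

def ask(prompt: str, instructions: str | None = None) -> str:
    """Return the model's reply, optionally steered by custom instructions."""
    messages = []
    if instructions:
        messages.append({"role": "system", "content": instructions})
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    return reply.choices[0].message.content

question = "I've been feeling overwhelmed at work lately."
print("--- without custom instructions ---\n", ask(question))
print("--- with custom instructions ---\n", ask(question, CUSTOM_INSTRUCTIONS))
```

A side-by-side comparison like this will not catch every misinterpretation, but it makes it harder to assume the instructions only improve the output.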
The better path forward: Rather than retrofitting generic AI with therapy-focused instructions, Eliot advocates for building specialized mental health AI systems from the ground up.
- Purpose-built therapeutic AI systems designed specifically for mental health contexts offer more promise than modified general-purpose models.
- Eliot likens forcing generic AI into specialized therapeutic roles to “trying to put fifty pounds into a five-pound bag.”
- Research into dedicated mental health LLMs represents a more suitable long-term solution.
Bottom line: While custom instructions can effectively enhance AI performance in domains like education, mental health requires more robust, purpose-built solutions rather than quick fixes that may contain “unsavory gotchas and injurious hiccups.”
Source: “Psychologists And Mental Health Experts Spurred To Use Custom Instructions And Make AI Into A Therapist Adjunct,” by Lance Eliot.