Hollywood script readers conducted an experiment to test whether AI can match human analysis of screenplays, as artificial intelligence tools increasingly threaten their traditional gatekeeping role in the entertainment industry. The study, led by Jason Hallock, a Paramount story analyst, and the Editors Guild, revealed that while AI excels at generating loglines and summaries, it struggles with nuanced script analysis and tends to offer overly positive feedback rather than honest criticism.
What you should know: AI script analysis tools are already being adopted across Hollywood, from major agencies to independent producers seeking to manage overwhelming submission volumes.
- WME uses ScriptSense to help agents and assistants sort through submissions and track client work.
- Producer Morris Chapdelaine found AI doubled his reading pace and provided more honest feedback than human readers: “It’s such a time saver, and it’s getting better and better.”
- Platforms like Greenlight Coverage and ScreenplayIQ offer writers AI-powered feedback on their drafts, though sometimes with inflated praise.
The experiment: Hallock gathered several unproduced scripts and compared coverage from human analysts against reports from six AI platforms.
- One test script was described as “‘Heart of Darkness’ in outer space,” while another involved a killer insect for the Syfy channel.
- AI-generated loglines were “indistinguishable from the human ones — maybe even a little better,” according to Hallock.
- However, AI synopses were of "11th-grade-essay quality" and leaned on repetitive constructions like "Our story begins with…"
Where AI falls short: The more complex the screenplay, the more likely artificial intelligence was to make critical errors and miss essential story elements.
- AI programs frequently misattributed character actions and hallucinated plot points that didn’t exist.
- Human analysts “won hands down” when providing actual analysis rather than just summarization.
- The AI programs were “an almost total fail across the board” for generating meaningful notes about scripts.
The bias problem: AI tools consistently offered overly positive assessments instead of honest criticism that writers and producers need.
- One romantic comedy received AI praise as “a compelling, well-crafted coming-of-age story,” while the human reader called it a “familiar template” that “lacks bite.”
- “They would definitely tell you everything that was positive and working well, but when you had to get down to problems, they couldn’t necessarily identify them,” said analyst Alegre Rodriguez.
- A 20-year-old script that never sold in Hollywood received an AI “recommend” rating.
What the platforms say: AI tool creators acknowledge current limitations while defending their technology’s potential.
- Jack Zhang of Greenlight notes that only 5% of submitted scripts receive “recommend” ratings: “I wouldn’t say there’s huge inflation.”
- ScriptSense’s Kartik Hosanagar, a Wharton business professor and internet entrepreneur, avoids recommendations partly because “AI can be too sycophantic.”
- “Can AI get to a point where it can be truly critical? I think it can get there. We’re not there yet,” Hosanagar admits.
Industry concerns: Story analysts worry about cost-cutting executives potentially replacing human judgment with cheaper AI alternatives.
- “The most important thing I’m looking for is ‘Do I care?’ An LLM can’t care,” says Holly Sklar, a Warner Bros. analyst.
- The study concluded that “studios may be tempted to forgo quality and accuracy in favor of cheap and fast.”
- Sklar fears younger executives more comfortable with AI summaries may view human analysts as “superfluous.”
The bottom line: While AI can handle routine summarization tasks, human insight remains essential for identifying truly original and compelling scripts that could become successful films.