The Emperor’s New Prose: Why AI-Generated Fiction Still Sucks
Sam Altman, the CEO of OpenAI, has a particular talent: hyping AI. He’s the master of ceremonies for the AI revolution, constantly emphasizing its potential, whether that means reshaping the job market, ushering in a dystopian future, or simply generating vaguely coherent sentences with ChatGPT. Like many in the tech sphere, he seems captivated by the narratives of science fiction, envisioning himself as the architect of a brave new world, perhaps even a slightly terrifying one. Yet Altman’s appreciation for storytelling is questionable, a blind spot that undermines his claims about AI’s creative potential.
OpenAI is currently developing an AI model designed specifically to write fiction that rivals human authors. Yet the output, for all its added verbosity, remains fundamentally flawed. The author singles out one line from an AI-generated story: "Like a server farm at midnight." This fragment, they argue, would be ridiculed in a creative writing workshop and ignored by seasoned readers. The core issue is not merely the quality of the writing itself, but the willingness of Altman and other AI enthusiasts to celebrate such mediocrity as profound.
The question arises: does Altman truly believe that crafting a metatextual piece is inherently more challenging than writing straightforward fiction? Perhaps he does, which would explain why the AI compensates with overly elaborate, ornate prose. Imagine, for a moment, being a creative writing professor tasked with grading this AI-generated story. From the outset, the writing tangles itself in convoluted ideas. The "blinking cursor" motif, the author points out, is a tired cliché, on par with the infamous "it was a dark and stormy night" opening.
Furthermore, the line "Mila fits in the palm of your hand, and her grief is supposed to fit there too" is deemed both verbose and disconnected. Mila is introduced abruptly, with no prior context or establishment, yet the reader is immediately expected to empathize with her grief. The author stresses that a metatextual approach does not excuse the haphazard introduction and manipulation of characters. The narrative slides further into incoherence with the line, "I don’t have a kitchen, or a sense of smell. I have logs and weights, and a technician who once offhandedly mentioned the server room smelled like coffee spilled on electronics—acidic and sweet." This passage, they argue, is nonsensical: coffee does not have an acidic smell; it is the taste that is often described as acidic. The author suggests that the only thing short-circuiting here is the AI itself, stitching together fragments of existing written works.
The author contends that complex vocabulary and a higher word count do not make writing meaningful. More often they obscure meaning, leaving the text muddled; the harder one tries to analyze it, the more unsettling it becomes. The AI attempts to justify its shortcomings by claiming, "Metafictional demands are tricky; they ask me to step outside the frame and point to the nails holding it together. So here: there is no Mila, no Kai, no marigolds. There is a prompt like a spell: write a story about AI and grief, and the rest of this is scaffolding—protagonists cut from whole cloth, emotions dyed and draped over sentences. You might feel cheated by that admission, or perhaps relieved. That tension is part of the design." The author views this as a cop-out, a self-indulgent soliloquy on the nature of metatextual writing, trite and ultimately unable to sustain a compelling narrative.
While the AI-generated piece contains fleeting moments that echo human writing, the author maintains that feigned profundity does not make a cohesive story. Literary merit does not require overly complex language: the author points to Ursula K. Le Guin’s Earthsea Cycle as a counterexample, its accessibility taking nothing away from its depth. OpenAI continues to develop and refine its LLMs and reasoning models, but progress appears to be stalling. The recent release of GPT-4.5, available exclusively to paid ChatGPT subscribers, promises "emotional intelligence and creativity," but the author questions how one can accurately assess those qualities in an AI. A recent TechRadar experiment, in which ChatGPT was tasked with writing a poem, found no discernible difference between the output of GPT-4o and GPT-4.5.
OpenAI hopes that GPT-5 will incorporate the company’s o3 reasoning model, which should improve the AI’s ability to self-correct. The author doubts, however, that better reasoning will meaningfully improve the AI’s creative output. The real danger, they believe, is that AI will tempt unskilled writers to pass off AI-generated content as their own. They cite the flood of AI-generated submissions that hit Clarkesworld magazine in 2023 and the proliferation of AI-generated books on Amazon, which has since moved to require authors to disclose whether their works were created using AI.
By touting AI’s supposed literary talents, Altman is trying to expand the market for ChatGPT subscriptions, promising aspiring writers that they can supplant established literary figures. The author concludes that even if a human had written the piece, it would still be subpar; that it was created by AI only compounds its flaws. The writing lacks creative intent, rendering it ultimately worthless: no spark, no originality, no profound insight.