Seattle Worldcon 2025 Faces Fallout After AI Vetting Controversy Leads to Resignations
Another year, another wave of turbulence washing over the science fiction and fantasy community surrounding the Hugo Awards. This time, the storm centers on the Seattle 2025 Worldcon, the upcoming annual gathering that hosts the prestigious awards ceremony. Three prominent figures have resigned from their positions in protest of the convention’s controversial use of artificial intelligence (AI) in its program participant vetting process: Hugo administrator Nicholas Whyte, deputy Hugo administrator Esther MacCallum-Stewart, and World Science Fiction Society (WSFS) division head Cassidy.
While the Hugo Awards themselves appear to be insulated from the direct application of AI, the controversy has ignited passionate debate and division within the community, highlighting the complex relationship between technology, artistic integrity, and the volunteer-driven nature of Worldcon.
The controversy was sparked by the Seattle Worldcon 2025 organizing committee’s decision to employ a Large Language Model (LLM), specifically ChatGPT, to assist in vetting potential panelists for the convention’s program. The convention chair, Kathy Bond, addressed the community’s concerns in an April 30th statement posted on the Seattle Worldcon 2025 website, attempting to clarify the scope and purpose of the AI’s involvement.
Bond explained that the LLM was used solely to streamline the online search process, ostensibly saving hundreds of volunteer hours. According to the statement, the LLM was provided only with the names of proposed panelists and was tasked with identifying any publicly available information that might raise concerns. Crucially, Bond emphasized that the AI’s output was not accepted uncritically, but rather carefully analyzed by multiple members of the team for accuracy. She also asserted that the LLM was not used in any other aspect of the program or convention.
"We have received questions regarding Seattle’s use of AI tools in our vetting process for program participants," Bond wrote. "In the interest of transparency, we will explain the process of how we are using a Large Language Model (LLM). We understand that members of our community have very reasonable concerns and strong opinions about using LLMs. Please be assured that no data other than a proposed panelist’s name has been put into the LLM script that was used. Let’s repeat that point: no data other than a proposed panelist’s name has been put into the LLM script. The sole purpose of using the LLM was to streamline the online search process used for program participant vetting, and rather than being accepted uncritically, the outputs were carefully analyzed by multiple members of our team for accuracy."
However, this attempt at transparency backfired, triggering a fierce backlash within the science fiction and fantasy community. The use of AI, even in a limited capacity, was seen by many as a violation of ethical principles and a potential threat to the integrity of the convention. Critics argued that relying on AI for vetting could lead to biased or inaccurate results, stifle diverse voices, and ultimately undermine the human element that is so central to the Worldcon experience.
The outcry on social media was swift and intense. Many expressed concerns about the potential for algorithmic bias, the opacity of the AI’s decision-making, and the implications of volunteer-run organizations coming to rely on automated tools. The debate quickly widened into a broader discussion about the role of AI in creative fields and the ethical responsibilities of organizations that adopt these technologies.
The controversy reached a boiling point when Yoon Ha Lee, a Hugo nominee for his young adult novel Moonstorm, announced his decision to withdraw the title from consideration for the Lodestar Award, citing the Worldcon’s use of AI in its vetting process. This high-profile withdrawal served as a powerful symbol of the community’s discontent and further amplified the pressure on the Seattle Worldcon 2025 organizers.
On May 2nd, Kathy Bond issued a second statement, offering a more contrite apology and acknowledging the shortcomings of her initial response. "Additionally, I regret releasing a statement that did not address the concerns of our community," she shared. "My initial statement on the use of AI tools in program vetting was incomplete, flawed, and missed the most crucial points. I acknowledge my mistake and am truly sorry for the harm it caused."
Despite this apology, the damage had already been done. The controversy had exposed deep divisions within the community and raised fundamental questions about the future of Worldcon.
The resignations of Whyte, MacCallum-Stewart, and Cassidy further underscored the gravity of the situation. In their joint statement, they reaffirmed that no LLMs or generative AI had been used in the Hugo Awards process at any stage. This statement, while intended to reassure Hugo voters, also served as a clear rebuke of the Seattle Worldcon 2025 organizers’ decision to employ AI in their vetting process.
The resignations of such prominent figures within the Worldcon community are a significant blow to the Seattle 2025 event. Whyte and MacCallum-Stewart, as Hugo administrators, played a critical role in ensuring the fairness and integrity of the awards. Cassidy, as the WSFS division head, was responsible for overseeing the overall organization and governance of Worldcon. Their departure raises questions about the future direction of the convention and its ability to navigate the challenges posed by emerging technologies.
The Seattle Worldcon 2025 organizers now face the daunting task of rebuilding trust within the community and addressing the underlying concerns that have been raised by the AI vetting controversy. This will likely require a thorough re-evaluation of their policies and procedures, as well as a commitment to greater transparency and community engagement. The future of Worldcon, and its relationship with the ever-evolving landscape of artificial intelligence, hangs in the balance.