I write science fiction and I’m a fan of science fiction. I like thinking about non-human people and diversity of cognition within and across species (real and fictional). I like playing around with how concepts of personhood are defined, expanded, and abused. (That might be the social history background talking.) I enjoy reading about the exploits of Murderbot, Breq, Wintermute, and Mistake Not My Current State Of Joshing Gentle Peevishness For The Awesome And Terrible Majesty Of The Towering Seas Of Ire That Are Themselves The Mere Milquetoast Shallows Fringing My Vast Oceans Of Wrath.
This post, however, is about 2023 in our real, actual timeline. There are real vehicles called Of Course I Still Love You and Just Read the Instructions. But naming them after Culture Minds does not make them super-powerful machine intelligences any more than it makes them spacefaring General Contact Vehicles. The Singularity, or less Rapture-like framings of the creation of “true” artificial intelligence, perpetually remain fifteen years in the future.
If we get there, it’s going to be interesting. In our real, actual timeline, there’s a concerted political effort to deny the personhood of trans people, disabled people, Black people, women…the list goes on. Those unwilling to wrap their brains around the humanity of someone with darker skin or a gender presentation undocumented by Leave It to Beaver may also be unwilling to see a manufactured, non-human intelligence as a person. There will be a cultural and legal showdown between them and the people who already anthropomorphize their Roombas, and like all cultural shifts it will be drawn out and messy.
Exciting. Call me in fifteen years and we’ll see if it’s time to fight for AI civil rights or time to reset the fifteen year clock. But for right now, systems that are referred to as “AI” are not the sort of autonomous entity that term calls to mind.1
Systems that are billed (explicitly or implicitly) as automated are often revealed to be Mechanical Turks. Teams of actual human beings review material and moderate content. (This is difficult work, generally exploitative, with more than a whiff of colonialism when employers look to the Global South for workers.) Amazon crowdsources Alexa answers and questions from volunteers, and the MTurk service is exactly what it says on the tin: a source of cheap human labor to perform tasks that cannot be automated. Chatbot therapy is an ethically questionable attempt to fill the need for mental health care (or cash in on the mental health crises of late-stage capitalism in the Anthropocene), but humans remain on call to intervene. And, of course, systems created by humans import the biases of humans. Thus we have sexualized images of Asian women and chatbots that are racist, confidently wrong (“mansplaining as a service”), threatening, or obsessive. The corporate sidelining of expert voices inspires little confidence that these tools will see fundamental improvement.
Over on Mastodon, I said AI artwork is derivative work. I mean that in the legal sense.2 The image of a Mechanical Turk above is a “slavish copy” of a 1789 work now in the public domain, and therefore not eligible for copyright protection. AI artwork depends on training sets of images and while it would certainly be possible to exclusively use public domain works like the one above, the data scraping is much more extensive. Some companies seek licenses for stock images; others do not. Visual works that are very firmly protected by copyright are used to generate new images which are uncopyrightable derivative works.3
You could make the argument that all art is derivative, that there are a limited number of plots, and that commercial fiction routinely files off the serial numbers of creators’ inspiration. That’s a perfectly dandy philosophical statement about the nature of human creativity, cultural interaction, and the conversations that occur between pieces of art (visual, literary, etc.). But there’s a vague legal/moral consensus that it’s not right to totally rip off someone else’s work. What is celebrated on AO3 would be pilloried in the commercial realm (by fans and rightsholders alike). And plagiarism is right out. (Plagiarism is an accurate description of at least some text presented as original work, and the reproduction of Getty’s watermark gets into trademark issues as well as copyright.) Even if the AI could be recognized as the holder of copyright, not all of the products would clear the bar between “derivative work” and “plagiarism.”
What is the problem when chatbot stories flood magazine submission portals? At the moment, basic editorial gatekeeping and quality control are enough to keep these works from the public eye; the stories are obviously bad or plagiarized. But it is a resource drain. With the sudden uptick in spurious submissions, Clarkesworld was essentially the victim of a DDoS attack. (Remember when automation seemed to be winning the war against spam?) Many of the restrictions that might deter get-rich-quick chatbot spammers—submission fees, regional blocks, story solicitation, postal submissions—will also deter legitimate submissions, particularly from marginalized writers. Despite the perpetually depressing financial picture (neither publishers, editors, nor writers are getting rich from short fiction), the short speculative fiction space is thriving, with varied magazines and contributions from an increasingly diverse group of writers. The online presence of magazines plays a big role in this increasing accessibility of reading material and markets. Denial-of-service attacks are an existential threat to online operations.
What is the problem when art departments use AI stock art of unknown provenance or script writers create a graphic novel without a human artist? These are not simply cases of using the labor of doubtless-uncompensated artists whose work was scraped; it also means the loss of a commission that might have gone to an artist. Art can be licensed—that’s how publishing works, somebody creates a thing and somebody pays them for the right to consume it or package it for consumption. To pretend that such obligations do not exist because “the machine did it,” as though the machine did not repurpose the work of human artists, is profoundly disingenuous.
Cue Luddite comparisons. But one of the nice side effects of the recent AI Discourse is that it has come to include decent contextualization of the Luddites as a labor movement, not an anti-technology movement. And in fact, it’s not merely the tech bros who see positive possibilities of the technology (see, for example, the sunnier sections of Jason Sanford’s reporting and thoughts on how AI might impact authors and artists). But until the very real ethical issues under the hood are solved, the output can’t be used ethically for commercial purposes. And until their functionality improves, the creative use of AI is limited.
Right now, AI is not thinking or acting for itself. It’s being used as a tool. A lot of the people who are using—and developing—it are doing so in a way that actively harms people, entrenches systemic injustices, and ignores ethical arguments (to say nothing of clearly written magazine submission guidelines). This issue is much bigger than the ecosystem of creatives on the web, but ours is a particularly vulnerable ecosystem. Hopefully, we will develop a model for ways to limit, integrate, or transparently declare AI use in the creative process that will allow artistic endeavors to flourish (and facilitate compensation for artistic labor). The alternative is to become victims in a cautionary tale of bad actors running amok with technology.
1. I am not going to venture into the weeds discussing specifics of machine learning, large language models, expert systems, and so forth. (Nor, for that matter, am I going to engage with the highly messed up ways “intelligence” has been defined and measured.) For my purposes, this discussion includes some fuzzy categories: “real research scientists and engineers are doing to make machines that do cool stuff,” “things like ChatGPT and Midjourney that are currently in use,” “what people think of when they think of AI (which is a big and varied category, especially when you factor in hardcore SF fans, but let’s just call it Data from Star Trek),” and “whatever AI ends up looking like when/if it arrives.” (Back)
2. I am not a lawyer but I was an archivist, so I have more than passing knowledge of U.S. copyright laws. For a good overview of copyright issues and a digestible chart for determining if a work is in the public domain, I recommend perusing the resources of the Cornell University Library. (Back)
3. There is now legal precedent that non-humans—whether macaques or machines—cannot create copyrightable works. Whether this puts the brakes on the rapid generation of AI content for profit remains to be seen. (Back)