‘Artificial Intelligence’ Sounds Straight Out of Sci-Fi. But What Can It Really Do?

The robots are coming for your keyboards and your wallets. At least, that’s how the news may make it sound. In recent months, advancements in artificial intelligence (AI) programs have left every industry questioning what the future holds. The discussion is packed with mixed opinions, and it has been particularly divisive in creative fields. Will AI run authors and artists out of business? Or will authors and creatives adapt, move on, pivot? Whatever your stance, we’d like to give you the quick and dirty on AI so you can broach the subject with an open mind.

Or not—the robots would love that. 

Encyclopedia Britannica defines artificial intelligence as “the ability of a computer … to do tasks that are usually done by humans because they require human intelligence and discernment.” On this definition alone, you might fear that AI will soon write books at the click of a button. We’ll get to the truth behind that idea later on, but first we need some context on AI’s origins and where the technology stands now.

The Past

One of the earliest concepts underpinning AI came from British logician Alan Mathison Turing, who around 1935 theorized a machine capable of taking in data, storing it, analyzing it, and then outputting further data. Today, cell phones use AI-powered facial recognition to unlock, fingerprint readers verify your identity, and software checks your manuscript for common spelling errors. A 2017 study by Pegasystems Inc. found that 84 percent of people had used AI technology, even though only 34 percent of respondents realized it, a gap that has likely narrowed as the technology has become more integrated. AI has been around for a long time, and if you had the tech to search for this issue and purchase it online or read it on your device, you’ve probably used it.

Where AI steals the limelight is in recent developments: Tesla’s self-driving cars, Sudowrite’s AI-assisted writing software, an influx of text-to-speech services capable of narrating audiobooks with humanlike voices, and the controversial chatbot ChatGPT.

ChatGPT, launched in late 2022, replied the following when prompted to describe itself: “I am a large language model created by OpenAI. I am designed to understand and generate natural language responses to a wide variety of questions and prompts. My purpose is to assist and provide helpful responses to users who interact with me.”

These generative programs are what have some in the publishing industry especially concerned, both because of the content they produce and because of how the companies behind them collect the data those programs rely on.

The Present

AI, by design, requires an input of information. As such, how AI sources information has become a talking point in the ethics of AI-generated content. 

Sudowrite states in its FAQ that “the AI [used by the program] works by guessing one word at a time, based on general concepts it has learned from billions of samples of text.” 

When prompted, ChatGPT says something similar: “My knowledge is not based on personal experience, but rather on patterns and relationships that I have learned from the vast amounts of text data I was trained on.”

Stable Diffusion, an image-generating program and AI model, states on its website that the program was trained on a dataset collected from a general crawl of the internet.

The commonality among these three services is the data used to “train” the AI. AI cannot, by Turing’s definition, create without input. For many of today’s more popular AI programs, including ChatGPT, those inputs are expansive datasets “scraped” from publicly available information on the internet, often a few years old. The programs typically do not collect personal data or have access to the internet to search for information in real time. On the other end of the system, prompts have to be crafted and initiated by a user, meaning misuse or abuse of the programs’ generative abilities, such as using them to reproduce copyrighted material, typically results from the user rather than the technology on its own.
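If you’re curious what “guessing one word at a time” looks like in practice, here is a minimal, purely illustrative Python sketch. It learns which words tend to follow which from a tiny sample text, then generates new text by repeatedly picking a likely next word. The sample text, the word-pair table, and the function names are our own inventions; commercial systems like ChatGPT rely on enormous neural networks trained on billions of samples rather than a simple lookup table, but the generate-one-word-at-a-time loop is the shared idea.

```python
import random
from collections import defaultdict

# A toy "training set": in real systems this would be billions of
# samples of text scraped from the internet.
training_text = (
    "the cat sat on the mat the cat saw the dog "
    "the dog sat on the rug the cat ran"
)

# Learn which words were observed to follow each word.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly guessing the next word."""
    output = [start]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break  # no observed continuation; stop early
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug the cat saw"
```

Notice that the program can only recombine patterns it has seen; nothing in its output exists independently of its training text, which is exactly why questions about data sourcing matter.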

Since the information used to create images and texts is drawn from existing sources rather than conjured from nothing, the ethics of such creations are at the forefront of the conversation. Whose information are these AI platforms using? Do individuals need to grant express permission for AI “scrapers” to access their data? And once the AI does generate something, who owns the output?

These questions are part of a larger ethical and legal debate that will probably, like AI development itself, remain fluid. The latest model behind ChatGPT, launched March 14, can currently output approximately six thousand words at a time, and a second version of the model, available to select developers, can generate close to twenty-five thousand. Sudowrite can do the same but markets itself as a creative assistant rather than a chatbot. So who owns the rights to the content AI produces?

According to US courts, AI-generated content cannot be copyrighted without an element of human alteration, the same as with public domain images. However, the specific rights and licensing terms a user has to their generated content may vary across programs. The AI art generator Midjourney, for example, grants users different rights depending on whether they have a free or paid account with the program.

The Future?

AI has many capabilities, and ethically, we’ve extended an olive branch to certain functions we’ve decided we’re comfortable with. ProWritingAid and Grammarly both use AI to help edit your manuscript. Microsoft Word uses AI to do the same. Amazon uses its own AI to predict your search results. We’ve accepted these functions as OK; the main points of contention today surround generative AI programs.

This is where creatives must decide for themselves how much of their work they will “outsource” to AI and how much is a labor of love, par for the course, or some other cliché AI should be screaming at us to fix. 

Some authors are already using generative AI programs in their businesses. In early February, Joanna Penn described on her blog how she used ProWritingAid, Sudowrite, and ChatGPT in drafting a recent short story. Before that, in August 2022, Derek Murphy of Creativindie outlined how authors and designers can use Midjourney to create a book cover image.

At the other end of the spectrum, there have been instances in which AI was used to generate short stories in full.

Earlier this year, Clarkesworld, a science fiction magazine, had to pause submissions because of the number of AI-generated short stories it received, with the editor noting that the stories could be replicated to some degree with the right prompts. The editorial team could detect the AI-generated entries, observing similarities among them and calling them “inelegant.” But though detectable, the stories arrived in such volume that the submission pause became necessary.

It should be noted, however, that AI doesn’t have to craft a final product. Just as ProWritingAid and Grammarly serve as tools in the creative process, many authors, like Murphy and Penn, have used generative AI programs to assist at various stages of publication, providing direction, generating ideas, or adding speed to their process.

The conversation surrounding AI-generated content remains ongoing, with some embracing the programs as tools to make their work easier and others using them to shortcut stages of the writing process, such as editing, that used to absorb large quantities of time. Alternatively, some view AI as a threat to creatives or consider the programs an ethical dilemma to avoid. Wherever you line up, AI is undoubtedly raising questions and making its presence known in the author community. Only time will tell how it evolves from here.