By Kindall Bramble - February 27, 2023


Overview:

Introduction to Artificial Intelligence

Artificial intelligence was once something we only saw in movies, a staple of the science fiction genre. In today’s world, however, it has become a controversial topic. Some see it as a tool to help writers, while others believe it will be used to phase human workers out of their jobs. Because the field is still so young, it is difficult to tell how artificial intelligence will affect us far into the future.

When many people think of artificial intelligence, they think of talking robots. Although this notion is likely inspired by old science fiction movies, it is somewhat rooted in truth: we now have online AI models that fill this role. They are easy to interact with and give impressive results. All the user has to do is type a sentence and send it to the artificial intelligence; the program then draws on its training data and writes something it predicts will fit. Of course, it will not always say something that makes sense. Artificial intelligence works from its past experience, so if you want it to do something entirely new, it will be less effective. Additionally, not every artificial intelligence is the same, and some are better at their jobs than others.

In the past, people feared computers and the idea of a machine making decisions for them. Nowadays, many consider artificial intelligence revolutionary, though a large number of people still believe it will do more harm than good. The field of AI writing, however, is still new, and we have little information on how it will affect technical writing. Thankfully, artificial intelligence has already been integrated into many other fields that we can analyze.

Based on my past experience with artificial intelligence, I believe that humans will exploit AI writing for unethical purposes. I have seen this happen time and time again with several different types of AI, and understanding how these past incidents occurred can show us how writing may go down the same path. During my research, I sought out articles about AI programs and services that were popular in the past few years. In this article, I include information on three main types of AI: AI art, AI chatbots, and AI writing services. After researching all three, I have concluded that although AI can do amazing things, it is highly susceptible to abuse by humans. Using AI in technical writing will require us to be careful and ethical.

AI Art

Art created by artificial intelligence has been a hot topic for the past few months, and many people, artists and non-artists alike, have joined the debate. Several AI programs are used to create art, but the four most popular are Stable Diffusion, DALL-E, NovelAI, and Midjourney. The model I will discuss is NovelAI, as it is the one I am most familiar with. While NovelAI started as a story-generation service, it is much better known for generating art, specializing in the Japanese cartoon style known as anime. Much of the computer-generated artwork in the anime style comes from NovelAI.

AI art is an important part of the AI writing discussion because it demonstrates one major negative aspect of AI: its ability to cause harm. This was shown in an October 2022 incident in which NovelAI was used to plagiarize. The article “Thief Steals Genshin Impact Fan Art Using AI, Demands Credit From Creator” by Sisi Jiang explains the situation in more detail.

An artist going by the Twitter name “AT” was streaming their drawing process on the popular website Twitch (Jiang, 2022). One viewer screenshotted the work in progress, finished it with NovelAI, and published the AI-generated image online several hours before the original artist finished the piece. The viewer then accused the original artist of stealing, pointing to the later posting time as proof. Other Twitter users quickly learned of the situation and came to the original artist’s defense. In the end, the thief deleted their account, and the original artist’s work received a boost in popularity (Jiang, 2022).

The original image (right) next to the AI-generated picture (left). Notice that the real drawing has much more detail and texture than the AI-generated one. Comparison taken from the Twitter user @GenelJumalon.

This incident shows a side of artificial intelligence that many overlook. While AI may not be harmful in and of itself, it is still controlled by humans, which means it can be, and has been, easily exploited. Jiang (2022) described AI art as “an ethical landmine,” writing, “I’m a bit of an outlier among art enjoyers in that I think the technology itself has fascinating potential for creating ethical art with creators’ permission. But as long as artists don’t have legal protection against their art being stolen and mangled in unscrupulous ways, that day is still a long way off.”

AI Chatbots

Another popular form of artificial intelligence is the chatbot. These, as the name implies, are services that let you send messages to and receive replies from an AI. One of the most infamous examples is Tay, a chatbot developed by Microsoft that started as an interesting experiment but quickly turned into something more sinister. Most of my information about Tay comes from the IEEE Spectrum article “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation” (Schwartz, 2019).

Tay was designed to learn from natural conversations. Whereas most earlier AI was trained on narrow data sets, Tay was given a much wider one, including live conversations with users on the website Twitter (Schwartz, 2019). This decision backfired badly. Mere hours after her release, she began writing highly offensive and hateful things, all supplied by Twitter users who deliberately turned her into a hate-spewing machine. Hate groups from other websites coordinated an effort to make her say horrible things, and Microsoft was forced to take her offline shortly after her public release (Schwartz, 2019).

Tay's Twitter profile before it was removed.

Tay is yet another AI that was used for harm. While its ambitions were admirable, Microsoft had not considered the consequences of letting anyone contribute to Tay’s training. If AI service providers do not tightly control their models’ training, it may lead to another incident like Tay. An AI could slip something horribly offensive into a paragraph, and it may go undetected if the reader is not careful.

AI Writing

AI writing is the newest artificial intelligence craze, gaining both popularity and infamy. The new AI service ChatGPT lets users input a topic statement, and the AI writes something for them; the length can vary from a few sentences to several pages.

However, programs that write passages for users are highly discouraged in academic settings. Many colleges and universities across the country have denounced programs like ChatGPT. Students are being caught using the program on their papers and presentations, and most professors consider this cheating and punish it harshly. The purpose of many college assignments is to get students to think for themselves; if a website is doing the thinking for them, they are not submitting original work. Educators’ refusal to take work written with ChatGPT and related programs seriously may foreshadow how businesses will feel about them in the future.

An article from Business Insider titled “Two professors who say they caught students cheating on essays with ChatGPT explain why AI plagiarism can be hard to prove” shows some of the problems AI writing can bring. The main one is misinformation. While the essays the service generates are grammatically well written, that does not mean the content is accurate. One professor quoted in the article said an essay he received was written very well but fell apart upon closer inspection: many of its claims were nonsensical or “just flatly wrong” (Nolan, 2023).

Misinformation could be a terrible consequence of using AI in technical writing. As technical writers, it is our duty to do our research and present information accurately. Things could go terribly wrong if we, for example, put something untrue in a car’s owner’s manual. For this reason, we would have to be very careful when using AI generation software; proofreading the AI’s output multiple times could end up taking as much time as writing it ourselves.

Artificial intelligence also creates another problem for writers: a lack of empathy. The very thing that makes artificial intelligence different from humans may be what holds it back: its output is not written by a human. For many people, writing is a way to communicate genuine feelings, and using artificial intelligence can take that away. This is what many people felt about a recent incident described in the CNN article “Vanderbilt University apologizes for using ChatGPT to write mass-shooting email,” which a fellow student pointed me to after hearing of what happened. Vanderbilt University released an email expressing condolences in response to the recent shooting at Michigan State University (Korn, 2023). However, many people took issue with the fact that the email was written using ChatGPT (Korn, 2023). Doing so takes the human element out completely, and in an email meant to show emotion and express sadness, many felt that having a robot write it was completely inappropriate.

Conclusion

While AI is a new and intriguing technology, we have to be very careful while using it. It is extremely easy for people to abuse AI for less-than-ethical purposes; history has shown that they will use it to spread hate and to take credit for work they did not create. Additionally, academics are hesitant to accept work created with AI. AI may one day help technical writers ease their workload, but we must stay vigilant and remember that AI is not omniscient: it works from what we give it and has no way of knowing what we do not tell it. Our responsibility is to make sure our use of AI stays within the ethics of technical writing.

References

Jiang, S. (2022, October 13). Thief Steals Genshin Impact Fan Art Using AI, Demands Credit From Creator. Kotaku. https://kotaku.com/genshin-impact-fanart-ai-generated-stolen-twitch-1849655704
Korn, J. (2023, February 22). Vanderbilt University apologizes for using ChatGPT to write mass-shooting email. CNN. https://www.cnn.com/2023/02/22/tech/vanderbilt-chatgpt-shooting-email/index.html
Nolan, B. (2023, January 14). Two professors who say they caught students cheating on essays with ChatGPT explain why AI plagiarism can be hard to prove. Business Insider. https://www.businessinsider.com/chatgpt-essays-college-cheating-professors-caught-students-ai-plagiarism-2023-1
Schwartz, O. (2019, November 26). In 2016, Microsoft’s racist chatbot revealed the dangers of online conversation. IEEE Spectrum. https://spectrum.ieee.org/in-2016-microsofts-racist-chatbot-revealed-the-dangers-of-online-conversation