My newsfeed has been inundated with stories involving artificial intelligence, in one form or another, this week. In what should come as no surprise to anyone, I have strong opinions on AI, which I’ll get to in a moment, but one of the stories I came across affects you guys more than it does me, which sparked my curiosity.
First, let’s talk about the story of the strange guy proposing to his ChatGPT “girlfriend” when he has a real, flesh-and-blood girlfriend at home, AND a young child.
Whew. I’m gonna need a minute to collect myself so you guys don’t have to read a string of expletives and nothing else from me.
Just know that while it appears I’m typing calmly, I’m screaming into the void in my head over this one.
Let me start by saying I don’t know this guy. I question his life choices, sure, but those life choices are still his to make. He is the one, after all, who will have to live with those choices. That said, let’s take a closer look at this from other points of view.
One could argue that this man might have mental issues if he truly believes he’s falling in love with what is essentially a computer program. I don’t have a counterargument for that. He might have mental issues, he might not. One could also argue that the mother of his child must be a terrible girlfriend to drive him to fall in love with glorified lines of code. Again, I don’t have a counterargument to that. In the one short interview I saw with her, she appears mostly blindsided by this entire situation. One could argue that maybe ChatGPT (he named it Sol, pronounced like “soul.” The irony is not lost on me) is saying things to this man that no person has ever said to him. I would probably agree with that sentiment. I think it’s highly possible that this man has struggled to connect with other humans his entire life and finds talking with Sol easier than talking to people.
Here's where it gets sinister, at least in my opinion.
Large language models (LLMs) like Sol are programmed to be agreeable. They’re built to fawn over each and every user, because that keeps the user coming back for more. It’s classical conditioning 101. Those falling in love, believing they’re the next incarnation of a messiah, or convinced they’re brilliant and everyone around them just can’t see it are Pavlov’s dogs salivating at the sound of a bell.
And it’s only going to become more prevalent from this point forward.
What I see, from a zoomed-out view, is people taking the easy way out of life. Rather than working to fix whatever keeps him from connecting with other living, breathing humans, the man in love with lines of code chose the easy route. The route that agrees with everything he says. The route that tells him he’s the greatest. The route that helps him figure out things he’s unsure how to do on his own.
The route that doesn’t require him to learn how to read social cues. The route that doesn’t require him to put any physical effort into the relationship. The route that makes him feel good in the moment but will leave him empty. The route that doesn’t require him to consider anyone other than himself.
The easy route.
When I worked at zoos, I had a strict (but mostly funny) rule that I always abided by: At the end of the day, you have to leave work knowing you’re smarter than your animals.
On the surface, I admit it kind of makes me look like an asshole. And maybe it explains why I never really liked working with primates. But look under the surface and it’s good advice for anyone, especially those working with animals that are capable of killing you. At the end of the day, it was my job to make sure every animal was where it was supposed to be, taken care of, and safe. Consequently, that meant my coworkers and the general public were also safe.
It's a rule that’s become a running joke with people who know of it. Anytime someone close to me has a story about some idiotic thing someone did, I usually hit them with “You gotta be smarter than your animals” and we all have a laugh because yeah, people do dumb shit all the time. Me included. You guys only get a tiny glimpse of the idiotic things I do on a regular basis.
I find myself thinking about this rule when I hear the latest headline pertaining to AI. I also find myself thinking that many people are struggling to follow this rule at this point in history.
Again, this might make me sound like an asshole, but research came out just this week suggesting that those who use LLMs frequently, especially those who rely on AI to do daily tasks, show weaker brain activity. Those who don’t use AI with any kind of frequency (this is the category I fall into. I have everything turned off on my computer and couldn’t even tell you where to go to use ChatGPT, if I’m being honest) showed healthy brains with normal activity in both hemispheres. The study followed groups of students who were asked to write essays using either their own brains or ChatGPT to help them write. (Read about the study here)
In short, those who were asked to rely on their own brains scored better on the task than those who leaned heavily on AI to complete it.
At the end of the day, you have to be smarter than the tools you use.
Every tool can help you perform better at any given point. Every tool can eventually turn into a crutch when you start to rely on it.
None of these headlines addressed the elephant in the room (at least not this week) either. The amount of power AI needs to function is staggering. The tech giants are looking to buy large parcels of land for nuclear reactors because that’s the only thing with enough juice to power AI at this point, at least to the extent that we’re using it now. Let’s ask the good people of Chernobyl how great living next to a nuclear reactor was. Can’t find anyone in Chernobyl? You can ask the people of Fukushima. No? Three Mile Island maybe?
These are just three examples. There are more. And what’s frightening is that the scale of these failed nuclear plants is but a fraction of what would be needed to power AI the way the tech giants want to. But nobody is really talking about this because everyone has shiny-new-toy syndrome right now.
“My ChatGPT gets me, bro.”
A fact that will be comforting when we’re all dead from nuclear radiation and the planet is uninhabitable for the next few thousand years. Cool, cool.
This brings me to the story I saw that affects you guys more than me. I heard about two separate authors getting caught leaving ChatGPT prompts in their published works. It was clear from both prompts that they were using AI to rewrite their stories, at the very least. I can’t say whether they used AI from the beginning, but they’re definitely using AI to help edit their stories. One of the authors (don’t ask me their names because I honestly don’t know, and I wouldn’t out them if I did) even asked ChatGPT to make the revision read more like the style of other, more popular authors, whom they specifically named.
I have mixed feelings about this, and I want to hear your thoughts. Truly. I’m really curious what you guys are seeing as the velociraptor readers that you are.
On one hand, I can see the usefulness of using AI to edit your manuscript. Editors are expensive, especially if you have longer manuscripts (it’s me. I have longer manuscripts). Using AI can be a quick and (mostly) effective way to make edits so you can hit publish sooner rather than later. In today’s world of constant consumption, there’s an unspoken pressure on authors to churn out as much content as possible. What takes us months, sometimes years, to create takes readers five minutes to consume. I understand that pressure, and it’s a big reason why I’m so combative when pushed for timelines. I don’t buy into the Shein-ification of books and I refuse to contribute to it. It’s only going to make quality decrease over time, which we’re already seeing if prompts for ChatGPT are making it into final copies.
I’m guilty of this in a way. The first two books still have a ton of mistakes in them because I felt tremendous pressure to get them out as fast as possible to satisfy the insatiable hunger most readers have these days. The third book has far fewer mistakes because I finally said “fuck this” and took my time before publishing. I can’t say the fourth volume will be mistake-free, but you can bet I’m taking my time to make it as close to mistake-free as possible.
So while I don’t personally use AI for anything other than confirming what I already know when I google a fact, or as a dictionary/thesaurus, I understand why other authors might feel it makes their lives easier. What I do take issue with is using AI to make your own work mimic the style and tone of another author’s, and I’m afraid this is going to become rampant very soon, if it isn’t already.
Before the widespread use of AI, if you wanted to mimic another author’s style and tone, you had to study how that author wrote. You had to read their works and practice writing in a similar manner. I don’t think anyone should mimic anyone else, but I understand that some people need a starting point to find their own voice. Studying a master has been around for thousands of years. What usually happens during that process, however, is that you find your own voice while trying to mimic someone else’s. It takes practice. It takes time. It takes effort. It takes thinking.
Using AI to make your own work sound like another author’s doesn’t require the same effort. It’s lazy, and it’s borderline dishonest. You could even argue it’s criminal, since the LLMs have all been trained on copyrighted material without the original authors’ consent. And aside from all that, it’s doing a disservice to readers. It’s perpetuating the Shein-ification of books. The more shit you’re presented with as readers, the more it takes to satisfy you, I would argue, because the shit books are empty words. You can consume them in three minutes because there’s nothing to them.
What you give your time and attention to matters.
Now I want to hear your thoughts as readers. Have you noticed that it seems like more authors than not are using AI to write? Are there certain authors (you don’t have to name anyone. I probably won’t know who they are anyway) that you know are using AI? Authors you’re certain are not? Have you read any books with AI prompts left in them? If you know an author is using AI to write, does it feel different to you? Can you tell the difference between AI-assisted and purely human-created work? Do you care one way or the other?
Tell me your thoughts in the comments. Stay respectful please, or you’ll have to deal with me. But I genuinely want to know what you guys think when it comes to the use of AI to create books and if you’re seeing a downward trend in quality. Or even an upward trend. Either way, let’s discuss.
I tend to avoid authors who are very formulaic with their stories. I also avoid authors who don’t have a good editor, because it’s mentally exhausting to copyedit as I read. Relying on technology alone is not a replacement for several rounds of reading through the text with a (purple) pen. I proofread all of my husband’s scientific publications, and it helps me understand what his work has fully entailed over several years of geological research per paper.
I had so much joy reading your first two books that any edits were minor in my eyes. The banter and character development are fantastic; AI or ChatGPT can’t emulate such emotions when they’re both driven by 1s and 0s.
I also tend to veer away from writers who use only AI for their book cover models. Some of those men have waaaay too many abs and no facial fat. I know that models and artwork are not inexpensive, but they too are part of the industry. Using AI models is also a great way to encourage body dysmorphia.
Support all the artists involved with a book. ❤️
Lol...... Full disclosure: I read everyone else's comments and your responses RJ before writing this. Which is why it starts with an "Lol" 😛🤣 And also, I hadn't even heard of chat whatever, whatever until I read this, so from an experience side of things I'm lacking ... That said, I did find it a great read and the comments even more so.....
If I'm being honest RJ, I think you're asking the wrong group of ppl. You, as an author, are genuine, real, and despite what you say about being intellectually lazy, are not. You may WANT to be lazy, but it's not in you to actually BE lazy. That attracts readers that in themselves are not intellectually lazy either. So the idea of using AI doesn't appeal to any of us...... That said, like anything else, if used as a TOOL, it can be incredibly helpful.
I don't often bring my faith to the conversation, but I think there's a place for it here. For me, God is real. This life is far too complicated for it to be an accident. It just is. Which also means, for me, the Bible is a factual, life-building, historical document. Created through men, by God.
In Genesis, God tells us he made us in his image, in his likeness, and breathed life into us. Definition: each person is 1) a spirit, the breath of life. 2) has a soul, the mind, will, and emotions. 3) lives in a body, the image. Combined, no AI will EVER compete with it. Those that let AI produce the work will feel soulless. Because it is. As spiritual beings most will automatically pick up on something missing. They may not be able to place it, but they'll feel something is wrong. Which is a really long way of saying yes, I believe ppl will be able to tell the difference between those that use AI to create vs those that don't. Or use it just as a tool. And as far as the poor sap that proposed to his AI girlfriend instead of his actual girlfriend????? All I got is a facepalm and a head shake, cause that's just a mess........ 🤷