AI Doesn’t Know What’s True
I’ve been feeling conflicted about AI lately.
I read Searches: Selfhood in the Digital Age, a collection of essays by Vauhini Vara musing on what it means to be creative and human in the age of artificial intelligence. She went viral a few years ago for using generative AI to write a story about the death of her sister, a topic she had been too grief-stricken to write about herself. What Searches seems to reveal – or at least, what I took from it – is that the dividing line between artificial and human intelligence is not as stark as we might like. AI is already very good at approximating human experience in writing.
Vara finds herself taken with one particular phrase GPT-3 – a predecessor to ChatGPT – spits out when writing about her sister:
We were driving home from Clarke Beach, and we were stopped at a red light, and she took my hand and held it. This is the hand she held: the hand I write with, the hand I am writing this with.
Vara writes that this excerpt in particular caused her to reconsider the prevailing criticism of AI by writers: that it is disembodied and thus cannot describe the human experience. To me, it is very clear AI can write like a human. It can convince a reader it has experienced something. That is both impressive and terrifying to me.
I recently used ChatGPT to help me with a problem I could not solve on my own. I was trying to apply a consistent, full-page background to a website. I am not a coder, and the forums I found were incredibly unhelpful. I begrudgingly turned to the machine for help. ChatGPT basically told me to try the same thing as the forum commenters. But then, when it did not work, I told ChatGPT it didn’t work, and the machine spit out alternatives. It speculated on what could be causing the glitch. It asked me what version of the program I was using, what the template was called, and encouraged me to clear my cache and try an Incognito window.
Nothing it suggested worked, but it did figure out why it wasn’t working. It was infinitely more helpful than the forums, if only because it provided a synchronous troubleshooting process. I ended up spending less time fussing with the problem than I otherwise might have, simply because I learned sooner that it wasn’t going to work. The conversation – if we’re using that word – was comforting instead of frustrating. It felt almost human.
In this way, I think generative AI can be very helpful. But I am never going to let it write anything for me. I wish my reasoning were a little less moralistic or metaphysical than it is, but whatever: I don’t want AI to write or even edit my work because my work is my work, and outsourcing your creativity and lived experience to a machine is wrong and stupid. It’s going to break our brains. It already is. I know several acquaintances – all very smart people – who turn to ChatGPT to write their emails. No wonder someone might turn to it to write an essay too. Truthfully, much of my job could more or less effectively be outsourced to ChatGPT.
But what do we lose? Putting words down on paper for the purpose of conveying an idea is not just a means to an end, it is an act of thinking itself. The author Daniel Pink says, “Writing is an act of discovering what you think and what you believe.”
How many times have I found myself writing down an idea only to discover it was not as compelling as I thought once it went linear? Many. Writing is perhaps the most effective way to process emotion, too. What will happen to us when we stop reaching for the words that lay bare our souls? Why would you want a machine to tell you how to feel?
Vara recognizes this, and it leads to what I think is the most important revelation of the book. She writes:
If my writing is an expression of my particular consciousness, I’m the only one capable of it. This applies, to be clear, to GPT-3’s line about holding hands with my sister. In real life, she and I were never so sentimental.
AI doesn’t know what’s true, and that’s the dividing line, however faint. Writing isn’t transcription, and it’s not the inputs and outputs of data. There might be a time when we cannot effectively discern what was written by a machine. We might find ourselves being moved by the writing of a machine. The writing of a machine may help us learn things about ourselves. I cannot say whether this will be good or bad or feel any different than today. I can only say that it will not be true, in any ontological sense of the word.
As for me, right now, I will write it myself.
—
Thanks for reading.
Devon