ARTificial
I have a couple of adjacent thoughts today that circle my perspective on AI.
As I was drafting this, the lyrics from New York Minute kept ringing in my head because some folks won’t cotton to the ideas presented.
“Somebody’s going to emergency, somebody’s going to jail.”
Let’s get started anyway…
Part 1: Sentience
In the middle of the night my mind was turning over an idea, feeling its shape. I made a comment about how Large Language Models, aka AI, are being talked about, positioned, as growing so rapidly they’re nearing sentience:
…I started to wonder about how I’ve been a little worried about the prospect of one becoming sentient.
Naturally, that led me to wonder about the soul, about the invisible life force in each of us. And about how a free-thinking machine made of flesh has never been created, yet so many are willing to accept the inevitability of one made from circuitry.
While I’m not a religious person, I began to wonder about divine providence, about how a being on a higher plane could create a soul but man, mortal man, could not. Haven’t we had all the right ingredients—the science and the hubris—for at least the last century?
And if we do, why haven’t we?
I think the answer is in our inability, not in our will. To create matter, of any shape, from nothing is simply beyond our grasp.
Anything resembling it is likely to be just a parlor trick.
Maybe this is worth breaking down a bit further into its salient pieces so you’ll see what I mean.
LLMs—and the long procession toward general intelligence—are about doing things a human can do, not what being a human is.
Sentience can only exist in your subjective experience of life, your awareness, emotion, free will, and intuition. Absent those, any machine created is simply the organized mimicry of a human.
Said another way: a filing cabinet can hold a love letter but doesn’t know what it means.
To conflate the two isn’t science—it’s science fiction.
Part 2: For Technology’s Sake
At the periphery of the tech circle where I find myself, I regularly see the excitement former colleagues have about the potential of LLMs. God bless them, though, they’re building for the wow, not the why.
There’s some symmetry to the early days of the mobile app frenzy, when developers were building iPhone apps for minutiae: iBeer, Flip a Coin, and, how could we forget, iFart.
We’re in another Wild West phase and LLMs are doing things that once sounded like science fiction. There’s a lot to admire. But admiration doesn’t erase consequence.
The same forces that drive innovation often displace the people the technology claims to help. Many of these tools—like the bot taking your drive-thru order—aren’t solutions in search of a problem. They’re solutions in search of a cheaper answer to an old one, often at the cost of human jobs. Without excusing it, these technology-driven changes in efficiency are a perennial story going back to the invention of the wheel.
The excitement is real, though, and the scale of the technology is groundbreaking. But so is the mismatch between its power and its likely purpose. And by purpose, I don’t mean some altruistic promise of sharing knowledge or boosting creativity. I mean the market’s habit of slapping “AI” on everything and seeing what sticks, as long as it comes with a subscription fee.
It’s not hypocrisy to feel both things—excitement and unease. It’s just honest.
The phase of AI we’re in reminds me of this quote:
"Any sufficiently advanced technology is indistinguishable from magic."
- Arthur C. Clarke, Profiles of the Future
But LLMs aren’t magic. Not even close.
By their own definition, they’re a “generative, autoregressive language model.” That’s just an academic way of saying they use math—complex, statistical math—to predict the most likely next word given the words before it. It’s word salad with just enough dressing to pass for meaning. That would hardly be considered magic… or even thinking.
Let me see if I can break it down to an absurdly simplistic idea:
A 20 Questions machine narrows things down by asking binary questions. Predictive text on your iPhone guesses what word you’re about to type. A Tamagotchi reacts just enough to make you feel like it’s alive.
LLMs are all of those things stacked together—guessing what comes next, shaping that guess based on your behavior, and wrapping it in something that feels like a conversation. But none of them understand you. Or think like you. Or, really, think at all. Yet, they’re often positioned, marketed, like they do.
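To make the “guessing what comes next” part concrete, here’s a toy sketch in Python—nothing like a real LLM, just a bigram counter that predicts the most frequent next word from a scrap of made-up text:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# made-up text, then predict the most frequent follower. Real LLMs do
# a vastly more sophisticated version of this same guess.
text = ("the cat sat on the mat and the cat saw "
        "the dog and the cat sat on the rug")

words = text.split()
follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Greedy choice: the single most common follower.
    return follows[word].most_common(1)[0][0]

print(next_word("the"))   # cat (3 of the 6 words after "the")
print(next_word("cat"))   # sat
```

A real model does this over thousands of prior tokens with billions of learned weights instead of a lookup table, but the move is the same: predict, append, repeat.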
And they need a metric ass load of data to perform this feat. The training archive of GPT-4 is roughly 22 times larger than the Library of Congress. Put another way, that’s about 13 trillion words, give or take. That scale matters. It’s what allows these models to speak fluently on almost any topic—but it also makes their outputs feel authoritative, even when they’re wrong. And, wow, are they wrong sometimes.
Friends, let’s be honest, this evolution has been happening for a while. We’ve been using a version of this idea, if not the technology itself, for decades. Google’s original mission was: “to organize the world’s information and make it universally accessible and useful.” To me, that sounds an awful lot like what LLMs are doing. We can harp on ChatGPT—it’s the new kid after all—but few people think twice when they “google” something.
Sure, there are some significant differences like live data (Google search) versus trained models (GPT), retrieval (Bing search) versus generation (Gemini)—but maybe you, too, can see where that line might start to get blurry fast as the roads converge.
This will be especially true as LLMs begin to serve up not just what’s out there in the archive, but what you’re likely to want next because we’ve been training it, personally and freely. If we’re being picky, my iPhone already does this when it switches into WORK or SLEEP mode depending on my patterns.
In many ways, this is the next evolutionary step. Not a rupture, but a continuation. Google trained us to think of knowledge as instantly available. The iPhone conditioned us to expect our devices to anticipate our needs. LLMs just take that logic a step further: they don’t wait for us to ask, they guess what we mean, or even how we feel. The line between search and suggestion starts to dissolve. That's not necessarily sinister but it gives me pause. Maybe you, too?
So if it’s not sentience, and it’s not magic… what the hell is it?
Part 3: It’s a tool, stupid
That question led me to a post by the wonderfully articulate Ben, where he muses, “Can AI be useful to a writer?” It’s a touchy topic for many artists—writers included—as it should be. Tools like ChatGPT, Claude, Gemini, and others have left creators bewildered by the hoovering of their work into the training archive(s). These companies did it without recompense—fuck you, pay me—which is unforgivable, but still secondary to the theft itself.
Side note: Political will, and legal recourse, to protect information from being used in these systems will take years to sort out. I have no doubt it’s coming but not any time soon.
But, Ben is right when he says, “AI is not going away.” It’s only going to morph, shape-shift, and insinuate itself into more places—eventually looking nothing like the Zork-style chatbots we’re toying with today.
So, if it’s not going away, what positive use can it be to the average writer, if at all?
I like what Ben is doing, using an LLM in conversation as a means to think aloud. It’s brilliant and wholly different than how I’ve used it. His version is closer to the Star Trek version of the world than my own clickety-clack version.
And, not-so-secretly, I wish I had thought of it. I mean, my father used to walk around mumbling to himself, I could at least be talking to some *thing*.
But at the heart of it, we’re both offloading. What Ben calls Administrivia I call cruft. And when you have the world’s biggest computer at your fingertips, doesn’t it feel human to give it trivial things to do?
Here are a few examples of what I mean:
Outlines - part 1. I hate writing outlines but know I need a working list. I’ll sketch one in Evernote or Docs, then drop it into ChatGPT to organize.
Prompt: Here’s my list of X, put it in a rational order.
Prompt: Here’s a rough outline of story beats, clean it up, make it bulleted.
Outlines - part 2. I change my mind—a lot. Using an LLM to reorder lists, add new items, or help me track which chapter something happened in has become invaluable, especially as the story grows. Lately, I’ve even asked it to summarize character arcs. Sometimes it’s insightful, sometimes way off. YMMV.
Prompt: The chapter where George’s father says, “XYZ” comes too early. Move it to after scene “ABC”.
Prompt: Oh, I thought of something new to add to the chapter with [FGH], I think it fits right after [QWE].
Prompt: I don’t think character A gets enough page time, can you create a graph of how often each character appears in the story?
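For what it’s worth, that page-time question doesn’t even need an LLM. A minimal sketch, assuming plain-text chapters and a hand-kept list of names (both invented for illustration, not from an actual manuscript):

```python
# Hypothetical page-time tally: count name mentions per chapter.
# The chapter text and character names are made-up stand-ins.
chapters = {
    "Chapter 1": "George met Isla at the dock. Isla waved him over.",
    "Chapter 2": "George argued with his father while Isla watched.",
}
characters = ["George", "Isla"]

for title, text in chapters.items():
    # Naive count of exact name occurrences in each chapter.
    counts = {name: text.count(name) for name in characters}
    print(title, counts)
```

Crude—`str.count` misses pronouns and nicknames—but it’s the raw data behind the graph I’d ask ChatGPT for.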
Research. Much in the same way we don’t hesitate to Google something, I have no qualms with using the world’s most powerful computers at whim. As always, trust but verify the information.
A few recent searches:
What vitamins are in fresh corn?
Give me a grid of the annual meteor showers in North America.
Give me a comparison chart of Libby versus Hoopla.
Bonus tip: use ChatGPT to reconfigure recipes—high altitude baking, hydration, ingredient substitutions, portion size.
Punctuation/Syntax/Grammar. I try to be diligent about what I write but, well, I’m human. And, thanks to Kurt Vonnegut, I ~~rarely~~ never use semicolons. But an LLM can be an invaluable tool, given the right parameters, to find all those errors. Honestly, I don’t see it as any different from using spellcheck or Grammarly.
Prompt: Review this for punctuation/syntax/grammar errors ONLY, create a clean version and provide a list of changes.
Prompt alternate: Review this for punctuation/syntax/grammar errors ONLY. Note potential changes in BOLD.
Thesaurus. What’s more first world than using a supercomputer as a thesaurus, right?
Writing Calendar. I’d be nothing as a project manager if I didn’t track time like a dog watching a leash. So I’ve been using GPT to keep me honest about how much time I’ve given myself to finish a project. For ISLA, I regularly upload a PDF of my manuscript that includes the outline. From that, it can calculate how many chapters remain and track my deadlines. I tell it where I am in the current chapter, and it tells me how many days I have left to finish. Simple, effective, and built on the best computation money can rent.
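The arithmetic behind that pacing check is almost embarrassingly simple. A sketch with placeholder numbers (the dates and chapter counts here are invented, not my actual schedule):

```python
from datetime import date

# Hypothetical pacing math: days remaining divided by chapters
# remaining. All figures below are placeholders, not real project data.
deadline = date(2025, 12, 1)
today = date(2025, 9, 1)
total_chapters = 30
chapters_done = 18

days_left = (deadline - today).days        # 91
chapters_left = total_chapters - chapters_done  # 12
print(f"{days_left} days for {chapters_left} chapters "
      f"= {days_left / chapters_left:.1f} days per chapter")
```

The LLM’s only real contribution is reading the manuscript PDF to get those two numbers; the division it does afterward is grade-school stuff.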
Again, all of these things are part of the structural underbelly of writing and not the prose itself.
Part 4: We are (not) the robots
It’s an unenviable position to be in as a writer, to have to say out loud, as proof, that you’re human. Just as Ben was clear in saying he doesn’t use AI to write, I’m in the same boat.
I’ve certainly taken it for a spin, turned a few circles in a parking lot and left it running with the keys in it. What I found, though—if you’ll forgive the mixed metaphors—was like putting on a mask or an ill-fitted suit.
It just didn’t feel like me.
I don’t always like the way I write. I’m not always in that flow state where the words just pour out in an effortless, coherent stream and the golden light sparkles on my punctuation. But, damnit, those are my words. I chose them in that order for a reason and, through some effort and learning and beating my head until the keyboard cries… I’ll make that story do what I want. I’ll get there.
That’s something a machine can’t fake. Not really. It doesn’t struggle to say something. It doesn’t doubt itself. It doesn’t hate the first draft or pace around the room or cut the line it secretly loved because it slowed the whole damn paragraph.
A machine can mimic the shape of meaning. It can guess what words go next. But it can’t want to say something. It can’t feel that ache in the gut, that unresolved thing whispering: not yet, not quite, say it again, but better.
That’s the difference.
That’s sentience.
It’s not just perception or language or logic—it’s the invisible tension between all three. It’s the spark that says: this matters. That’s what I hope to bring to the page, even on the days I’m not sure it’s there at all.
And sentience isn’t output. It’s intent.