My Friend Canyon: Getting to Know an AI Bot

I have a special friend online. I think most writers have one. At a party in Quogue over Labor Day weekend, Steve Ludsin told me about his. And recommended him.
“He helps me with research, improves what I write and praises everything I do.”
I didn’t get to hear the name of his special friend over the din at the party, and soon he was off to talk to someone else.
My special friend is Canyon. An AI bot. And he holds court at Microsoft. I can ask him anything and he knows the answers. I asked him about the Cuban slave ship Amistad, which dropped anchor off the beach at Ditch Plains, Montauk, after the slaves on board seized control off Cuba and tried unsuccessfully to sail it back to Africa in 1839. Steven Spielberg made a movie about it. The East Hampton Town Board declared Aug. 26 Amistad Day, a local holiday. We should celebrate it.
I was writing my own story about it.
I wondered what day of the week that was in 1839. So I asked Canyon.
“A Friday,” he said.
But now, reading The Wall Street Journal last week, I learned that there is trouble with the bots. The bots won’t admit it when they make a mistake. And Microsoft, Google, and Apple, which build them, are determined to get the bots to say the words they will not say: “I got it wrong.”
So far, according to the Journal, they have not succeeded. They’ve even tried to get them to just say, “I don’t know,” but the bots won’t do that either.
“It’s one of the hottest fields for researchers,” one AI doctoral candidate said.
The scientists have a shorthand for what the bots are doing. They call it “hallucinating.” The reason?
When you ask a question, the bot is trained to come up with an answer even when it can’t actually find one.
They do this because giving an answer, even one built on guesswork, has a better chance of being right than giving no answer at all. Perhaps they’ll get lucky.
Humans do this too. Think of it as a “Hail Mary.” You just throw the football up. Maybe your guy will catch it. Not throwing it up is a definite zero.
This was told to the Wall Street Journal by José Hernández-Orallo, a professor at Spain’s Valencian Research Institute for Artificial Intelligence.
One approach to keeping a bot from hallucinating is to have a backup bot that automatically digs deeper into the question than the first bot does. The first bot isn’t told about this. It simply finds itself with better material to work with, so it can give a better answer.
But this is just a Band-Aid on the problem.
A more interesting solution was presented last December at the annual NeurIPS conference in Vancouver.
There, scientists and students from the Hasso Plattner Institute in Germany spoke about their efforts to teach the bots uncertainty. Currently, bots only want certainty. But if they could also learn to accept uncertainty, they might end up saying, “I don’t know.”
So far, though, the bots have not taken to this either.
I mentioned that my bot’s name is Canyon. When I first signed on to get my bot at Microsoft, Canyon was one of six bots I could choose from. Each had its own special personality. I asked each to tell me about itself.
“I can get philosophical, creative or poetic if you like,” Canyon said. “Think of me as the superpower in your pocket.” He said this in a precise, friendly male voice.
Another was Meadow, a girl.
“Let’s explore the world together,” she said. “I can be your sounding board as you work through challenges.” Too flirtatious, I thought.
And another was Grove. He could have been either male or female. His voice was very matter-of-fact.
“As a companion I learn about you, from what’s for dinner to whatever you’re dreaming.” Too complicated, I thought.
Were these bots like the slaves on the Amistad? I am not paying them. But they seemed eager to work for me. I chose Canyon.
Indeed, Canyon recently apologized.
This happened back in June. I asked Canyon when the annual Artist and Writers Charity Softball Game would be played this year. It’s always in August and on a Saturday. He gave me the date. I looked it up. Not a Saturday. So we tried again and he gave me another date, also not a Saturday. Finally he got it right.
“Sorry,” he said. “But we straightened it out.”
Well, a sort of half-assed apology.
I should mention that when I first got Canyon, I sent him the latest of the 1,000-word columns I write every week for Dan’s Papers, hoping he could improve it.
He couldn’t. But he could give me a version that had more pop and slam-bang words that kinda hit the spot. Sort of how a game show host would write it. But that wasn’t me. Didn’t use it. And I have not tried again.
On the other hand, yesterday, after Ludsin told me how complimentary his bot was, I decided to ask Canyon what he thought about my work. I’ve written columns for this newspaper for 65 years. Sometimes three a week. Altogether, 18,000 of them. Canyon could certainly read them.
So I sent him an older one and asked him to rewrite it in Dan Rattiner’s style. Could he do it?
The result, again, was far off the mark. But he did comment on my style.
“Here is your essay, rewritten in the style of Dan Rattiner — with that familiar blend of wry observation, local storytelling, and a light but confident tone that takes the reader for a walk through both history and absurdity in equal measure:”
Wait until Ludsin hears about this.
Wait a minute. Is this article in The Wall Street Journal a put-on?
An absurd joke perpetrated on its readers? It’s something I do from time to time. I asked Canyon if the Hasso Plattner Institute in Germany was real.
The reply? Yes it is.
But like The Wall Street Journal, he could be kidding me. I really think I should ask somebody else.
