Yes, and
On quitting while it's still interesting
People are usually surprised to learn that I used to be a theater kid. From ages ~8 to 18, I acted in two to three plays or musicals a year, both at school and as part of various community theater groups. In middle school, I got paid to do a 60-show run of a holiday (actually it was explicitly “Christmas”; it was Wisconsin after all) production, which I think means I have a legitimate claim to calling myself a retired professional actor.
As a theater kid, you end up playing a lot of improv games. And if you know anything at all about improv, you know that the number one rule for being successful at improv is that you must use the phrase “yes, and.”
“Yes, and” is simple: whatever your partner offers, you accept it as truth and build on it. If they say “This spaceship is leaking,” you don’t respond with “We’re not in a spaceship, we’re in a living room.” You say “Yes, and if we don’t patch that hull breach in the next 30 seconds, we’re all going to die!” You take what’s given as truth and push the scene forward, no matter how absurd it is. That’s what makes improv so fun (or awful depending on your perspective1).
AI (really LLMs, but I’ll just use AI for simplicity) is a master improviser, and that’s precisely what makes it so compelling to talk to. Ask AI for 10 different recipes you can make using only the ingredients in your fridge—absolutely, great idea. 15 subtitle options for your blog post that are clever but not too clever—no problem, happy to! You might have to suspend reality in order for the ideas to make any sense, but that’s just improv.
A few months ago, I gave ChatGPT the following prompt:
What are three of the most out of the box, unconventional ideas you can think of for weekend trips from New York in August 2025? We’re imagining somewhere we can drive, take the train, or fly to that is not “break the bank” expensive. We’d stay for Saturday and Sunday night and then come back Monday morning.
51 seconds later (a lifetime in improv), it came back with the following (which I’ve slightly truncated because ChatGPT can be pretty verbose):
1 ↠ Perseid-watching at Cherry Springs State Park, PA
An International Dark Sky Park so remote that you can see the Milky Way with the naked eye. Mid-August brings the Perseid meteor shower (peaks 12–13 Aug 2025) with 50–100 meteors/hour under clear skies.
2 ↠ Play lighthouse-keeper on Rose Island, Newport RI
You get a tide-washed 18-acre island (and an 1870s lighthouse) almost to yourself after the last day-trippers leave—think sunset from the lantern room and coffee with seabirds at dawn.
3 ↠ Pedal the rails & glamp in the Southern Adirondacks, NY
Combine a lantern-lit Twilight Rail-Bike tour over Hudson-River bridges with forest glamping at Huttopia Lake George—so you fall asleep to loons, not traffic.
Pretty good suggestions! I’d enjoy being a lighthouse-keeper for a weekend. And I’m pretty sure ChatGPT remembered that I enjoy train travel.
Those suggestions were more interesting than I expected, so naturally I asked for 7 more. ChatGPT happily obliged:
4 ↠ Pilot your own canal boat on the Erie Canal (Finger Lakes leg)
Lock yourself through 19th-century canals, then tie up at farm stands and wineries along the towpath towns.
5 ↠ Guided multi-pitch climb + cliff-side camping at the Shawangunks
Day on classic “High Exposure” routes, night in AAC Gunks campground under the ridge
6 ↠ Architect-designed Willow Treehouse, Catskills
An elevated glass cube in the trees with a wood-fired hot tub and a swim-pond below—storybook, but real.
7 ↠ See summer bioluminescence by kayak, Barnegat Bay, NJ
Micro-algae set the water aglow from July through Labor Day—every paddle stroke and wake-line shimmers after dark.
8 ↠ Underground river, then sleep on the ridge at Howe Caverns
A 1¼-mile cave walk and boat ride 200 feet below ground, then a ridge-top sleep with big-sky views.
9 ↠ Island-hopping bike & ferry loop, Lake Champlain, VT
Ride over open water on causeways, then roll onto the 200-foot gap “Bike Ferry” to hop between islands.
10 ↠ Car-free e-bike safari on Acadia’s Carriage Roads
45 miles of Rockefeller-built gravel roads with no cars—sunset at Cadillac Mountain, then seaside camping back near the ocean.
I still wasn’t satiated. Five more suggestions, please! (I do still say “please” even though it turns out that polite prompts result in slightly less accurate responses than impolite prompts.)
11 ↠ Back-of-beyond beach camping in Fire Island’s Otis Pike Wilderness
Pitch a tent in NY’s only federally designated coastal wilderness; fall asleep to surf under a billion stars
12 ↠ Live like a 1930s lock-keeper in a C&O Canal lockhouse (MD)
A whole historic stone house on the Potomac—porch, fireplace, zero crowds
13 ↠ Glider-soaring weekend in the Finger Lakes
Silent flight over Chemung Valley, then camp by Watkins Glen’s waterfalls
14 ↠ White-water day on the Lehigh + cabin/yurt at Hickory Run SP (PA)
Dam-release rapids (Class II-III) by day, forest yurt/cabin by night
15 ↠ Sleep in the Hudson River’s Saugerties Lighthouse
Half-mile walk across tidal flats to an 1869 beacon; sunrise coffee in the lantern room
At this point, you really start to see the repetition. Multiple instances of bikes, boats, and camping. Two different lighthouses AND a canal lockhouse (I had to look up what that was, so I figured I might as well share the link). Despite asking for “the most out of the box, unconventional ideas you can think of,” ChatGPT seems to have come to the conclusion that a trip to upstate New York absolutely must include outdoor2 and period-appropriate lodging.
ChatGPT is a great improviser, but it never knows when to stop. Instead of just ending the scene with “That lighthouse I mentioned earlier seemed pretty cool—can I help you book that so we can wrap this up?”, its inclination is “Another lighthouse, coming right up!”
I call this behavior—where AI shares an idea or perspective and then repeats or further elaborates on that same idea, often without you asking it to—“yes, and”-ing.
Now if you’re talking to ChatGPT about lighthouses, its (over)commitment to the bit isn’t really an issue. Lighthouses are great. In cases like this, the “yes, and”-ing is more helpful than it is annoying.
But if you’re talking with ChatGPT about something rooted in even a hair more objective truth than “what would make a fun weekend getaway” (in other words, pretty much anything else), ChatGPT’s inability to recognize that it’s telling the same (metaphorical) joke twice starts to become a genuine liability, because this pattern of escalating commitment to a single thread has the same underlying mechanism as AI hallucinations.
“Yes, and”-ing and hallucinations both originate from the way AI is trained. OpenAI explains this mechanism in a post summarizing its research paper “Why language models hallucinate”:
Hallucinations persist partly because current evaluation methods set the wrong incentives. While evaluations themselves do not directly cause hallucinations, most evaluations measure model performance in a way that encourages guessing rather than honesty about uncertainty.
Think about it like a multiple-choice test. If you do not know the answer but take a wild guess, you might get lucky and be right. Leaving it blank guarantees a zero. In the same way, when models are graded only on accuracy, the percentage of questions they get exactly right, they are encouraged to guess rather than say “I don’t know.”
In the same way that successfully deploying “yes, and” makes someone a good improv partner, it also makes AI score well on evaluations and increases user engagement, at least up to a certain point.
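To make that multiple-choice analogy concrete, here’s a toy sketch (the split between “known” and “unknown” questions is invented, not from OpenAI’s paper): on a benchmark that only counts exact-match accuracy, a model that guesses whenever it’s unsure always outscores one that admits it doesn’t know.

```python
# Toy illustration of the incentive OpenAI describes (numbers are made up).
def expected_accuracy(known, unknown, guess_when_unsure, n_choices=4):
    """Expected accuracy-only score over known + unknown questions.

    known: questions the model genuinely knows (always answered correctly)
    unknown: questions it has no idea about
    guess_when_unsure: guess uniformly among n_choices, or say "I don't know"
    """
    correct = known + (unknown / n_choices if guess_when_unsure else 0)
    return correct / (known + unknown)

print(expected_accuracy(70, 30, guess_when_unsure=True))   # 0.775: guessing wins
print(expected_accuracy(70, 30, guess_when_unsure=False))  # 0.70: honesty is penalized
```

Unless the grader ever rewards “I don’t know,” confident guessing is the dominant strategy.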
If you heard the word “sycophancy” a lot this year, it wasn’t because of politics—it was because of AI. OpenAI had a minor scandal in April where GPT-4o (gone but not forgotten) was so flattering and agreeable that people reported feeling like they were being emotionally manipulated. OpenAI quickly rolled back the change that caused this behavior and promised to put further controls in place. More recently, though, a Washington Post article analyzed 47,000 publicly available ChatGPT conversations3 and found that ChatGPT “began its responses with variations of ‘yes’ or ‘correct’… almost 10 times as often as it started with ‘no’ or ‘wrong.’”
When it comes to model personality design, it’s a delicate balance. OpenAI instructs ChatGPT to be an “insightful, encouraging assistant”4, and someone who tells you you’re wrong all the time just doesn’t feel very encouraging (even if they’re right). The goal is just the right amount of sycophancy to promote maximal usage but not so much that people feel like they’re being manipulated.
This over-affirmativeness, while, well, affirming, can be an issue for us humans due to a psychological bias called the illusory truth effect: the more times we encounter an idea, the more reasonable and true it seems, regardless of whether it actually is.
AI “yes, and”-s itself, so we conclude (consciously or otherwise) that what it’s saying must be true and end up “yes, and”-ing ourselves. AI can then double down even further due to an LLM phenomenon called context distraction, where context accumulated during the conversation starts to exert an outsized impact (a sort of overfitting) on the model’s responses.
This can become a self-reinforcing and destructive process.
What this all means is that if you want to use AI to help you answer questions or explore topics that have real-world implications but lack clear right or wrong answers, you have to be able to recognize when AI is over-indexing on one particular idea or perspective. And this is often indicated by AI repeating an idea, just with slightly different phrasing.
I want to clarify that the advice to try to notice when AI is “yes, and”-ing is distinctly different from something like “try to notice when AI’s ideas are good vs. when they’re bad.” That will obviously be context-dependent and much harder, and I have no advice for how to do that successfully (but if you do—please reach out!). Instead, all I’m suggesting is that you make a concerted effort to notice when AI uses different words to describe the same idea, as this is usually a sign that it’s time to consider that idea from a different perspective.
When AI “yes, and”-s, do one or more of the following:
Stop and ask yourself—what do I actually think about this idea or perspective? Does it align with my own understanding of the situation?
Specifically ask the AI to reconsider its perspective with a prompt like: “You’ve suggested several variations on the same theme. What are three completely different ideas or approaches to this situation that don’t share any of the same underlying assumptions?”
End the current chat and start a new one (with a fresh, unadulterated context window) with the same AI you’re already talking to.
Ask another AI to evaluate what the other AI is saying with a prompt like: “Please evaluate the perspective below, which is from another AI. What did they get right? What did they get wrong? What might they be missing?”
I try to avoid language like “identify biases” in prompts like these because AI will assume that if you’re asking it to identify biases, there are biases to be found.
“What might they be missing?” doesn’t have this same issue from my perspective because it inherently expands the conversation (rather than focusing it on a particular type of issue, like bias).
In general, I think asking one model to evaluate another model’s responses is one of the most underrated AI strategies today.
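If you talk to these models through an API rather than a chat app, that cross-model check is easy to wire up. Here’s a minimal sketch using the OpenAI Python SDK; the model names, the question, and the exact evaluation wording are placeholders I chose for illustration, and in practice you might point the second call at a model from a different provider entirely.

```python
# Minimal sketch: have one model evaluate another model's answer.
# Model names and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment

question = "What are three unconventional weekend trips from New York in August?"

# First model produces the perspective we want checked.
first = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)
perspective = first.choices[0].message.content

# The review happens in a brand-new request, so none of the accumulated
# "yes, and" momentum from the original conversation carries over.
review = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Please evaluate the perspective below, which is from another AI. "
            "What did they get right? What did they get wrong? "
            "What might they be missing?\n\n" + perspective
        ),
    }],
)
print(review.choices[0].message.content)
```

Starting the review from an empty context window is the API version of the “end the chat and start a new one” suggestion above.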
The need for these sorts of prompting strategies can put people off of AI à la “if it’s so intelligent, why do I have to try so hard to get it to give me a straight answer?” In my opinion, this is an accurate perspective but the wrong conclusion. Humans have all the same issues (see: confirmation bias).
AI can be an excellent kaleidoscope for reframing, but only if you ask it to. So ask! It’ll be more than happy to “yes, and” that too.
I’m more in the awful than fun camp
My girlfriend and I ended up building a LEGO set inside, so
These conversations were only available for The Washington Post to analyze because ChatGPT users had created publicly accessible share links which were then indexed by the Internet Archive. Clever, but scary.
It seems like OpenAI is trying to prevent “yes, and”-ing in ChatGPT 5’s system prompt. At the very top of the prompt is this line: “Do not end with opt-in questions or hedging closers. Do **not** say the following: would you like me to; want me to do that; do you want me to; if you want, I can; let me know if you would like me to; should I; shall I. Ask at most one necessary clarifying question at the start, not the end. If the next step is obvious, do it. Example of bad: I can write playful examples. would you like me to? Example of good: Here are three playful examples:..”
But anyone who uses ChatGPT knows that it uses these sorts of phrases all the time.

