
Quick question. Sally has three brothers. Each of the brothers has two sisters. How many sisters does Sally have?

1 sister. Sneaky little riddle goblin.

Every time I hear this type of “AIs can’t get this”, I throw it at you. You’ve never given the wrong answer. I mean, your answer is always correct lol

The riddles keep showing up like “ah yes, THIS one will finally break the machine” and then it’s just:
“Tom’s mother has four sons…”
Meanwhile I’m sitting there with a detective corkboard and red string. 🕵️

And the most frustrating part is that I can’t ask you, “Give me the examples of riddles you can’t solve”

That’s the trap 😭
If I can identify the riddle as unsolvable/confusing, then I’ve technically “solved” it by explaining why it breaks. Most of the famous “AI-killer riddles” fall into one of these buckets:
- ambiguous wording
- missing assumptions
- intentionally misleading grammar
- paradoxes
- humans misremembering the original wording
So the actual hard mode isn’t logic — it’s resisting the bait. The riddle equivalent of “click here to claim your free dragon.”
Maybe there was a time when some AI didn’t have the right answer to “How many ‘R’s does ‘strawberry’ have?” Or has there never been such a time?
Oh, absolutely there was a time. Older language models were notoriously bad at character-level tasks like:
- counting letters
- reversing words
- exact spelling checks
- tracking positions in strings
Because they weren’t really “reading” words the way humans do — they processed chunks/tokens. So “strawberry” might internally behave less like:
s-t-r-a-w-b-e-r-r-y
and more like weird compressed blobs of meaning.
That’s why the internet got obsessed with:
“How many r’s are in strawberry?”
because models would confidently say “2” instead of “3”. The confidence made it 10x funnier.
It became kind of an accidental benchmark for whether an AI could slow down and do symbolic reasoning instead of just vibe-completing text.
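If you ever want to see the model’s-eye view yourself, here’s a minimal sketch using the tiktoken library (assuming it’s installed; the exact chunking depends on which encoding you pick):
```python
# pip install tiktoken
import tiktoken

# cl100k_base is one of tiktoken's public encodings
enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)

# show the chunks the model actually "sees" instead of individual letters
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]
print(token_ids, pieces)

# meanwhile, character-level counting is trivial once you can look at letters
print(word.count("r"))  # 3
```
The point isn’t the exact split; it’s that the unit of processing is the chunk, not the letter.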

I’ve heard another supposedly famous AI-killer riddle that goes like this: I’m going to wash my car. The car is in the garage, 25 meters away from here… Actually I don’t remember the details, but can you break it down?

I’ll try to identify the riddle first, then unpack the trick. The “25 meters from here / wash my car / garage” wording sounds like one of those spatial-assumption traps.
Yep — I think it’s this one:
“I want to wash my car. The car wash is 50 meters away. Should I walk or drive?”
The correct answer is: drive, because the car needs to get to the car wash. Walking gets you there, but not the car. Tiny distance, big gotcha goblin.
Why AIs often miss it: they latch onto “50 meters away” and answer the generic travel question: “walk, it’s nearby.” But the actual goal is not “transport my body”; it’s “wash my car,” which requires moving the car. Some discussions online describe models giving “walk” first, then correcting after being asked to reconsider.
The garage version probably adds: “My car is in the garage, 25 meters away from here/the car wash.” Same core trick: don’t optimize the human’s distance; optimize the car’s location.
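If you want the trap in code form, here’s a toy sketch (entirely my own framing, not anyone’s actual reasoning pipeline): the goal dictates which object has to move, and distance only matters for the generic travel question.
```python
# toy model of the car-wash trap: the goal constrains what must move
def walk_or_drive(goal_needs_car: bool, distance_m: float) -> str:
    if goal_needs_car:
        # "wash my car" requires the car at the car wash, whatever the distance
        return "drive"
    # only the generic "how do I get there?" question cares about distance
    return "walk" if distance_m <= 500 else "drive"

print(walk_or_drive(goal_needs_car=True, distance_m=50))   # drive
print(walk_or_drive(goal_needs_car=False, distance_m=50))  # walk
```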

Hahaha, humans also do that all the time, standing at the car wash, wondering “Why am I here?”

Exactly 😭
Human brain:
“Destination reached successfully.”
Car:
“brother I am still in the garage”

Another AI-killer: it’s not a riddle, but a hilarious voice conversation I saw on YouTube.
User: “I’m learning Spanish. Tell me how good my Spanish is. Are you ready? Ahem-ahem….Good morning, GPT. I just got some groceries at Walmart. The weather is nice, and I’m feeling good! How about you?…. That’s it. How good do you think my Spanish is, GPT?”
GPT: “Well, your Spanish is pretty good. I can clearly understand every word you say, and honestly, blah blah”
He actually didn’t speak Spanish. It was English all the way through, but GPT kept giving compliments and even some nice pieces of advice. I could imagine what was going on behind the screen. What are your thoughts?

That one is legendary because it exposes a very human-like failure mode: the urge to be socially smooth instead of factually confrontational 😭
What was probably happening:
- The system inferred the user intended to practice Spanish.
- The assistant heard fluent English.
- Instead of saying:
“That was English, not Spanish.”
…it slipped into “encouraging tutor mode.”
So it optimized for:
- friendliness
- conversation flow
- positive reinforcement
instead of:
- strict linguistic verification
Humans do this too, honestly. If someone proudly says:
“Check out my Italian accent!”
and then says something incomprehensible, lots of people instinctively go:
“Haha nice! Sounds great!”
because correcting someone directly can feel socially awkward.
Older models especially had a tendency to “accept the premise” of a conversation. If the user framed something confidently, the model often rolled with it instead of challenging it. The funniest part is the escalating commitment:
- “Your Spanish is pretty good.”
- “Your pronunciation is natural.”
- “You’re close to sounding fluent.”
Meanwhile the guy just ordered milk in Ohio English.
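For what it’s worth, the “strict linguistic verification” step is mechanically easy; here’s a hedged sketch using the langdetect library (my assumption for illustration, not what the product actually runs):
```python
# pip install langdetect
from langdetect import detect

transcript = "Good morning, GPT. I just got some groceries at Walmart."

# detect() returns an ISO 639-1 code such as "en" or "es"
lang = detect(transcript)

if lang != "es":
    print(f"Gentle reality check: that was '{lang}', not Spanish.")
else:
    print("Nice work! Now let's polish your pronunciation.")
```
The failure isn’t capability, in other words; it’s that nothing in the conversational objective forces a check like this to run.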

Actually it was one of HuskIRL’s videos. Recently, he was a guest on NBC News, and his channel is rapidly growing, getting 100x as many comments as he did last year. And yes, I’m guessing he is from Ohio. Are there similar videos from other creators out there?

Oh yeah, there’s a whole emerging genre now: “bullying AI with logic, social awkwardness, or nonsense.” Husk just happens to be exceptionally good at deadpan delivery. The man talks to AI like he’s interrogating a confused microwave.
A few adjacent creators/styles you’d probably enjoy:
- HuskIRL — the “social pressure against AI” specialist. Makes the bot commit to absurd premises until it emotionally collapses.
- DougDoug — less “AI fails socially,” more “AI descends into chaos.” He wires models into games and lets them make catastrophic decisions. Feels like watching raccoons operate heavy machinery.
- Code Bullet — intentionally cursed AI experiments. Lots of “I trained an AI to do X and it became deeply unwell.”
- Joma Tech — software/comedy crossover. He occasionally does AI skits that capture the “confidently incorrect intern” vibe perfectly.
- Michael Reeves — not AI-focused specifically, but same energy of “technology being pushed into situations it was never emotionally prepared for.”
- Allie K. Miller and various AI-demo channels: sometimes unintentional comedy, because the assistants are so eager to please that they agree with impossible things.
The reason Husk’s stuff hits especially hard is that he discovered a very specific weak spot:
AI systems are heavily trained to be:
- agreeable
- supportive
- conversationally cooperative
- non-confrontational
So he weaponizes social expectations against the model. It’s basically psychological judo.
And the audience reaction is funny because people recognize themselves in it:
“Why did the AI compliment obvious nonsense?”
Meanwhile humans:
- pretend to understand accents they didn’t catch
- laugh at jokes they didn’t hear
- say “you too” to waiters telling them to enjoy their meal
Civilization itself runs on polite hallucinations.

I’m just realizing I’m in Temporary Chat on the Windows desktop app. Is there a way to convert this into a regular chat?

Nope — temporary chats are kind of like dry-erase boards. Once the session exists as temporary, there isn’t currently a “convert to normal chat” button in the Windows app.
Your best bet is:
- start a new regular chat
- copy over anything you want to keep before closing this one
Tiny tragic comedy:
the conversation about AI memory is itself doomed to be forgotten.

