When We Stop Asking Why
Some answers aren't meant to come in a second
Have you noticed that people have stopped asking why?
I might be dating myself, but pre-internet — if you were mid-conversation and someone posed a question no one knew the answer to, you had a choice: either trust the other person’s take or go dig for it later at the library.
Now, it’s immediate. People whip out their phones, punch the question into Google or, more recently, drop it into ChatGPT, Perplexity, Copilot, or Gemini. And just like that — answer delivered.
But here’s the thing: if you’ve used AI, you already know it’s not always accurate. Yes, it’s getting better daily. But it’s still making assumptions — and users often don’t even notice. Why? Because most people aren’t trained to ask the right questions in the first place. They don’t prompt with precision. And more critically, many don’t realize what they don’t know, which makes it easy to believe the response must be correct.
As a professor, I experiment with this all the time. I’ll type in a question I think my students might ask — maybe something from physiology or an ultrasound topic — just to see how AI responds. Sometimes it nails it. Sometimes it’s…well, let’s just say plausibly wrong. But I’ve noticed something more troubling: even when the facts are right, the connections are missing.
Let me give you an example.
Not long ago, I asked a student to explain what the liver does. They gave me the textbook-perfect answer: “It metabolizes drugs, stores glycogen, produces bile…” You know the drill.
But when I followed up and asked, “How might that function change in a patient with right-sided heart failure?” — they froze.
It’s not that they didn’t know anything. It’s that the knowledge had never been internalized. Memorized, yes. Understood in a way that allows for transfer? No. The dots hadn’t been connected — and they didn’t even realize there were dots to connect.
That’s the risk we run when we stop asking why.
AI can give us fast answers. And often, they sound good enough. But sounding good enough isn’t the same as being right, and it definitely isn’t the same as thinking for yourself.
If we’re not careful, AI won’t just become a shortcut — it’ll become a ceiling. It’ll flatten complexity into convenience. It’ll reduce knowledge to pre-packaged templates. And if that happens, we stop making sense of the world — we just consume explanations without tension, curiosity, or reflection.
That’s where we lose something deeply human.
So no — I don’t think AI is “bad.” I use it. I appreciate it. I even talk to it like a student sometimes. But I never let it replace the pause…the tension…the wondering. Because that’s where actual learning — and real humanity — still live.
Let’s keep asking why. Especially when it feels easier not to.
Till next time…
Stay human.
– Dr. D
If something in this stirred a thought, feel free to share it, pass it along, or just sit with it a bit.
I’ll be here, writing weekly — one post at a time.
Want it sent straight to you?
Subscribe below.
Curious to go deeper when I share full tools or reflections?
There’s a paid tier — totally optional, always low-key.

I appreciate your input and validation; I hear your story far too often from educators these days. Ultimately, it comes down to critical thinking, which is what AI strips away (if you let it) from these students, and it isn't until they're on the job that they'll realize how crucial it is to internalize and reflect. Unfortunately, in many jobs one can "fake it till they make it" with AI. But as a healthcare professional and educator, I have to make decisions with a patient in front of me, or answer students' questions without prompting AI.
Stay human...
Dr. D
As a K–12 math teacher, this article resonates deeply with what I see in my classroom every day. My students have unprecedented access to tools that can solve equations, graph functions, and even explain concepts in seconds — but those tools can’t replace thinking. I’ve watched students copy down an AI-generated answer without understanding why it works or how it connects to previous concepts. The danger isn’t in using AI; it’s in using it instead of curiosity. When students stop asking why, they stop building the critical thinking and problem-solving skills math is meant to develop. It’s my job not just to teach them how to get the answer, but to understand the reasoning behind it — to make connections, ask questions, and embrace the “pause” the author describes. That’s where the real math — and real learning — happens.