Propped Up by a Prompt
What happens when AI confidence masks real-world inexperience—and how businesses pay the price
It started with a comment from a client.
He told me, “I don’t think they actually know how to do half of what they’re being assigned. But they know how to ask ChatGPT. So they think that’s the same thing.”
That’s what AI overconfidence in the workplace looks like.
And the more we unpacked it, the more I saw it:
The quiet emergence of a new kind of employee.
One who’s well-meaning, technically curious, and dangerously overconfident, because somewhere along the way, a fast answer from AI started to feel like real expertise.
It’s not.
And now, a lot of businesses are paying for the difference.
When AI Becomes a Crutch
This isn’t an anti-AI article. I use it. I build systems that rely on it.

I’m not worried about the technology. I’m worried about how we’ve decided to use it—without rules, without training, and without checks.
In company after company, here’s the pattern I’m seeing:
- A junior employee is given a task they’ve never done before.
- Instead of asking their manager or reviewing SOPs, they go straight to an AI tool.
- The tool responds quickly, clearly, and confidently.
- The employee now feels like they’ve “got it.”
- They go execute.
At first, it works. The task is completed. A deliverable is produced.
But then the cracks start to show.
The process doesn’t scale. The logic is off. The decision backfires.
And no one understands why—because it looked so good on paper.
What we’re seeing is a growing gap between confidence and competence.
And when AI props that up unchecked, the business ends up footing the bill.
Pretending to Know is the New Workplace Default
Let’s call this what it is: performance.
Employees, often unintentionally, are now performing expertise—not developing it.
The tools have made it easier than ever to look capable.
AI can write the pitch. Summarize the brief. Build the formula. Create the SOP.
But it can’t:
- Understand context
- Manage risk
- Make tradeoffs
- Handle fallout
- Lead
I’ve seen junior employees build entire project timelines from AI prompts without once checking whether their team had the capacity to deliver. I’ve seen marketing plans that looked beautiful but lacked any awareness of market nuance or brand tone.
Because AI can give you the answer.
It can’t tell you whether that answer makes sense for the business you’re in.
The Cost of Misplaced Confidence
The problem here isn’t ambition. It’s absence.
An absence of mentorship. Of review. Of operational guardrails.
When people build with tools they don’t understand, the issues show up later.
In lost time. In rework. In decisions that cost money because no one asked the second layer of questions.
I’m watching leaders struggle to balance encouragement with caution.
They don’t want to squash initiative. But they also can’t afford to let unchecked overconfidence turn into quiet operational chaos.
And this is where the real damage happens: not from AI itself, but from the belief that speed and confidence equal skill.
What Leaders Need to Do
If you’re running a company right now, you don’t need to ban AI.
You need to manage it.
Here’s where I start when I work with a leadership team on this issue:
- Define where AI is allowed, and where it’s not. Make it clear which tasks can be supported by AI and which require human review, judgment, or context.
- Create friction intentionally. Add review steps, peer checks, or internal prompts that force second-level thinking.
- Refocus on operational fluency. If your team can’t explain why they did something beyond “because the prompt said so,” they’re not ready to own the result.
- Install systems that catch it early. This is where Lean AI® comes in. You don’t eliminate tools; you build infrastructure that makes them accountable.
Why Lean AI® Was Built for This
Lean AI® was never designed to “replace people.”
It was designed to support the people who are already trying to do too much with too little.
It does that by:
- Removing manual friction
- Building smart, structure-first workflows
- Applying intelligence in narrow, high-leverage ways
- Catching breakdowns before they escalate
More importantly, it creates a system where AI output isn’t blindly trusted.
It’s integrated, audited, and supported by a real operational backbone.
The result?
People grow. Teams get faster. And the business doesn’t pay for overconfidence masked as capability.
Final Word
AI is powerful. No question.
But it’s only as useful as the structure it lives inside—and the judgment of the people using it.
If your team is producing polished work that keeps breaking, ask why.
If your junior staff seem brilliant on Monday and underwater by Friday, dig deeper.
Because in too many companies, what looks like productivity is really just AI-generated performance.
And unless you’ve built systems to tell the difference, you won’t see the cracks until it’s too late.
Ryan Gartrell
Consultant. Operator. Creator of Lean AI®.
ryangartrell.com | angryshrimpmedia.com