AI · 2026-04-08
AI Fluency vs. AI Dependence
From the outside, someone who's fluent with AI and someone who's dependent on it can look exactly the same. Both use it constantly. Both produce work faster. The difference is in what happens when you take the tool away, and whether the person using it can still explain what they built.
Anthropic published the AI Fluency Index in February 2026. The report analyzed 9,830 Claude conversations from a single week in January, looking for specific behaviors that signal effective collaboration with AI. Not just whether people use it, but how. The findings are worth sitting with.
What the data shows
The report identifies 24 behaviors that make up AI fluency, organized into a 4D framework: Delegation, Description, Discernment, and Diligence. Of those 24, only 11 are directly observable in a chat interface. The rest happen outside the conversation, in how you verify, share, and act on what AI gives you.
The biggest signal? Iteration: 85.7% of conversations showed some form of iteration or refinement, and those conversations displayed roughly twice as many fluency behaviors as one-and-done exchanges (2.67 vs. 1.33). Users who iterated were 5.6x more likely to question the AI's reasoning and 4x more likely to identify missing context.
But here's the part that stuck with me: only 30% of users set the terms of the collaboration upfront. Most people just start prompting without telling the AI how they want to work together. That's like walking into a meeting without an agenda and hoping it goes well.
The most counterintuitive finding involved what the report calls artifact conversations: the 12.3% of conversations in which people were building something concrete, like code or documents. These users were more directive upfront. They clarified goals, specified formats, provided examples. But they were also less likely to critically evaluate the output. Fact-checking dropped. Questioning reasoning dropped. The more polished the result looked, the less people scrutinized it.
The report calls this an evaluation gap. I'd call it the shiny object problem.
What I've noticed in my own work
I use AI for code generation, as a thinking partner, and for research. It's part of my daily workflow. And I've noticed the same pattern the data describes, not as a single moment of failure but as a gradual drift.
You start by carefully reviewing everything. Then the outputs get good enough that you start skimming. Then you stop reading the parts that look right. It happens slowly, and that's what makes it dangerous. It's not that you decide to stop evaluating. It's that the need to evaluate stops feeling urgent.
The check I've landed on is simple: before I commit to using something AI produced, I ask myself if I can explain it back. Not to the AI. To myself. If I can walk through the logic, defend the choices, and identify what I'd change, then I've used the tool well. If I can't, I've outsourced my thinking.
This isn't just something I think about for myself. As a leader, I'm actively working through what AI fluency looks like for my team. The question isn't whether people should use AI. It's whether they're getting better at their craft because of it, or quietly losing the muscle.
Staying on the right side of the line
The data suggests a few concrete habits that separate fluency from dependence.
Set the terms early. Tell the AI how you want to work before you start working. Define the format, the constraints, the role you want it to play. Only 30% of people do this, and the ones who do show measurably more fluency behaviors.
Iterate instead of accepting. The first output is a draft, not an answer. The conversations with the most fluency signals were the ones where users pushed back, asked follow-ups, and refined. Treat the first response the way you'd treat a first draft from a colleague.
Question the reasoning, not just the result. It's easy to check whether code runs or whether prose reads well. It's harder to ask why the AI made a particular choice and whether that choice holds up. That's where the real evaluation happens.
Watch for the polish trap. The better the output looks, the less you'll want to question it. That's the evaluation gap in the data, and it's the most practical thing to guard against. A clean-looking result is not the same as a correct one.
None of this means delegation is bad. Sometimes you should hand a task to AI and move on. The problem is unconscious delegation, when you stop noticing that you've stopped thinking. Fluency is a spectrum, not a badge. The goal isn't to never rely on AI. It's to always know when you are.