AI Fluency Has Nothing to Do With AI

  • Writer: Igor Martins · Human-in-the-Loop
  • Apr 16
  • 6 min read


Everyone is asking for "AI fluency". Almost nobody knows what they're actually asking for.


Which is fine, honestly. The marketing industry has a long and proud tradition of demanding things it can't define. Remember when everyone needed "synergy"? Or "authentic storytelling"? At least with AI fluency we've upgraded to a term that sounds like it could mean something.


Here's what I've noticed over the past year: every job post, every CMO conversation, every agency brief mentions it. "We need a team with AI fluency." It's become the most requested and least defined capability in marketing right now. It's like putting "must be a team player" on a job description. Technically a requirement. Practically meaningless without more questions.


And the gap between what people say and what they mean is creating real problems.


Most people treat AI fluency as a binary thing. Either your team uses AI or it doesn't. Either you're "in" or you're "behind." That framing is wrong, and if you're making hiring decisions or evaluating creative partners based on it, you're basically choosing a surgeon based on whether they've heard of the scalpel.


There are three layers. They don't work without each other.





Layer 1: The human comes first. Not the tools.


This surprises people.


When we talk about AI fluency, the first instinct is to look at tools. What software is the team using? Are they on the latest models? Can they write a decent prompt?


Those are useful questions but they're the wrong starting point. Asking "which AI tools does your team use" before asking "who's making the directional decisions" is like hiring a Formula 1 crew and starting the interview with "so, does anyone here know where the car is parked."


The real question is: who's in charge of direction?


AI fluency starts with a human who can do four things clearly: set the direction (what are we actually making here, and why does it matter to this specific audience right now?), supervise outputs without just rubber-stamping, connect things that seem unrelated (how does this creative decision relate to where the buyer actually is in the funnel?), and audit with insight rather than checklist.


This is not a passive role. It's active sense-making, the kind of cognitive work that requires accumulated context, memory, judgment, and yes, some scar tissue from past brand mistakes. Things no model has. Things that cannot be prompted into existence.


When a CMO says they want a team with AI fluency, what they're often feeling but not fully articulating is: "I want the human judgment layer to be strong, not absent. I want someone who can spot when the output is plausible but wrong."


That last part especially. Plausible-but-wrong is the most dangerous output in brand communication. It passes every review. It sounds like the brand. It has roughly the right tone and approximately the correct message. It's the content equivalent of a GPS that says "in 400 meters, turn into the ocean" in a completely calm, confident voice. The technology is working perfectly. The output is still going to kill you.





Layer 2: Fluency is a system, not a skill.


This is where most teams get stuck.


They have one or two people who are genuinely good with AI. Maybe there's someone who writes great prompts, another who knows which tools to use for what. But the capability lives in those individuals, not in the workflow. The moment those people are unavailable, or just having a rough week, output quality drops and nobody knows exactly why.


That's not fluency. That's dependency dressed up as capability. It's the organizational equivalent of having exactly one person who knows where the WiFi password is written down.


Real AI fluency at the team level is about methodology. Where does AI enter the creative workflow? Under what conditions? With which guardrails? Reviewed by whom, at which stage, and with what criteria for rejection?


Methodology. Not magic.


The difference is absurdly visible when you're shipping consistent brand content at volume under deadline pressure. Which, if you're leading marketing at a scaling company, is what your weeks actually look like.





Layer 3: Augmented intelligence (velocity is a consequence, not a pitch)


This is the part everyone leads with. It should be the last thing you think about.


When layers 1 and 2 are solid, speed happens naturally. The human judgment is sharp, the methodology is reliable, so AI genuinely multiplies those things rather than exposing their absence. That's augmented intelligence in practice: the team does more, processes more, sees more, because the human is working on richer problems.


Teams that skip straight to "AI makes us fast" without building the layers underneath are essentially giving a sous chef control of the entire kitchen because they're quick with a knife. Technically accurate. Possibly fine. Until the birthday cake arrives with wasabi frosting and a note that says "the client mentioned they enjoy bold flavors."


The companies reporting real productivity gains from AI aren't the ones who deployed the most tools. They're the ones who invested in human judgment first and let AI multiply it. Speed was the result. Not the strategy.




The gap that should bother you


Gartner published something last year that's been quietly circulating in CMO circles. 65% of CMOs believe AI will dramatically change their role within two years. Only 32% think they need to update their skills to respond.


Read that again.


The majority of marketing leaders see disruption coming. And fewer than a third of them think they personally need to do anything differently. This is the professional equivalent of watching a wave form on the horizon and deciding your best move is to finish your sandwich first. There's a name for this kind of blind spot, it's not rare in leadership transitions, and it typically resolves itself when someone gets replaced rather than when someone gets curious.


I'm not saying this to be harsh. I've watched smart, experienced marketing leaders get caught flat-footed on this kind of shift because the tools looked optional until they weren't. The transition from "interesting experiment" to "table stakes" never sends a calendar invite.



The transition period we're in right now is genuinely strange. Teams are adopting tools faster than they're developing the judgment to operate them well. Most companies have someone who "handles the AI stuff." Fewer have a documented process. Almost none have built the human layer that makes everything actually work when it's Tuesday afternoon and the campaign ships at 8am Wednesday.


There's something almost comedic about it. The most human skill in marketing, the ability to tell a true story in a way that actually lands, has never been more valuable. And the industry response has been to invest heavily in systems that remove humans from the process and then act confused when the content sounds like it was written by a very confident intern who read a lot of brand guidelines but never once talked to a customer.



The thing nobody wants to sit with is that the more AI handles execution, the more valuable the human direction layer becomes. Not less. More.


If every team has access to similar tools, the differentiator isn't the tool. It's the judgment running it. Which means the scarcest thing in marketing right now isn't AI capability. It's the human who can tell the difference between content that sounds like the brand and content that is the brand.


Those are not the same thing. Most AI-generated content is the former. Most teams can't reliably tell which one they're shipping. And most clients only notice six months later when they realize everything they published could have been written by any company in their category, and some of it probably was.


I keep watching organizations invest heavily in tools and almost nothing in developing the judgment layer that makes those tools worth running. The tools get cheaper every quarter. The judgment gap gets wider. There's something unresolved there that I don't think the market has fully reckoned with yet. Maybe it needs to get more expensive first.



What layer do you think most marketing teams are actually weakest on?



ps. the Gartner numbers above may vary depending on which version of the report you're reading. I've seen a few different cuts cited. The direction is consistent. The size of the gap is not in dispute. The sandwich metaphor, however, is entirely my own problem.

