Having worked on AI adoption in large organizations for the past few years, I regularly read new market reports through a “practical lens”.
Recently, KPMG released their quarterly AI Pulse, which surveyed 130 leaders from $1B+ revenue enterprises in the US. KPMG frames the key takeaway as AI agents moving from pilots to production at an accelerated pace – not without challenges… The study is quite unique in scope and size, so I took a closer look. Keep in mind that this is self-reported data: treat it as “trend signals” rather than “hard facts”.
Below, I pick six striking insights from the survey and share my take “from the trenches”, including practical tips on what to do with each.
Almost half of organizations deployed AI agents
What the report says
The survey’s standout headline: reported “deployments” of AI agents land at 42% – and that share has nearly quadrupled since the start of the year.
My two cents
I suspect many of these “deployments” are lighter SaaS pilots rather than deep legacy rewires. But that’s fine – even partial wins move the adoption curve. Consider working iteratively: harvest learnings fast via MVPs, then fix their “rough edges”, then scale into robust systems. Also, clarify what “rolled out” really means to you – to avoid counting “zombie pilots” as wins.
AI agents’ complexity is (unsurprisingly) a top hurdle
What the report says
Complexity is now a leading barrier for 7 in 10 respondents as firms start facing the real intricacies of compound AI systems “in action”. (There are simply too many moving parts to juggle in real AI systems.)
My two cents
That resonates – also from the perspective of basic probability calculations. I wrote about the underlying issue in detail here (i.e. “conjunctive” and “disjunctive events bias”). In short: small failure odds multiply across steps – data, engineering, UX and so on – as the quick sketch below shows. Unfortunately, we naturally tend to underestimate such accumulated risks and their impact. So, better to trim moving parts where possible, test the riskiest assumptions first and keep rollback paths open…
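To make the compounding effect tangible, here is a minimal sketch – the per-step reliabilities are purely illustrative assumptions on my part, not figures from the report:

```python
# Conjunctive events: a multi-step agent pipeline only succeeds
# if EVERY step succeeds. Per-step reliabilities are illustrative.
from math import prod

step_reliability = {
    "data retrieval": 0.95,
    "model reasoning": 0.90,
    "tool/API calls": 0.95,
    "engineering/glue": 0.97,
    "UX/handoff": 0.98,
}

# End-to-end success is the product of all per-step probabilities.
end_to_end = prod(step_reliability.values())
print(f"End-to-end success: {end_to_end:.0%}")  # ~77%
```

Five steps that each look solid on their own still fail roughly one run in four end-to-end – which is exactly why trimming moving parts pays off.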
Everyone appears to be “past the exploration stage”
What the report says
9 in 10 leaders say their organizations have already moved beyond experimentation toward piloting, deployment etc. (This was already the case in the previous quarterly survey.)
My two cents
With AI, you never fully leave experimentation mode – capabilities shift monthly. A big risk I see is quitting too early on use cases that would click (a bit) later. Many ideas that seemed impossible 1–2 years ago are trivial today (e.g. hyper-realistic video generation). I keep a rolling use case backlog and revisit/test “stale” ideas whenever relevant models or data significantly improve.
Workforce resistance gets milder with better integration
What the report says
Reported resistance drops to 21% – half the level reported in the previous edition of this quarterly pulse check. The analysis doesn’t go deep on the “why” – which would interest me – but it mentions some enabling factors (empowerment, learning, safety etc.).
My two cents
This (kinda) mirrors my own experience managing change around human-AI collaboration. Demystifying AI, making its value tangible fast and involving people widely early on go a long way. The “IKEA effect” is real – mindsets shift when teams build and benefit themselves. Try design thinking workshops with teams, train them on AI’s potential and limits, map real pain points and co-develop solutions that matter.
Classic KPIs are missing AI’s full impact
What the report says
78% think traditional performance metrics don’t capture AI’s value well. Yet most still expect ROI within 12 months, while feeling growing pressure from investors and boards to show quick results. (Tough combination…)
My two cents
I “foresaw” this in my annual AI trend predictions: “pure playing time is over”. Try layered metrics instead: start with qualitative “value drivers” during exploration, move to napkin numbers once first evidence arrives (from pilots), then harder KPIs at rollout. “Stage-gate” your way forward (explore → validate → roll out) and tie each stage to clear go/no-go criteria, as sketched below. Differentiate metrics by type of AI (e.g. chatbot vs. agent) and use case (e.g. sales vs. finance) – one “template” won’t fit all.
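To show what such a stage-gate could look like, here is a minimal sketch – the stages, metrics and thresholds are hypothetical placeholders I made up for illustration, not recommendations from the report:

```python
# Hypothetical stage-gate ladder: each stage carries its own, increasingly
# hard go/no-go criteria; a use case only advances if ALL are met.
STAGES = {
    "explore":  {"user_interviews_done": 5, "feasibility_score": 0.6},
    "validate": {"pilot_task_success_rate": 0.8, "weekly_active_users": 20},
    "rollout":  {"net_savings_per_task_usd": 2.0, "adoption_rate": 0.5},
}

def gate_decision(stage: str, measured: dict) -> bool:
    """GO only if every metric meets its stage threshold (higher = better)."""
    return all(measured.get(metric, 0) >= threshold
               for metric, threshold in STAGES[stage].items())

# Example: a pilot that nails task success but misses adoption targets
measured = {"pilot_task_success_rate": 0.85, "weekly_active_users": 12}
print("GO" if gate_decision("validate", measured) else "NO-GO")  # NO-GO
```

The point isn’t the exact numbers but the discipline: budget and scope only grow once the current stage’s criteria are demonstrably met.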
Planned annual AI spending keeps rising
What the report says
Budgets continue to climb (at least as reported), heading towards roughly $130M per surveyed company. (Again, to put this figure into perspective, we’re talking about very large organizations.)
My two cents
Always take investment claims with a grain of salt, though – self-reporting plays a role here. Better to watch for the “quiet plumbers” solving real use cases than the loud headlines. Walking beats talking in my book… Let spend follow proof: release further budget only after a tightly scoped solution hits clear success thresholds.
Conclusion: “Pick two” – explore broadly & scale selectively
The direction and sentiment in this report roughly match what I experience in the field: more agentic AI applications (and “reality shocks”), stronger emphasis on AI literacy and stable “AI commitments”. Surveys like this help spot typical accelerators and blockers – as long as we treat such self-reported data with care.
To balance the visible dilemma between short-term delivery with AI and capturing its longer-term (admittedly fuzzy) potential, I recommend a “two-speed model”: explore to learn quickly what’s possible (plenty of cheap tests, stop rules, weekly progress checks) while you exploit what works sustainably (standardize, secure, scale). Yes, being “ambidextrous” isn’t easy – but IMO it’s still your best shot at effective AI transformation.
Lastly, I’d love to see such a report with European nuances too: I suspect different market conditions, regulations and adoption patterns would paint a somewhat different picture. Now your turn – how do these “market signals” match your experience? Leave your thoughts below or get in touch.
Cheers,
John
