The Agentic AI Tightrope: Navigating the Risks of Rushing In

To get agentic AI right, you need to be fast, careful, curious, and analytical all at once. The louder the hype, the more tempting it is to skip the boring work. Data hygiene. Governance. Customer journey redesign. But every shortcut creates agentic AI risks, ranging from harmless hallucinations to regulatory penalties and even the slow atrophy of human creativity.

    The Pressure To Move Fast is Real

    Agentic AI isn’t a flash-in-the-pan trend. More than 9 in 10 senior decision makers believe agentic AI will help deliver better customer experiences, and 88% believe it’ll help them hit their business goals.

    The technical community echoes that confidence. Among devs who build AI applications for enterprise, 99% are already exploring or building agents.

    The trouble is, the agentic AI arms race could be a race to the bottom for businesses that don’t do the groundwork.

    A fear of falling behind is fuelling a frenzy of slapdash builds propped up by bad data and misguided expectations. Agents are turning up in places where they have no business being. The result is a spread of problems, ranging from sloppy (if mostly harmless) content to data compliance issues, poor CX, and the slow atrophy of human creativity.

    In light of all this, how do organisations keep pace without compounding their risk exposure? That is, how are they supposed to balance speed with caution, given that resources are tight and most companies are making it up as they go?

    Please don’t misunderstand. We’re not Luddites. We’re simply advocating for a strategic approach. We’ve been in the customer experience management (CXM) business for too long to overlook the risks of rushing into agentic AI – or any tech trend for that matter.

    7 Strategic Risks of Agentic AI and How To Manage Them

    Shiny Object Syndrome

    The buzz around agentic AI has led many teams to leap straight into building complex workflows before they’ve properly defined the problem. They’re deploying agents to automate processes that could’ve been solved by standard automation tools or better design.

    What to do instead: Diagnose the customer problem before prescribing the solution. Step into their shoes. Find the gaps, then weigh up agents alongside other solutions to find the best remedy.

    “Enterprises need to be careful to not become the hammer in search of a nail. We had this when LLMs first came on the scene. People said, ‘Step one: we’re going to use LLMs. Step two: What should we use them for?’”

    Marina Danilevsky, IBM Senior Research Scientist, Language Technologies.

    Automating Obsolete Journeys

    Another perspective on the “wrong problem” agentic AI risk is that organisations are using tomorrow’s tech to solve yesterday’s challenges. For example, building an agent that automates retargeting after a website visit.

    But behaviour is shifting faster than most organisations realise. Customers are already using AI tools to research, compare, and transact. That entire purchase might happen without the customer touching the brand’s digital platforms, meaning no trail to use for retargeting.

    Good developers can build agents quickly. But that doesn’t make them free. Building for the wrong customer journey still costs you.

    What to do instead: Zoom out. How are your customers’ journeys evolving? What will they look like once customer-side AI agents are involved? Build for that model and continue optimising it to extract more value over time.

    Lacklustre Experiences

    You don’t need to look far to find examples of brands publishing AI slop. The same thing is happening with agents, although it’s harder to see because they work in the background.

    Yes, agents can resolve issues faster. They can answer questions 24/7 and call generative tools to produce content at scale. But they can also flatten the customer experience into something lifeless. The result is brand erosion by journeys devoid of the human touch.

    What to do instead: Keep the creative and decision-making bits human. Use agents to handle the grunt work (data wrangling, segmentation, delivery). Give them a curated, modular content system so they can assemble best-fit experiences, but don’t let them instruct genAI to create new content.
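
    As a rough illustration, here’s what that division of labour could look like in code. The sketch below is hypothetical – the content library, field names, and assemble_experience function are assumptions for illustration – but it shows the key move: the agent selects and assembles pre-approved blocks rather than instructing genAI to write anything new.

```python
from dataclasses import dataclass

@dataclass
class ContentBlock:
    block_id: str
    channel: str   # e.g. "email", "web", "push"
    segment: str   # audience segment the copy was approved for
    body: str      # human-written, brand-approved copy

# Hypothetical curated library; in practice this lives in a CMS.
APPROVED_LIBRARY = [
    ContentBlock("welcome-01", "email", "new-customer",
                 "Welcome aboard! Here's how to get started..."),
    ContentBlock("renewal-02", "email", "loyal",
                 "Thanks for sticking with us. Here's what's next..."),
]

def assemble_experience(channel: str, segment: str) -> list[ContentBlock]:
    """Select best-fit approved blocks; never generate new copy."""
    matches = [b for b in APPROVED_LIBRARY
               if b.channel == channel and b.segment == segment]
    if not matches:
        # Nothing approved for this context: escalate to a human
        # rather than asking a generative model to fill the gap.
        raise LookupError(f"No approved content for {channel}/{segment}")
    return matches
```

    The design choice that matters here is the failure mode: when no approved content fits, the system escalates to a person instead of generating something new.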

    Creative and Cultural Erosion

    You might have seen the MIT study linking heavy ChatGPT use to reduced cognitive engagement. Imagine that effect at the scale of solving enterprise process problems and engaging a global customer base.

    Teams that outsource every thought process to agentic AI risk losing the ability to think creatively and critically. Originality – the ‘surprise and delight’ element that customers love – gives way to generic experiences, delivered faster and at greater scale.

    There’s also the productivity trap. As output increases, so do expectations. Teams end up under pressure to produce more, rather than gaining space for higher-value strategic work.

    What to do instead: Automate thoughtfully, focusing on time-consuming tasks rather than cognitive projects. And be careful how you measure success. More output doesn’t always mean better CXM.

    The Skill Illusion

    Around 1 in 4 of the developers that IBM surveyed called themselves “experts” in generative AI. Yet 99% are exploring or developing AI agents. Many are learning on the fly, under pressure to deliver something impressive, fast.

    Unrealistic expectations about the agents themselves also play a part here. Although models have come a long way in a short time, there’s still a gap between what they can do today and what they’re expected to deliver at full maturity.

    So on one hand, companies are pushing underprepared developers to roll out an AI agent just to say they’ve got one. And on the other hand, agents aren’t yet capable of fulfilling their promised potential.

    That can only end in one place: disappointment.

    What to do instead: Give developers time and resources to get up to speed. Let them experiment with low-risk internal processes before pushing out a customer-facing agent that embarrasses the brand.

    Regulatory and Legal Exposure

    New legislation is rolling out to curtail AI misuse and protect users:

    • The EU AI Act starts applying to general-purpose models in August 2025.
    • Stricter rules for high-risk systems land in 2027.
    • In the US, the FTC is cracking down on AI misuse.

    Some pundits are calling 2025 the “wild west” of agentic AI in CXM, but the reality is a little more sober. Experimentation without oversight is bad business. Trying to fix a rogue agent’s behaviour after launch tends to end in public embarrassment (at best).

    What to do instead: Build governance and guardrails into your agentic strategy from day one. That means:

    • Building with encryption and role-based access.
    • Stress-testing with “red-teaming”.
    • Creating prompts and systems that prevent harm before it happens (a minimal sketch follows this list).
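
    To make that last point concrete, here’s one minimal, hypothetical shape a pre-send guardrail could take: a deterministic check that runs on every agent output before it reaches a customer. The patterns and function names below are illustrative assumptions, not a complete safety system.

```python
import re

# Hypothetical pre-send guardrail: deterministic checks that run on
# every agent output before it reaches a customer.
BLOCKED_PATTERNS = [
    (re.compile(r"\b\d{13,16}\b"), "possible card number"),
    (re.compile(r"guaranteed (refund|results)", re.IGNORECASE), "unapproved promise"),
]

def guardrail_check(agent_output: str) -> tuple[bool, str]:
    """Return (allowed, reason). Blocked output goes to human review."""
    for pattern, label in BLOCKED_PATTERNS:
        if pattern.search(agent_output):
            return False, f"blocked: {label}"
    return True, "ok"

allowed, reason = guardrail_check("We offer a guaranteed refund, always!")
print(allowed, reason)  # False blocked: unapproved promise
```

    In practice, checks like these sit alongside encryption, role-based access, and red-team testing rather than replacing them.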

    Gilded Garbage

    AI hallucinations are nothing new. But imagine those ‘hallucinations’ in the form of decisions impacting customer journeys. AI systems fed bad data tend to be confidently wrong. Left unchecked, they’ll keep giving incorrect answers, repeat them faster, and amplify them across channels.

    It’s “garbage in, garbage out”, powered by AI.

    What to do instead: Make data hygiene an organisation-wide initiative. Don’t leave it to one person. Systematise it.

    • Clean your knowledge bases.
    • Structure internal content.
    • Create a master prompt that enforces tone, accuracy, and guardrails (see the sketch after this list).
    • Keep humans in the loop for important decisions.
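
    For illustration, a “master prompt” can be as simple as a shared system prompt that every agent call inherits, paired with a human gate on consequential actions. Everything below – the prompt text, the risk threshold, the handle_action function – is a hypothetical sketch, not a reference implementation.

```python
# Hypothetical "master prompt" shared by every agent call, plus a
# human-in-the-loop gate for consequential decisions.
MASTER_PROMPT = """\
You are a customer experience assistant for ExampleBrand.
- Answer only from the approved knowledge base supplied in context.
- If the knowledge base doesn't cover the question, say so and escalate.
- Match the brand voice: warm, plain English, no hype, no guarantees.
"""

RISK_THRESHOLD = 0.7  # illustrative cut-off, tuned per use case

def handle_action(action: str, risk_score: float) -> str:
    """Auto-approve low-risk actions; queue the rest for a human."""
    if risk_score >= RISK_THRESHOLD:
        return f"QUEUED FOR HUMAN REVIEW: {action}"
    return f"AUTO-APPROVED: {action}"

print(handle_action("Send order status update", risk_score=0.1))
print(handle_action("Close customer account", risk_score=0.9))
```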

    In most cases we’ve seen, businesses have a fair bit of foundational work to do on their internal knowledge bases and data hygiene before they can benefit from agentic AI.

    The Risk of Not Rushing

    For all these risks, the biggest one might be standing still. Agentic AI is already reshaping how experiences are delivered. Early adopters – usually big businesses with big budgets – are seeing a significant first-mover advantage.

    81% of leaders expect agentic AI to create a competitive edge. Waiting and watching isn’t strategic. It’s risky.

    What to do instead: Move fast, but don’t skip the strategy. Companies need to find a way to remove internal bureaucracy that was set up to serve old ways of working.

    • Run lightweight, low-risk pilots.
    • Embrace rapid iteration.
    • Create a feedback system to learn, adjust, and build momentum.

    Speed isn’t the enemy. Speed without safeguards is.

    We’ll Say It Again: Start With Strategy and Scale Up At Your Speed

    Whether you think agentic AI is a bubble or a boon, you’re right. It has the potential to be both. Although you can’t afford to sit out agentic AI, rushing in without a plan or purpose is risky.

    The solution is to move forward with a considered approach. This means applying the right tool to the right problem, recognising where quality matters (and where it’s missing), and keeping the human element in critical moments.

    It’s about prioritising use cases and ensuring the net benefit is clear before committing to a complex build.

    Ultimately, it’s about strategically adopting new ways of working to build long-term benefits. Boiled down to that, agentic AI in CXM doesn’t seem so different from any other digital transformation project.

