Insight

What We Learned by Running an Agentic AI Hackathon

Four weeks. Open brief. Paid tools. No rules. We ran an agentic AI hackathon to field-test ideas and find out whether there’s any real value in agentic AI tools. (Spoiler: there is.)

    Background Briefing: Setting Up the Agentic AI Hackathon

    We’ve seen the headlines. We feel the hype. And we heard the question whispered in the wings: Is agentic AI actually worth investing in?

    The best way to learn is by doing. So we laid down a challenge to see what our people could create with agentic AI.

    The teams were a mix of CXM consultants, martech specialists, IT engineers, and B2B strategists. Each had access to paid OpenAI accounts and a toolkit of workflow automation platforms like n8n and Microsoft Copilot. They were free to build whatever they wanted.

    The goal wasn’t to produce finished products. It was to test ideas, learn what works and what doesn’t, and clarify what it really takes to turn agentic AI into value.

    IT Support Agent

    The Idea

    IT support can be frustrating for both sides. One team in our agentic AI hackathon built a prototype agent to streamline support and ease the pressure on both.

    End users (employees) describe their issue in a Teams channel. The agent checks a knowledge base, searches the web, and suggests fixes. Anything it can’t solve gets escalated into a trackable ticket.

    The Build

    The team tested two configurations:

    • n8n + Pinecone + OpenAI + SerpApi.
    • Copilot Studio + Power Automate + Planner + Teams.

    The Copilot version won out for its flexibility, intuitive UI, and integration with our existing systems.
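
    To make the flow concrete, here’s a minimal Python sketch of the first configuration’s triage loop. It assumes OpenAI and Pinecone API keys in the environment and a Pinecone index called "it-kb" holding embedded knowledge-base articles; the names and prompts are illustrative, not the team’s actual build, and the SerpApi web-search fallback is omitted for brevity.

        import os

        from openai import OpenAI
        from pinecone import Pinecone

        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        kb_index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("it-kb")

        def embed(text: str) -> list[float]:
            resp = client.embeddings.create(model="text-embedding-3-small", input=text)
            return resp.data[0].embedding

        def suggest_fix(issue: str) -> str:
            # 1. Retrieve the closest knowledge-base articles.
            res = kb_index.query(vector=embed(issue), top_k=3, include_metadata=True)
            context = "\n\n".join(m.metadata.get("text", "") for m in res.matches)

            # 2. Ask the model for a fix; tell it to escalate rather than guess.
            answer = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system", "content": (
                        "You are an IT support agent. Suggest a fix using the context "
                        "below. If the context is insufficient, reply exactly ESCALATE.\n\n"
                        + context
                    )},
                    {"role": "user", "content": issue},
                ],
            ).choices[0].message.content

            # 3. Anything the agent can't solve becomes a trackable ticket
            #    (in the Copilot build, a Planner task raised via Power Automate).
            if answer.strip() == "ESCALATE":
                return create_ticket(issue)
            return answer

        def create_ticket(issue: str) -> str:
            # Hypothetical placeholder for the ticketing integration.
            return f"Ticket raised: {issue[:60]}"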

    What Worked

    • The Teams-based workflow felt natural for users.
    • Faster troubleshooting and fewer unnecessary tickets.
    • Critical issues were automatically prioritised.
    • Integration with existing systems was relatively straightforward.

    What Needs More Work

    • The prototype couldn’t handle attachments.
    • It needs a “reset button” to prevent dead-end conversations.
    • Updating the knowledge base needs a smoother blend of automation and human oversight.

    The Spark

    Support agents with a bespoke knowledge base are highly scalable. They can be used anywhere repetitive queries and ticketing are common. The ability to call other processes and tools, update the knowledge base, and route issues to the right person means they’re not limited to the rigid scripts that frustrate chatbot users.

    Skills Matrix Bot

    The Idea

    Resourcing decisions often rely on messy spreadsheets, human memory, and too many meetings. There’s also room for bias. A team of three CXM consultants built a bot to quickly surface the right people for a project based on skills and experience.

    The Build

    The team connected an Airtable database, OpenAI-powered chat, and a simple in-memory (RAM-based) conversation store to an agent. They started with Copilot but shifted to n8n and Airtable due to issues with debugging and efficiency. Users submitted natural-language queries and received answers informed by contextual memory and database records.
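
    For illustration, here’s a sketch of the retrieval half of that bot in Python. It assumes an Airtable base with a "Skills" table exposed via Airtable’s standard REST API and a bearer token in the environment; the base ID, field names, and prompt are hypothetical stand-ins, and the conversation “memory” is just an in-process list.

        import os

        import requests
        from openai import OpenAI

        # Hypothetical base ID and table name.
        AIRTABLE_URL = "https://api.airtable.com/v0/appXXXXXXXXXXXXXX/Skills"

        def fetch_skills_records() -> list[dict]:
            headers = {"Authorization": f"Bearer {os.environ['AIRTABLE_TOKEN']}"}
            records, offset = [], None
            while True:  # Airtable pages results 100 at a time.
                params = {"offset": offset} if offset else {}
                page = requests.get(AIRTABLE_URL, headers=headers, params=params).json()
                records += [r["fields"] for r in page["records"]]
                offset = page.get("offset")
                if not offset:
                    return records

        def match_people(query: str, history: list[dict]) -> str:
            # Flatten the whole database into the prompt. Fine at hackathon
            # scale; a production build would filter or embed records first.
            roster = "\n".join(str(r) for r in fetch_skills_records())
            messages = (
                [{"role": "system", "content":
                  "Match people to project needs using this roster:\n" + roster}]
                + history  # the in-memory store: prior turns, kept in-process
                + [{"role": "user", "content": query}]
            )
            reply = OpenAI().chat.completions.create(
                model="gpt-4o-mini", messages=messages
            ).choices[0].message.content
            history += [{"role": "user", "content": query},
                        {"role": "assistant", "content": reply}]
            return reply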

    What Worked

    • Restructuring skills data provided significantly better results.
    • The bot returned skill matches in seconds.
    • The n8n and Airtable combination allows for reconfiguration and repurposing.

    What Needs More Work

    • Skills change frequently, so the database needs regular updates.
    • The interface wasn’t user-friendly for non-techy folks.
    • Upgrading Airtable to PostgreSQL (or similar) would provide enterprise scalability.

    The Spark

    This prototype looked and felt like the foundation of a SaaS-style tool. Lightweight and immediately useful, with an extensible architecture for other use cases. It was proof that with the right data, AI agents can make processes faster and smarter.

    RFP Agent

    The Idea

    Gathering data for RFP responses is repetitive and time-consuming. One team in our agentic AI hackathon built an agent to generate first drafts by collecting data from past proposals, case studies, and internal reference documents.

    Users interact with a Teams chat. In the background, the agent queries Tap’s internal knowledge library to extract relevant information. The agent has also ingested the RFP itself, so it can filter responses according to the brief’s instructions.

    The Build

    The prototype combined Copilot and n8n workflows with curated SharePoint master sources. Questions from an RFP could be fed in, and the agent produced draft documents (Word, Excel, PPT, and a project plan) ready for consultant review.
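
    As a flavour of the final step, here’s a minimal Python sketch of turning per-question draft answers into a reviewable Word document. python-docx stands in for whatever export step the actual Copilot/n8n workflow used, and the questions shown are placeholders.

        # pip install python-docx
        from docx import Document

        def build_rfp_draft(answers: dict[str, str], path: str = "rfp_draft.docx") -> None:
            doc = Document()
            doc.add_heading("RFP Response: DRAFT (consultant review required)", 0)
            for question, answer in answers.items():
                doc.add_heading(question, level=2)
                doc.add_paragraph(answer)
            doc.save(path)

        # Each answer would come from a retrieval call against the curated
        # SharePoint master sources, scoped by the RFP's own instructions.
        build_rfp_draft({
            "Describe your implementation methodology.": "…",
            "Provide two relevant case studies.": "…",
        })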

    What Worked

    • Drafting time dropped from weeks to hours.
    • Consistency improved by using a single source of truth.
    • Outputs were structured and easy to refine.
    • Testing a single use case proved more effective than sprawling tasks.

    What Needs More Work

    • Data in master sources must be clear, current, and accessible.
    • File structure management is crucial.
    • Human oversight is still needed to polish and add value.
    • Prompts need to be clear and detailed; non-technical users may need prompt training.

    The Spark

    A clear, repeatable use case is where agentic AI adds real value. The method could be applied in organisations with a heavy RFP or proposal workload, such as commercial real estate, tech sales, or construction.

    Campaign Explainer

    The Idea

    Adobe Campaign setups can look like alien code. Non-technical users are often unsure what’s happening under the hood. One of our hackathon teams created an agent to decode campaigns and explain them in plain English.

    The agent interprets campaign files and generates conversational explanations of technical elements (fields, tables, and workflows), enabling two-way conversations to refine campaign understanding.

    The Build

    Users prompt the agent with a question about campaign performance or purpose. Using data parsing scripts and layered prompts, it calls multiple sources to understand what’s happening. All that context is sent to an LLM, which outputs a response in plain English, ready for the next user input.
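
    Here’s a rough Python sketch of that explain loop, assuming the campaign workflow can be exported as XML (Adobe Campaign workflows are XML under the hood). The parsing is deliberately shallow and the prompt is illustrative; the team’s real build layered several parsing scripts and certified Adobe documentation into the context.

        import xml.etree.ElementTree as ET

        from openai import OpenAI

        def summarise_campaign_xml(path: str) -> str:
            # Build a rough structural inventory: element tags plus any
            # name/label attributes (activities, fields, tables, and so on).
            root = ET.parse(path).getroot()
            parts = [
                f"{el.tag}: {el.attrib.get('name', el.attrib.get('label', ''))}"
                for el in root.iter()
            ]
            return "\n".join(parts[:200])  # cap the context sent to the model

        def explain(path: str, question: str) -> str:
            return OpenAI().chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system", "content": (
                        "Explain this Adobe Campaign workflow in plain English "
                        "for a non-technical user. Structure:\n"
                        + summarise_campaign_xml(path)
                    )},
                    {"role": "user", "content": question},
                ],
            ).choices[0].message.content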

    What Worked

    • Bridged the gap between technical and business users.
    • Removed the need to audit campaigns manually just to understand what they do.
    • Providing full context and certified Adobe documentation yielded more accurate responses.

    What Needs More Work

    • Heavy reliance on certified documentation.
    • Hard to replicate or maintain without domain expertise.
    • Currently tailored to Adobe environments.

    The Spark

    This is a great example of agentic AI as a clarity tool rather than just an automation engine. It shows how agents can build trust and confidence in complex systems.

    What We Learned

    • The rocket science is knowing what to build: Spinning up agents is easy. Finding the use cases that create value is the hard part.
    • Data quality is make-or-break: Agents are only as good as their inputs. Clean, structured, up-to-date data is non-negotiable.
    • Master prompts > mountains of documents: Curated knowledge beats volume. One well-designed “master source” worked better than hundreds of random files.
    • Agents excel at QA, not creativity: Checking links, surfacing gaps, highlighting errors? Yes. Replacing human judgment? No.
    • AI literacy is now a must-have: It’s not enough to have tools. You need people who know how to steer them, debug them, and use them wisely.
    • Prototypes are fast, not forever: Builds last weeks or months before being eclipsed. That’s fine. They’re cheap and quick to rebuild.

    What Should You Do With All This?

    As fun as it was, the real point of our agentic AI hackathon was to test the viability and value of the tech everyone’s talking about.

    Clarify Use Cases

    Don’t let your organisation become a hammer in search of a nail. Evaluate agentic AI against other solutions before going all-in. You might find that removing bureaucracy, rejigging team structures, or improving data quality is a better way to achieve your goals.

    Still, it’s worthwhile experimenting with agentic AI to understand how it works. Start with friction points. QA, compliance, or data retrieval are safe, low-effort entry points where the value and parameters are clear.

    Think Ahead, Not Backwards

    Agents aren’t intended to patch over holes in existing processes, especially since workflows tend to mirror the organisation’s capabilities and the regulatory landscape at the time they were devised.

    Rather than retrofitting, use agentic AI to prototype future-ready processes. Also, consider how journeys will change once customers have their own agents – and build your agent to meet theirs.

    Pilot, Learn, Repeat

    Treat AI agents as products, not one-off experiments. Establish KPIs, track performance metrics, and define ownership from the outset. Most importantly, make sure you understand each agent’s purpose so you can decide whether it’s worthwhile to maintain.

    Agents can self-teach to some extent. But they’re only as good as the data they’re given. Ongoing investment and iteration are the key to turning a prototype experiment into a valuable internal tool.

    The Bigger Picture: Why Most AI Pilots Fail

    If you follow tech news, you might think we’re in an AI bubble. Headlines love to trumpet scary stats about how many AI pilots fail.

    Dig deeper, though, and it’s clear that the project failure rate isn’t evidence of fatal flaws in AI. It’s a sign that organisations are rushing to build with tools they don’t understand.

    Most projects fail because organisations skip the heavy lifting: strategy, integration, ownership, and adaptability.

    Projects that succeed are laser-focused on one well-defined problem. Organisations partner with the right experts and embed AI into targeted workflows. Often, that’s back-office roles, not flashy marketing campaigns.

    Our hackathon taught us that. The prototypes were exciting, but it was obvious in every case that clean data, strong governance, and task clarity were needed to make a viable and valuable product.

    That’s how you end up with an AI agent that delivers meaningful long-term value.

