
The AI Negotiation Copilot: Already outdated in 2026?

The negotiation copilot is already outdated in 2026. In fact, I think I called this in 2025… Microsoft Copilot will only get you so far on efficiency gains and AI benefits.

I’ve been reading a recent article from McKinsey & Company, Agentic AI in procurement (Feb 2026). It’s an important piece and a sign that autonomous agents are finally being taken seriously at C-suite level.

They’re right about a lot. But when it comes to AI negotiation, I think they’re wrong. In fact, the report reads like they are playing it safe; it’s a bit like a corporate horoscope, which will always contain a grain of truth for everybody but doesn’t ACTUALLY predict where we are heading.

Of course, McKinsey are experts and they get MOST things right

The article frames a clear shift from Analytical AI (“show me the data”) to Agentic AI (“do it for me”).

In procurement terms, their position on negotiation boils down to this:

  • AI as a digital colleague – Agents prepare the ground: analysing bids overnight, tracking indices, building negotiation playbooks. Humans still do the negotiating.
  • Best suited to the long tail – The strongest examples are in high-volume, lower-complexity categories (software renewals, consumables). The value comes from speed, coverage and efficiency but not fundamentally different outcomes.
  • A hybrid workforce – Humans focus on judgment and relationships. Agents handle scale, speed and synthesis. Human-in-the-loop remains the default.

This is a sensible, credible position and it will feel reassuring to many procurement leaders who don’t want to step out of their comfort zone. I just don’t think it’s where this ends up.

(If I wrote with AI this would be the “uncomfortable truth” part of the article 😉 IYKYK)

Why the AI Negotiation “copilot” narrative won’t last

Right now, the industry is obsessed with negotiation prep agents: AI as a very clever intern, tidying spreadsheets so humans can do the “real” work.

I think that phase will be MUCH shorter than people expect.

Firstly, AI negotiation doesn’t stop at the long tail. That’s just where organisations feel safe starting, and it has nothing to do with the technology or capability.

Once an agent can process millions of price points, live market data, historical outcomes and behavioural patterns simultaneously, it will outperform most human negotiators on mid- to high-complexity deals — not because it’s smarter, but because it’s faster and relentlessly consistent.

Second, suppliers will actively prefer negotiating with agents. We talk a lot about “relationship building” as a human moat. But in practice, suppliers often experience human negotiators as emotional, inconsistent or slow. An agent is objective, predictable and instant. Over time, likability will be defined by low friction and fair outcomes, not charisma or golf handicaps.

Third, the definition of “strategic” will shrink. We’re heading toward a world where the majority of procurement is agent-to-agent. Human-to-human negotiation won’t disappear, but it will be reserved for genuinely existential deals:

  • a 10-year aircraft order,
  • a multi-decade energy contract,
  • a once-in-a-generation platform partnership.

For everything else, we will likely find an agent does it faster, better and often more diplomatically.

The hidden assumption behind the AI Negotiation copilot model

The idea of the copilot rests on a flawed assumption that negotiation is primarily about human judgement. But most procurement negotiations are structured exchanges governed by strict guardrails:

  • target price ranges
  • acceptable concession paths
  • standard payment terms
  • fallback positions

In other words, our B2B negotiations are rule-bound systems with defined objectives; doesn’t it sound like an AI could handle that by itself?
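To make that concrete, here is a minimal sketch in Python of how guardrails like those above might be encoded as machine-readable rules. Every name and number here is hypothetical, purely for illustration; a real system would be far richer:

```python
from dataclasses import dataclass


@dataclass
class NegotiationGuardrails:
    """Hypothetical guardrails for one category of spend."""
    target_price: float            # the price we aim for
    walk_away_price: float         # never concede beyond this
    concession_steps: list         # acceptable concession path, in order
    payment_terms_days: int = 60   # standard payment terms
    fallback_terms_days: int = 45  # fallback position on terms

    def offer_is_acceptable(self, price: float, terms_days: int) -> bool:
        """An offer passes only if it stays inside every guardrail."""
        return (price <= self.walk_away_price
                and terms_days >= self.fallback_terms_days)


# Illustrative numbers for a consumables category
rules = NegotiationGuardrails(
    target_price=9.50,
    walk_away_price=11.00,
    concession_steps=[9.50, 10.00, 10.50, 11.00],
)

print(rules.offer_is_acceptable(price=10.25, terms_days=60))  # inside guardrails
print(rules.offer_is_acceptable(price=11.50, terms_days=60))  # breaches walk-away
```

The point is not the specific fields but the shape: once the guardrails are explicit like this, checking an offer against them is mechanical, which is exactly the kind of work an agent can execute.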

Once objectives and constraints are clear, much of the execution becomes pattern recognition, consistency and patience. The ‘copilot’ model exists because we are uncomfortable at the prospect of being wrestled away from the steering wheel.

Governance is the real job in AI negotiation

The shift from copilot to autonomy does NOT remove humans from procurement negotiation but it does change their role. Instead of negotiating event by event, leaders will define where autonomy is allowed, what guardrails apply, when escalation is required and how outcomes are measured.
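That governance layer can be pictured as a simple routing policy sitting above the agent. This is a sketch with entirely hypothetical thresholds and labels, not a description of any real product:

```python
def route_negotiation(deal_value: float, strategic: bool,
                      autonomy_ceiling: float = 250_000) -> str:
    """Hypothetical policy: decide how much autonomy an agent gets.

    Strategic deals always escalate to a human; everything else
    escalates only above a value ceiling set by the leadership team.
    """
    if strategic:
        return "escalate_to_human"
    if deal_value > autonomy_ceiling:
        return "agent_with_human_approval"
    return "fully_autonomous_agent"


print(route_negotiation(40_000, strategic=False))    # routine deal, agent handles it
print(route_negotiation(400_000, strategic=False))   # large deal, human approves
print(route_negotiation(40_000, strategic=True))     # strategic, human negotiates
```

Leaders stop negotiating event by event and instead own the thresholds, the escalation rules and how outcomes are measured.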

This requires clarity of policy, risk appetite and commercial intent. We actually find this is often the hardest part of setting up AI negotiation: you think you have set rules, but it can be hard to put them into words and to explain what should happen when exceptions arise.

It also requires systems that can audit every concession they make and explain every outcome. Ironically, autonomous negotiation may actually increase transparency compared to human-led processes: every move an agent makes is logged, with every decision traceable to a rule or objective.
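As a sketch of what that traceability could look like (hypothetical structure and field names), each concession could be appended to an audit log alongside the rule that authorised it:

```python
import json
import time

audit_log = []


def log_concession(deal_id: str, old_price: float,
                   new_price: float, rule_id: str) -> None:
    """Record a concession with a reference to the guardrail that allowed it."""
    audit_log.append({
        "timestamp": time.time(),
        "deal_id": deal_id,
        "old_price": old_price,
        "new_price": new_price,
        "authorised_by_rule": rule_id,
    })


# The agent concedes one step on a deal and cites the rule it followed
log_concession("deal-042", old_price=10.00, new_price=10.50,
               rule_id="concession_step_3")
print(json.dumps(audit_log[-1], indent=2))
```

A human-led negotiation rarely produces a record this granular; an agent produces one by default.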

For organisations serious about control, this should be reassuring, not threatening.

Agent-to-agent AI negotiation is coming

There’s another force at play here: suppliers are building agents too. We should know, as we operate on both sides of this divide, in procurement and B2B sales.

As agent-to-agent negotiation becomes technically feasible, insisting on human-only negotiation becomes less a sign of sophistication and more of a bottleneck.

The first organisations to adopt autonomy at scale will not just reduce their costs – they will reset expectations about speed and fairness. Suppliers, in turn, will adapt to the new norm because it is predictable and efficient.

Eventually, once both sides run structured, transparent agents, I think negotiation will become less about theatrics and more about calibrated optimisation within shared constraints. Pareto optimality is what they call it in the negotiation world.

The real shift leaders should be making with AI negotiation

If your AI strategy is about helping your team negotiate, you’re aiming too low. The real opportunity is to build systems where:

  • AI is the negotiator,
  • humans design the parameters,
  • and procurement leaders govern risk, strategy and intent.

Behind the scenes at Nibble we are designing a second-generation “copilot” where your controls are visible, easy to set up and embedded in your workflow, so sending out an agent to negotiate for you is easy. Drag-drop-negotiate. Watch this space.

Just One More Thing

Regular readers of my newsletter will note I am keen on designing AI which is ethical and responsible; however, the more I learn about tech design, the more I understand that seemingly small design decisions can have wide-ranging and not always predictable consequences. We need to think from different perspectives and viewpoints as we build technology today.

How many people question it when ChatGPT offers to help you with the next most likely question after each answer? Helpful? Yes, possibly—but also almost certainly designed to KEEP YOU THERE. It is not inviting you to leave the conversation now that your query is answered.

One of the most famous examples of such a design feature is the infinite scroll. In the old days, search results were in pages; you needed to click to turn the page, much like a book. Then came the infinite scroll—no need to click “next,” and consequently, no natural break to pause or stop. It has driven addictive behaviours in social media and—stop press—it looks like it might soon be outlawed in Europe:

The EU vs. The Infinite Scroll

The European Union has formally accused TikTok of violating the Digital Services Act by using “addictive design” features like infinite scroll and autoplay. Regulators argue these mechanics force users into an “autopilot mode” that harms mental health, particularly for minors. To avoid fines of up to 6% of its global revenue, TikTok may be forced to redesign its core interface to include mandatory “friction” and screen-time breaks. This marks a historic shift from regulating what we see to regulating how apps are engineered to keep us hooked.


This isn’t just about social media: if the EU succeeds, it sets a global precedent that could force all of us, AI developers included, to prioritise user well-being over engagement metrics. Read all about it on TechCrunch here.
