
This newsletter is a bit techy this week. I try to illustrate a subtlety in the decisions that unnamed and unknown developers all over the world are taking when they design negotiation agents (or other agents).
This topic expands on my core AI principle and recommendation that we should all, as leaders and as choosers of AI solutions, “Ask the second question”. You need to go a little deeper into HOW the agent is programmed and WHY it makes decisions, and keep demanding explanations from vendors and software companies.
In this newsletter, I am going to use a new study about the power of chatbots in persuasion to illustrate this.
A new academic study from (my old uni) Oxford shows that the more facts and arguments an AI chatbot puts into a conversation, the more persuasive it becomes. The effect was actually demonstrated with people talking about their political beliefs, so this conclusion feels especially important to be aware of in today’s increasingly polarised and divided world.
It is also something I urge you to consider when you are designing AI agents for business or buying solutions. Agents, or chatbots, can be extremely powerful drivers of change (“persuasive by design” is how we think about it at Nibble), BUT with great influence also comes great responsibility.
The academic findings don’t surprise me at all. It’s how negotiation has always worked in the real world and it’s something we deliberately use in Nibble conversations and have done for the last five years. We apply academic principles from human-to-human negotiations to the AI context to be more persuasive, including structured reasoning, multiple justifications, clear trade-offs and multiple simultaneous offers. It isn’t about confidence tricks or psychological games but academically proven negotiation techniques.
However, as I thought about it more, what the research surfaces is, at its heart, an uncomfortable design choice for AI providers like us at Nibble.
You can make an AI negotiator more persuasive by giving it suitable arguments to deploy, and there are two different approaches to doing that:
- You can tightly control the facts and reasoning it’s allowed to use, and be confident every argument stands up, but the agent will be limited in scope and breadth. In turn this may limit its persuasiveness and make it appear a little more robotic; alternatively,
- You can give it free rein to assimilate information from the whole internet, generate arguments dynamically, and say more things, probably using deep research in an LLM or Googling facts for itself on the fly.
Persuasion increases with that second approach, but accuracy drops rapidly (there’s a rough sketch of the contrast below). In a commercial context, that isn’t just an accuracy problem, it is an honesty problem.
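To make the contrast concrete, here is a minimal, purely illustrative sketch in Python. It is not Nibble’s implementation; the `Argument` class, the function names and the `llm.complete` call are hypothetical stand-ins for whatever vetting workflow or chat-completion API a real system would use.

```python
from dataclasses import dataclass


@dataclass
class Argument:
    claim: str       # the statement the agent is allowed to make
    evidence: str    # a source the counterparty could verify
    approved: bool   # has a human signed this off?


# Approach 1: tightly controlled. Only human-vetted arguments ever reach
# the conversation, so every claim stands up, but breadth is limited.
VETTED_ARGUMENTS = [
    Argument("Our volumes rose 18% this quarter", "internal sales report, Q3", True),
    Argument("A competitor has quoted 6% lower", "benchmark dated May 2024", True),
]


def controlled_arguments() -> list[Argument]:
    return [a for a in VETTED_ARGUMENTS if a.approved]


# Approach 2: free rein. Ask an LLM to invent arguments on the fly
# (the `llm` object and its `complete` method stand in for any
# chat-completion or deep-research API). More material, more persuasion,
# but nothing here guarantees each claim is true.
def generated_arguments(llm, context: str) -> list[str]:
    prompt = (
        "You are negotiating a price reduction with a supplier.\n"
        f"Context: {context}\n"
        "List every argument that could justify a lower price."
    )
    return llm.complete(prompt).splitlines()  # unverified claims come back here
```

The design choice is exactly the trade-off described above: the first function can only ever say things a human has checked; the second can say far more, with no built-in guarantee any of it is accurate.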
We’re frequently asked by clients whether Nibble can “just take in information and make the arguments itself”. Sometimes the way we ask you to set up the agent can feel too onerous. I understand why it’s appealing to ask the chatbot to “think for itself”, and we have all got used to LLMs being really smart from using ChatGPT and Gemini day to day. But at scale, I think this approach is unwise, possibly even unethical for supplier relationships if you accidentally end up lying to them, and, when regulation finally catches up with AI, probably very risky from a legal perspective.
We hear a related issue all the time in real negotiations. Suppliers deploy weak and flaky arguments to support higher prices or better terms. Inflation, fuel costs, labour shortages, regulation. Whilst they each sound plausible, if you do the research, you often find a flaw. Fuel inflation applied to delivery fleets that are electric. Cost increases that don’t match volumes. Risks that don’t actually apply to this contract. The persuasion works until someone knowledgeable drills into the detail.
That’s why our clients don’t just want Nibble to argue well. They want it to challenge weak reasoning as well as present its own. Persuasion with discipline.
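As a toy illustration of what “persuasion with discipline” could look like in code: the sketch below checks a supplier’s claim against what is actually known about the contract before accepting it. This is a hypothetical example, not how Nibble actually works; the facts and checks are invented purely for illustration.

```python
# Invented contract facts for the example: an electric delivery fleet
# and volumes that are slightly down rather than up.
CONTRACT_FACTS = {
    "delivery_fleet": "electric",
    "volume_change_pct": -2,
}


def challenge(claim: str) -> str | None:
    """Return a counter-question if the claim doesn't fit the contract facts."""
    text = claim.lower()
    if "fuel" in text and CONTRACT_FACTS["delivery_fleet"] == "electric":
        return "Your deliveries to us run on an electric fleet; how does fuel inflation apply?"
    if "volume" in text and CONTRACT_FACTS["volume_change_pct"] <= 0:
        return "Our volumes haven't grown; can you share the data behind this claim?"
    return None  # no obvious flaw found; a human may still want to dig deeper


for supplier_claim in ["Fuel costs are up 12%", "Volume growth is driving handling costs"]:
    print(supplier_claim, "->", challenge(supplier_claim))
```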
There’s also a legal reality here that’s easy to ignore. A few years ago, a Canadian airline’s chatbot offered a customer a discount outside the company’s normal rules. The airline argued the chatbot had made a mistake. The court disagreed. The chatbot’s words were binding.
That case matters. If an AI negotiator secures agreement using arguments that don’t stand up, you can’t simply “wipe the conversation” and keep the final numbers. That might work for a while, but I don’t think it holds in the end. Funnily enough, this is an area where AI takes us into a new legal paradigm: because you now document every twist and turn in the negotiation (in the real world things can be said verbally, and negotiations are often undocumented), you have a greater need for accuracy and truth.
We all know plenty of examples where public figures prioritise persuasion over reliable facts. In AI negotiation, that approach is deeply unwise. You won’t find Nibble ever doing it that way.
This was prompted by a new large-scale Oxford-led study on AI persuasion published on arXiv. It’s well worth reading the original paper if you want to understand how design choices — not model size — shape persuasive power:
Just One More Thing
I was speaking to a CPO this week who is still in his first 100 days and all his challenges are people ones, not tech ones. He is excited about tech but more excited about bringing in great people to help him drive that change.
This reminded me of a fantastic call to action from James Meads about making procurement more entrepreneurial. If you want to boost your oomph this January (is there still time for new year resolutions?), have a read of this and make your procurement team more empowered, more entrepreneurial and – dare I say it? – more fun:
“Procurement stands at a crossroads. The old ways of working no longer fit our digital world.
Traditional procurement teams operate too much like bureaucratic machines. They all-too-often focus on process over results. This inevitably means they slow down decisions, when we should instead be striving to speed them up.
This approach costs organizations dearly. It invariably keeps procurement teams away from the top table, as leaders struggle to appreciate our value. This in turn reduces the impact that we can have on business success.
The solution requires a complete mindset shift.
You need to think like an entrepreneur, not a bureaucrat.”
https://entproc.com/entrepreneurial-procurement