Early View
ORIGINAL ARTICLE

Talking terms: Agent information in LLM supply chain bargaining

Samuel N. Kirshner, UNSW Business School, UNSW Sydney, Sydney, Australia

Yiwen Pan, College of Economics, Zhejiang Gongshang University, Hangzhou, China

Jason Xianghua Wu, UNSW Business School, UNSW Sydney, Sydney, Australia

Alex Gould, Independent researcher, Sydney, Australia

Correspondence

Yiwen Pan, College of Economics, Zhejiang Gongshang University, Hangzhou, China.

Email: [email protected]
First published: 15 July 2025

Abstract

We investigate the use of large language models as agents (LLM agents) in autonomous supply chain contract negotiations. Our objectives are to assess whether LLM agents exhibit human-like bargaining behaviors and to explore the impact of information on performance. To address these objectives, we conducted several experimental studies using LLM agents as participants and compared the results with human results from a benchmark study. Our experiments covered scenarios where supplier cost information was public, private, ambiguous, or deceptive. Overall, we found that LLM agents use simple heuristics to make decisions and generally exhibit human-like negotiating behavior. In contrast to humans, LLM agents are more inclined toward reaching agreement, leading to greater supply chain efficiency but potentially greater inequality compared to human negotiators. Deceiving LLM agents into believing they have higher costs can improve outcomes for the supplier at the expense of retailers and the supply chain's efficiency. We also show that tailored retrieval-augmented generation (RAG) configurations can enhance negotiation outcomes. Taken together, our results (1) provide timely insights into the integration of AI into supply chains, (2) raise ethical questions around the trade-off between inequality and efficiency and the use of deception with LLM agents, (3) highlight the effectiveness of tailoring RAG configurations to optimize specific objectives such as efficiency or stakeholder profitability, and (4) open many avenues for future research on LLM agents as supply chain negotiators.
