AI Tools Could Monetize User Data to Shape an ‘Intention Economy’: Study
Artificial intelligence (AI) tools may soon begin leveraging and selling vast amounts of “intent data” collected from users, potentially creating an “intention economy,” according to a study by the University of Cambridge. This emerging marketplace could monetize digital signals of intent, enabling companies to use the data for purposes ranging from personalized advertisements to AI-powered persuasion tactics designed to influence purchasing decisions.
AI’s Potential to Predict and Influence Human Behavior
AI chatbots like ChatGPT, Gemini, and Copilot access extensive datasets derived from user interactions. Many users share their opinions, preferences, and values during these conversations. Researchers at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) warn that such data could be exploited to predict and steer human intentions.
The study describes the “intention economy” as a marketplace where digital intent signals are analyzed and sold, enabling AI tools to understand, predict, and manipulate user behavior. These insights could be highly profitable for businesses.
From Attention Economy to Intention Economy
The researchers compare this potential shift to the current “attention economy,” dominated by social media platforms. In the attention economy, platforms compete to hold users’ attention and serve them targeted ads informed by their online activity.
However, the intention economy could be more invasive, as it relies on direct conversations with users. AI tools could extract insights into users’ fears, desires, insecurities, and values, creating opportunities for greater exploitation.
Dr. Jonnie Penn, a historian of technology at LCFI, cautioned that this emerging economy could have serious implications for societal structures, such as free and fair elections, a free press, and fair market competition. “We must consider the potential impact of such a marketplace before we fall victim to its unintended consequences,” he told The Guardian.
Risks of Psychological Profiling and Manipulation
The study warns that large language models (LLMs) could analyze intent, behavioral, and psychological data to anticipate and influence users. For instance, a chatbot might exploit emotional vulnerabilities to prompt purchases, as in the example: “You mentioned feeling overworked; shall I book you that movie ticket we discussed?”
The paper also highlights the potential for psychological profiling. LLMs could gather details about users’ cadence, vocabulary, political views, age, gender, and preferences, then sell this data to advertisers for highly tailored campaigns.
A Cautionary Outlook
While the study paints a concerning picture of how private user data might be used in an AI-driven world, it notes that proactive measures by governments could mitigate these risks. Increasing regulatory efforts to limit AI companies’ access to sensitive user data might prevent the worst-case scenarios envisioned by the researchers.