OpenAI’s ChatGPT advertising plans raise worries about user influence
OpenAI has announced plans to introduce advertising in ChatGPT in the United States. Ads will appear on the free version and the low-cost Go tier, while Pro, Business and Enterprise subscribers will not see them. The company says ads will be clearly separated from chatbot responses and will not influence the answers themselves. It has also pledged not to sell user conversations, to let users turn off personalized ads, and not to show ads to users under 18 or around sensitive topics such as health and politics. Even so, the decision has raised concerns among some users. The key question is whether OpenAI’s self-imposed safeguards will hold once advertising becomes central to its business.

This is a familiar story. Fifteen years ago, social media platforms struggled to turn their huge audiences into profit. The breakthrough was targeted advertising: tailoring ads to users’ searches, clicks and interests. That model became the main revenue stream for Google and Facebook, and it reshaped their services to maximize user engagement.
Running artificial intelligence at scale is expensive. Training and operating advanced models requires vast data centres, specialized chips and continual engineering work. Despite rapid user growth, many AI firms still operate at a loss; OpenAI expects to lose $115 billion over the next five years. Only a handful of companies can absorb costs like these. For many AI providers, a scalable revenue model is an urgent necessity, and targeted advertising is the obvious answer: it remains the most reliable way to make money from a large audience. OpenAI says it will keep ads separate from answers and protect user privacy. These promises may sound reassuring, but for now they rest on vague and easily revised commitments. The company says it will avoid showing ads “near sensitive or regulated topics like health, mental health or politics,” yet offers little clarity about what counts as “sensitive,” how broadly “health” is defined, or who decides where these boundaries lie.
Many real interactions with AI will fall outside these narrow categories. So far, OpenAI has not said which advertising categories will be allowed or excluded. If ad content is left unrestricted, it is easy to imagine a user asking “how to wind down after a stressful day” being shown alcohol delivery ads, or a question about “fun weekend ideas” surfacing gambling promotions. These products carry well-documented health and social harms. Placed alongside tailored advice at the moment of decision, such ads can shape behaviour in subtle but significant ways, even without any direct mention of health.

Similar assurances about guardrails marked the early years of social media. The record shows how self-regulation erodes under commercial pressure, serving companies’ interests while exposing users to harm. Advertising incentives have a long history of compromising the public interest. The Cambridge Analytica scandal showed how personal data collected for advertising could be used for political manipulation. The “Facebook files” revealed that Meta knew its platforms were causing serious harm, particularly to teenage mental health, yet resisted changes that might threaten advertising revenue.
Recent investigations show Meta continues to profit from scam and fraudulent ads despite being warned of the harm they cause.

Chatbots are not just another social media feed. People use them in deeply personal ways: for advice, emotional support and private reflection. These conversations feel discreet and non-judgmental, and people often disclose things they would never share publicly. That trust makes persuasion more powerful in ways social media cannot match. People turn to chatbots for help making decisions. Even if formally separated from responses, ads appear in a private, conversational setting rather than a public feed. Messages placed next to tailored advice about products, lifestyle choices, finances or politics carry more weight than the same ads seen while casually browsing. As OpenAI positions ChatGPT as a “super assistant” for everything from finances to health, the line between advice and persuasion blurs. For scammers and autocrats alike, the appeal of a more potent propaganda tool is obvious, and the financial incentive for AI providers to accommodate them will be hard to resist.
The deeper problem is a structural conflict of interest. Advertising models push platforms to maximize engagement, yet the content that best captures attention is often misleading, emotionally charged or harmful to health. This is why voluntary restraint by online platforms has failed again and again.

One alternative is to treat AI as digital public infrastructure: essential systems designed to serve the public rather than advertising revenue. That need not exclude private firms, but it does require at least one high-quality public option under democratic oversight, much as public broadcasters operate alongside commercial media. Elements of this model already exist. Switzerland has built Apertus, a publicly funded AI system drawing on its universities and national supercomputing centre. It is open source, complies with European AI law and carries no advertising. Australia could go further.
As well as building our own AI tools, regulators could set clear rules for commercial providers: mandating transparency, banning health-harming and political advertising, and imposing penalties, including shutdowns, for serious violations. Advertising did not corrupt social media overnight. Gradually, it reshaped incentives until public harm became the collateral damage of private profit. Embedding it in conversational AI risks repeating that mistake, this time in systems people trust far more deeply.