AI’s the word: the race for LLM and chatbot market domination

May 11, 2023 | AI and technology

Author: Sarah Ledoux

Unless you have been living under a rock this past month, you will have noticed that the whole world has been raving about AI chatbots, specifically OpenAI’s ChatGPT and its bigger, better, younger sibling, GPT-4. As I’m sure you are already aware of its features and the frenzy it has generated, I will spare you the details.

Instead, I will share some of the interesting developments that have emerged from the hubbub, with some of my own reflections.

First, OpenAI is becoming less and less open. Originally founded in 2015 as a non-profit by tech elites, including Elon Musk (who resigned in 2018), it promised to advance digital intelligence “unconstrained by financial return”. At this stage, OpenAI made its patents open to the public, mainly involving institutions and researchers interested in collaborating on deep learning. But in 2019 the company set up a for-profit arm and took a $1 billion investment from Microsoft, which secured an exclusive licence to GPT-3 in 2020, closing OpenAI’s initial ‘open’ chapter. With last month’s release of GPT-4, OpenAI has been more secretive than ever about the model’s hardware, training compute, parameters and data (although it is almost certainly using the same dataset as ChatGPT). This decision was made amid the staggering market reaction to ChatGPT’s success and competitors’ race to have a piece of the AI cake.

Hungry contenders include Flamingo at DeepMind, Bard at Google, Claude at Anthropic (already used by Notion and Quora), LLaMA at Meta and BLOOM at Hugging Face (open-source), to name a few. China’s tech giant Baidu has also released its own chatbot, Ernie Bot, but the model has performed poorly and, despite a waiting list of over 120,000 companies, access has been suspended temporarily (ChatGPT is banned in China). Alibaba also recently announced the development of its own Large Language Model (LLM), Tongyi Qianwen, but has not disclosed when it will be launched.

OpenAI’s other main argument for closing access to GPT-4 is that it will make the software safer for its users. However, without the scrutiny afforded to open-source projects, some argue this will have the opposite effect. In other words, a closed chatbot could make its users more vulnerable. Left to its own devices, OpenAI is less likely to anticipate or prevent the numerous threats to GPT-4 users’ safety and, by extension, to those around them. Italy has become the first Western country to block ChatGPT while it investigates whether the service complies with the GDPR.

Second, ethics is regrettably not at the forefront of the AI crusade. Microsoft recently let go of its entire Ethics and Society team within its Responsible AI department as part of a 10,000-employee layoff and restructuring effort. Microsoft still maintains its Office of Responsible AI, but the strategic decision reveals where the company’s priorities lie. Tech firms developing AI claim they want ethical products, yet employees working in responsible and ethical AI offices voice nothing but concern for their futures, in addition to regularly suffering from burnout. This is ideally where regulatory bodies should step in and impose mandatory ethics and transparency clearance for AI development and commercialisation. Alarm bells should be ringing for the political elite, particularly after more than 500 tech and AI experts (including Elon Musk and Apple co-founder Steve Wozniak) signed an open letter calling for a pause on AI training for at least six months, due to the risks it poses to ‘society and humanity’. The warning has been disregarded by the US government and rejected by Google’s former CEO as benefitting competitors, namely China. A pause is an unrealistic way to confront the issues underlying generative AI, but witnessing Silicon Valley react with alarm to the potential unforeseen social consequences of public access to technology (a.k.a. the Collingridge dilemma) is somewhat refreshing.

Third, there is not enough incentive for governments to intervene, because they are likely to benefit from unregulated AI. The most explicit example is India, which has declared it has no intention to restrict AI. Part of the reasoning is to foster AI R&D in India, in order to catch up in the AI gold rush. Limited intervention has also been seen as an opportunity for AI companies to collaborate with governments (see also the UK and Iceland), and for governments to incorporate AI into governance and policy-making procedures despite its known biases against women and minorities. LLMs (like GPT-4) can reportedly be instructed to self-correct such biases when told to do so (a rough illustration of what that looks like follows below), although whether this method is effective remains unclear.
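For readers curious what ‘telling’ a model to self-correct actually involves, here is a minimal sketch, assuming access to the OpenAI Python SDK (the 2023-era ChatCompletion API) and a GPT-4 API key. The two-pass structure and the prompt wording are my own illustrative assumptions rather than a documented debiasing procedure.

```python
# Minimal sketch of instruction-based self-correction (assumptions noted above).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: supply your own key

def ask_with_self_correction(question: str) -> str:
    # First pass: let the model answer normally.
    draft = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # Second pass: instruct the model to review its own draft for biased or
    # stereotyped language and rewrite it if necessary.
    review = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Check the draft answer for bias or stereotyping about "
                    "gender, ethnicity or other groups. If you find any, "
                    "rewrite the answer neutrally; otherwise return it unchanged."
                ),
            },
            {"role": "user", "content": f"Question: {question}\n\nDraft answer: {draft}"},
        ],
    )
    return review.choices[0].message.content
```

Note that this kind of correction happens at the level of the prompt rather than the model’s training, which is partly why its effectiveness is contested.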

Fourth and finally, non-governmental political actors are also keen to use AI. Generative models are capable of providing key insights into the fate of legislative amendments, which is of great appeal to interest groups aiming to secure their desired outcomes (using AI-generated strategies like ‘undetectable text alterations’ or ‘impact assessments’). The grassroots organisations and interest groups that incorporate tools like ChatGPT into their workflow will certainly benefit from the information and recommendations they provide. Indeed, actors who learn to strategise their use of AI will gain the upper hand relative to their adversaries. However, this also means that those who can afford specialised AI services, such as larger and profit-seeking lobby groups, will have the greatest advantage in legislative tugs-of-war.

The sudden commotion around generative AI inevitably raises questions about whether the current boom will resemble the dotcom bubble. It is difficult to tell whether things will continue to heat up or whether they will stabilise once stakeholders become more conscious of LLMs’ limitations and of the cost of feeding them with new data. In the midst of this uncertainty, academics and policymakers have the difficult task of planning for the potential risks and harms of increased AI dependence. A fundamental starting point would be to prescribe the (re)establishment of ethical principles in AI, preferably through the (re)empowerment of its ethics and responsibility task forces.

[This post was originally written for the Technology, Internet and Policy (TIP) group newsletter series, which Sarah writes. TIP is a specialist group within the Political Studies Association. For more information about TIP membership see here; for their Twitter page see here.]

 

Short bio
Sarah Ledoux is a PhD student in Politics at The University of Manchester. Her thesis focuses on legislators’ policy responsiveness to citizen expression on social media. More specifically, she is interested in the contextual and institutional factors that enable online interactions, and the stages at which they influence policymaking. Her research currently draws from case studies in Brazil and Mexico. Broader research interests include online political behaviour, e-governance and AI in public policy. 

 

 

Image credits: rawpixel.com (free license)
