
'Everything has changed' since last year's AI for Good Global Summit

Equitable models of AI governance rooted in comprehensive and inclusive approaches are needed more than ever.

The ITU chief acknowledges 'lots and lots of public skepticism' about AI. (AN/Victor/Unsplash)

[Editor's Note: This is part two of a two-part series this week on the future of AI. Read the first part here.]

GENEVA (AN) — Already, artificial intelligence is the main driver of big data, robotics and the Internet of Things, and it's pushing the oldest U.N. agency into more uncharted waters of standard-setting and governance.

The International Telecommunication Union, which dates to the 19th-century era of the telegraph, sets technical standards for smartphones, satellites, the internet and TV. Some 95% of global communications traffic runs on optical transport networks built to ITU standards. Last year's AI for Good Global Summit was ITU's first since OpenAI launched ChatGPT to the public in November 2022, unleashing a surprisingly powerful yet sometimes "hallucinatory" generative AI that became an overnight commercial hit.

"Our last summit in July marked the first opportunity for leading AI experts to gather since the advent of generative AI. A new future was really taking shape at the time – one that was filled with tremendous opportunity and at the same time also much uncertainty," says ITU Secretary-General Doreen Bogdan-Martin, a policy expert and regulator who began her career in Washington then joined ITU as an analyst 30 years ago. "So, what's changed since the last summit? The short answer is: Everything has changed."

With all of that change has come a tsunami of private investment. In 2022, total global corporate investment in AI reached almost US$92 billion, a slight decrease from the year before, likely due to the COVID-19 pandemic. A similar, temporary dip occurred in 2018. Private money accounts for most corporate AI investment, which has increased more than sixfold since 2016, a staggering amount in any market and one that points to the importance of AI development around the world.

From 2020 to 2022, global investment in startups, and in AI startups in particular, increased by US$5 billion, nearly double the previous amount, with much of it coming from U.S. private capital. As the 2024 AI for Good Global Summit convenes at the International Conference Center Geneva, the most recent top-funded AI businesses are machine learning and chatbot companies focused on the human interface with machines.

The global AI market, valued at US$142.3 billion as of last year, continues to grow, driven by an influx of venture capital, and is anticipated to expand from billions to trillions of U.S. dollars.

(OECD/Preqin)

Making inclusive decision-making more than an afterthought

Bogdan-Martin, the first woman to lead the ITU, points to breakthroughs in protein folding, climate modeling, and neuroscience that are revolutionizing our understanding of science as companies rethink their business models. "At the same time, we have lots and lots of public skepticism about the technology. We have more and more countries asking institutions like ITU to support their capacity-developing initiatives," she says. "Probably the most visible and perhaps the most consequential would be the change that we have seen in terms of swift policy and regulatory responses from governments."

Equitable models of AI governance that are rooted in comprehensive and inclusive approaches are needed more than ever to serve diverse populations, particularly in Africa and the Global South. At the moment, however, the paradigm of inclusion amounts to something of an afterthought in designing AI systems. By establishing and adhering to technical standards, the ITU says it aims to create a more secure, responsible, and equitable AI ecosystem that works for the global good.

Technology policy and governance expert Nanjira Sambuli says the European Union has made itself into the "regulatory superpower" for the digital era by setting influential norms, but its regulations are more along the lines of "compliance instruments" to facilitate trade between other countries and the E.U., rather than serve as "protective instruments for other jurisdictions."

When it comes to leading the AI charge on both innovation and regulatory fronts, Europe would benefit from more humility and a global mindset, she argues, by considering what the "Brussels effect" might be on African markets. "All to say, questions of regulation are deeply political and contextual, that I sometimes think we need to break down these questions in order to rebuild possible, feasible solutions," she tells Arete News. "The African concept of Ubuntu is instructive here: I am, because you are."

OECD national policies for trustworthy AI

Shaping the 'digital future we want'

Looking forward, Bogdan-Martin says ITU's focus for the 2024 summit is to translate principles into actionable policies, particularly at AI Governance Day, held a day earlier. Ministers and regulatory authorities from more than 70 governments are expected, along with representatives of industry, academia, civil society and U.N. agencies. An ITU survey found AI "readiness" varied widely among the 193 U.N. member nations and "only 15% of respondents actually said that they had an AI policy," she adds. "So, I think there is lots of room and scope for further developments."

While AI has enormous potential for addressing global challenges in areas such as health care, climate change, poverty alleviation, and education, policy discussions tend to focus more on its potential negative implications in areas such as surveillance, misinformation, and autonomous weaponry.

However, there is increasing recognition of AI's potential for positive impact. Efforts to leverage it for the greater good include the use of AI for data analysis in health care; AI-driven solutions for environmental monitoring and disaster response; and AI-assisted access to education and resources in underserved communities.

In March, the U.N. General Assembly unanimously adopted its first resolution on AI, proposed by the U.S. and co-sponsored by China and more than 120 other nations. The non-binding resolution aims to keep AI under human control and ensure it benefits all of humanity, and it calls on all 193 U.N. member nations to monitor AI for risks while protecting human rights and personal data.

U.N. Secretary-General António Guterres created a high-level advisory body on AI, and a U.N. global digital compact is also in the works. Last year, he endorsed creating a new international AI watchdog agency, as recommended by leading AI scientists and experts. They argued that rapid advances in generative AI merit founding something similar to the U.N. atomic watchdog agency that oversees nuclear technology.

But while some U.N. entities jostle for influence in shaping future AI governance as part of what Sambuli calls the "forum shopping model," Devex reported that U.S. President Joe Biden's administration – whose nation is assessed 22% of the U.N.'s regular budget, making it the single largest financial contributor to the world body – released a confidential paper to foreign governments opposing the U.N.'s aspiration of creating new global institutions to govern AI.

"Regulation needs a political economy treatment when discussed. For regulation to prevent bias and deliver all the nice things, it is a deeply political decision," says Sambuli, a Kenya-based researcher, policy analyst and strategist of information and communications technology. "Unfortunately, AI conversations have relaunched the forum shopping model that, to me, has created a contagion in the discourse around AI — with the views of the influential predominating what should otherwise be globally inclusive spaces."

The U.S. guidance to foreign governments reportedly argues it would be premature to call for new international oversight bodies without first gaining a clearer picture of how existing U.N. agencies such as the ITU may already be equipped to govern this rapidly evolving technology.

Earlier this month, U.S. and Chinese officials described their first high-level talks on artificial intelligence as "constructive," despite intense competition and broad differences over calls for a new global AI regulator. U.S. National Security Council spokesperson Adrienne Watson said her nation "underscored the importance of ensuring AI systems are safe, secure, and trustworthy in order to realize these benefits of AI, and of continuing to build global consensus on that basis."

State broadcaster China Global Television Network reported that China "supports strengthening global governance of AI, advocates the role of the United Nations as the main channel, and stands ready to strengthen communication and coordination with the international community, including the U.S., to form a global framework and standards for AI governance with broad consensus."

The danger of 'frontier AI systems'

OpenAI CEO Sam Altman, one of the proponents of creating an international agency to regulate AI, says the rules should be similar to those applied to anything in which "significant loss of human life is a serious possibility, like airplanes, or any number of other examples where I think we're happy to have some sort of testing framework" that can offer safety assurances.

"I think there will come a time in the not-so-distant future, like we're not talking decades and decades from now, where frontier AI systems are capable of causing significant global harm," Altman told the All-In podcast earlier this month. "I'd be super nervous about regulatory overreach here. I think we get this wrong by doing way too much or a little too much. I think we can get this wrong by doing not enough."

Sambuli says some international organizations like the ITU have done a good job of convening stakeholders to set technical standards, but their member nations "would have to confer legitimacy" to them to make any regulatory functions work. "This often boils down to very narrow regulatory functions, if any," she adds. "Ever since the tech players themselves started championing for regulation, I’ve found that we also need to assess what the term is intended to mean. Those who study their utterances and actions closely say that they are calling for self-regulation."

ITU says its goal is to help AI live up to its potential and to boost progress on the U.N. Sustainable Development Goals. "It's an opportunity for us as a community to gather around the critical instruments like standards, capacity building, and multi-stakeholder mechanisms for dialogue, such as 'AI for Good,'" Bogdan-Martin says. "I do think we can still write a digital future that we want — and a future that will give everyone, everywhere, equal opportunities."
