#178 THE G|O BRIEFING, APRIL 25, 2024
As AI for Good Summit Approaches, Cautious Optimism is Coupled with Sense of Urgency to Prevent AI Titans from Playing “Russian Roulette” with Humanity’s Future | UNRWA Still Facing Shortfalls, with Switzerland Still Withholding Donations
Today in The Geneva Observer:
“Playing Russian roulette with the human race is unacceptable,” UC Berkeley professor and AI expert Stuart Russell told a virtual gathering of reporters last week, ahead of the International Telecommunication Union’s 2024 AI for Good Summit. Russell’s finger was pointed at the AI titans that continue to release what many critics consider unreliable and possibly risky generative AI tools and are working towards the development of general AI systems. But, for now at least, the ITU remains cautiously optimistic that it and other UN agencies will be able to play a meaningful role in setting safeguards to prevent the gun from being loaded in the first place:
Ahead of its ‘AI for Good’ Summit, Guarded Optimism but Sense of Urgency at the International Telecommunication Union
“Playing ‘Russian roulette’ with the human race is unacceptable.”
“We are going to need to update and create institutions that are up to the task of mitigating the AI risks through robust governance.”
Has the AI horse already left the stable? How realistic is it to expect that the development, by a few enormously powerful private companies, of what some of its own creators consider the most dangerous technology since the atomic bomb can be governed by multilateral organizations? Can Big Tech, but also governments, agree that an AI built on mankind’s accumulated and aggregated knowledge should be considered a common good, and that they should favor access, transparency and equity over unlimited return on (massive) investments and narrow national interests? And will the multilateral system be able to play a meaningful role in helping to unleash AI’s potential opportunities for good, while mitigating the extraordinary risks it poses? With new applications being released every day—some pushing the frontiers of science and medicine, like gene editing; others surpassing the bounds of common decency, like voice-cloning for fake political advertisements or even generating child pornography—is it not already too late in this era of “minilateralism” to believe that the UN can be a significant actor?
No, the Geneva-based International Telecommunication Union’s (ITU) Secretary-General answers with unwavering conviction, it is not: “I think we can still write the digital future we want, a future that will be for everyone, everywhere,” Doreen Bogdan-Martin told a media roundtable organized last week ahead of the ITU’s May ‘AI for Good’ summit. AI for Good is the technology’s largest multi-stakeholder platform within the UN system. Over the last days of May, the summit will bring the Who’s Who of AI to Geneva. AI’s founders and their devoted advocates will share the stage—sometimes remotely—with their doubters and even some of their fiercest critics. For some ITU watchers, having such prominent speakers as Sam Altman, OpenAI’s CEO, address the summit underlines the convening power of the UN agency.
Bogdan-Martin readily admitted to the press last week that the release of ChatGPT in November 2022 took the ITU, like the rest of the world, by surprise, ahead of the 2023 AI for Good event. In her view, her organization and its 14 UN partners had to seize the moment: “A new future was taking shape, one we all felt was filled with tremendous opportunity and at the same time much uncertainty; it was a pivotal moment to assess the benefits and also the risks of this technology and look at how we could move forward together,” she said. ChatGPT had taken the world by storm—it still holds the record of the fastest adoption rate in the world for a new technology, with over a million subscribers in its first five days, and 100 million in its first two months. The ITU was navigating in completely uncharted territory. On the day of the opening of its gathering, the UN Secretariat in New York broadcast a message to the entire organization, asking staff to use extreme caution when using AI, sternly warning of its potential dangers. To help focus the conversation on the task, Bogdan-Martin outlined three scenarios at the outset of the 2023 summit:
“The first one was where AI lived up to its potential. We had dramatic healthcare improvements, food insecurity was no longer a concern, climate action got a big boost. […] In the second scenario, I described the risks of AI outweighing the benefits. Bias, misinformation, job displacement and other ethical and security concerns became the norm. And we missed our chance to deliver on the Sustainable Development Goals [SDGs]. In the last scenario, the AI revolution basically left too many people behind. Only a handful of developed countries that had sufficient computing power and resources were leading AI research and development […]. All three scenarios really hinged on the global community’s capacity to govern AI—and to find that right balance between innovation, inclusion, and regulation.”
Where are we a year later? “The short answer,” Bogdan-Martin says today, “is that everything has changed. Breakthroughs in fields like protein folding, like climate modeling, like neuroscience, are really revolutionizing our understanding of science. Companies are rethinking their business models. At the same time, we have lots and lots of public skepticism about the technology. We have more and more countries asking institutions like ITU to support their capacity-developing initiatives. But I would say probably the most visible—and perhaps the most consequential—would be the change that we have seen in terms of swift policy and regulatory responses from governments. We have witnessed various national government efforts with differing approaches, from the US to India to China, and I think they actually challenge the argument that governments lack initiative when it comes to tech. But at the same time, it's also important to remember that countries are at different stages when we look at their AI journeys.”
The Secretary-General revealed that among the ITU’s 193 Member States, only 15% had indicated in a survey that they have an AI policy.
Technology historians remind us that from the industrial revolution to the internet, every technological “revolution” in developed societies has engendered massive creative disruption across our economies, with some jobs and skills lost and others created. During the early stages of such economic upheavals, the state has often refrained from intervening, letting experimentation and creativity play out. But many experts tell us today that AI cannot be compared with previous technologies. “Since the beginning, AI has been about creating machines that exceed human intelligence in all relevant dimensions. […] And it is very likely that we will succeed. It might be a good thing,” in giving us choices about our future, Berkeley professor and foremost AI expert Stuart Russell told last week’s roundtable. But he also sounded the alarm: “If we create entities that are more intelligent than us, therefore more powerful, it’s not obvious how we retain power over those entities forever. And we have to answer that question before we succeed in creating general purpose intelligent machines. We also must enforce that answer.” “It is not good enough,” Russell said, to just “pass a lot of laws saying this is how we need to build [safely] if they are not enforced. […] The idea that private entities, in particular, get to play Russian roulette with the entire human race for their own private gain is simply unacceptable—no one has the right to play Russian roulette with the future of the human race.”
Russell’s concerns were shared, and spelled out, by Emilia Javorsky of the Future of Life Institute. “We must not develop and deploy AI technologies that pose large-scale and extreme risks to society. This includes societal risks like AI-triggered political chaos and epistemic collapse when collectively we can no longer tell what’s real and what’s fake; ethical risks of bias and discrimination; and [risks] of economic disempowerment of many humans. […] AI can exacerbate large-scale risks like chemical and biological weapons, nuclear security, instability and cyber-attacks. There are military risks of [integrating] AI into military command. There are social risks of extreme power concentration by the architects of these technologies into the hands of a few.” We will need, she said, “to update and create institutions that are up to the task of mitigating the risks through robust governance and safety engineering. We also need to create incentive structures that promote unlocking the real-world benefits that AI has to offer. I think it's worth keeping in mind that the existing incentive structures governing technology development have not really helped us realize the SDGs up until this point.”
AI’s development is essentially industry-driven at this point, with standards and common objectives lacking, as the latest 2024 report on the state of AI underlines, and with staggering financial and investment figures involved: “Industry continues to dominate frontier AI research. In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high,” its authors write. They also note that “frontier models get way more expensive. According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI’s GPT-4 used an estimated $78 million worth of compute to train, while Google’s Gemini Ultra cost $191 million for compute.” Investments in the sector have also skyrocketed, the report points out, octupling from 2022 to 2023 to reach $25.2 billion today.
With such levels of investment in the technology, competitive pressures are pushing companies to release generative AI tools without assessing their risks. “Left to their own devices it looks like AI companies might go in a similar direction to social media companies,” warned Helen Toner last week. Toner recently resigned from OpenAI’s board over the issue of risk prevention. Now a director at Georgetown University’s Center for Security and Emerging Technology, she called for outside auditing of major AI companies such as OpenAI, Google and Microsoft. In the absence of such oversight, these companies are “just grading their own homework,” she deplored.
“Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models,” the authors of the 2024 AI index write. They also indicate that “people across the globe are more cognizant of AI’s potential impact—and more nervous. A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022.”
Fundamentally, we have to change our mindset and understand what AI is about: we are talking about creating “a new digital species,” Mustafa Suleyman, one of the most influential AI leaders, believes.
The battle between the believers and the doubters is raging on. For three days in late May, it will be fought in the heart of Geneva.
-PHM
History Already Tells Us the Future of AI
By Daron Acemoglu and Simon Johnson*
Artificial intelligence and the threat that it poses to good jobs would seem to be an entirely new problem. But we can find useful ideas about how to respond in the work of David Ricardo, a founder of modern economics who observed the British Industrial Revolution firsthand. The evolution of his thinking, including some points that he missed, holds many helpful lessons for us today.
Private-sector tech leaders promise us a brighter future of less stress at work, fewer boring meetings, more leisure time, and perhaps even a universal basic income. But should we believe them? Many people may simply lose what they regarded as a good job – forcing them to find work at a lower wage. After all, algorithms are already taking over tasks that currently require people’s time and attention.
In his seminal 1817 work, On the Principles of Political Economy and Taxation, Ricardo took a positive view of the machinery that had already transformed the spinning of cotton. Following the conventional wisdom of the time, he famously told the House of Commons that “machinery did not lessen the demand for labour.”
Since the 1770s, the automation of spinning had reduced the price of spun cotton and increased demand for the complementary task of weaving spun cotton into finished cloth. And since almost all weaving was done by hand prior to the 1810s, this explosion in demand helped turn cotton handweaving into a high-paying artisanal job employing several hundred thousand British men (including many displaced, pre-industrial spinners). This early, positive experience with automation likely informed Ricardo’s initially optimistic view.
"Algorithms’ takeover of tasks previously performed by workers will not be good news for displaced workers unless they can find well-paid new tasks."
But the development of large-scale machinery did not stop with spinning. Soon, steam-powered looms were being deployed in cotton-weaving factories. No longer would artisanal “hand weavers” be making good money working five days per week from their own cottages. Instead, they would struggle to feed their families while working much longer hours under strict discipline in factories.
As anxiety and protests spread across northern England, Ricardo changed his mind. In the third edition of his influential book, published in 1821, he added a new chapter, “On Machinery,” where he hit the nail on the head: “If machinery could do all the work that labour now does, there would be no demand for labour.” The same concern applies today. Algorithms’ takeover of tasks previously performed by workers will not be good news for displaced workers unless they can find well-paid new tasks.
Most of the struggling handweaving artisans during the 1810s and 1820s did not go to work in the new weaving factories, because the machine looms did not need many workers. Whereas the automation of spinning had created opportunities for more people to work as weavers, the automation of weaving did not create compensatory labor demand in other sectors. The British economy overall did not create enough other well-paying new jobs, at least not until railways took off in the 1830s. With few other options, hundreds of thousands of hand weavers remained in the occupation, even as wages fell by more than half.
"Today’s generative AI has huge potential. Unfortunately, the tech industry seems to have other uses in mind. The big companies developing and deploying AI overwhelmingly favor automation (replacing people) over augmentation (making people more productive)."
Another key problem, albeit not one that Ricardo himself dwelled upon, was that working in harsh factory conditions – becoming a small cog in the employer-controlled “satanic mills” of the early 1800s – was unappealing to handloom weavers. Many artisanal weavers had operated as independent businesspeople and entrepreneurs who bought spun cotton and then sold their woven products on the market. Obviously, they were not enthusiastic about submitting to longer hours, more discipline, less autonomy, and typically lower wages (at least compared to the heyday of handloom weaving). In testimony collected by various Royal Commissions, weavers spoke bitterly about their refusal to accept such working conditions, or about how horrible their lives became when they were forced (by the lack of other options) into such jobs.
Today’s generative AI has huge potential and has already chalked up some impressive achievements, including in scientific research. It could well be used to help workers become more informed, more productive, more independent, and more versatile. Unfortunately, the tech industry seems to have other uses in mind. As we explain in Power and Progress, the big companies developing and deploying AI overwhelmingly favor automation (replacing people) over augmentation (making people more productive).
That means we face the risk of excessive automation: many workers will be displaced, and those who remain employed will be subjected to increasingly demeaning forms of surveillance and control. The principle of “automate first and ask questions later” requires – and thus further encourages – the collection of massive amounts of information in the workplace and across all parts of society, calling into question how much privacy will remain.
"It would be naive to trust in the benevolence of business and tech leaders."
Such a future is not inevitable. Regulation of data collection would help protect privacy, and stronger workplace rules could prevent the worst aspects of AI-based surveillance. But the more fundamental task, Ricardo would remind us, is to change the overall narrative about AI. Arguably, the most important lesson from his life and work is that machines are not necessarily good or bad. Whether they destroy or create jobs depends on how we deploy them, and on who makes those choices. In Ricardo’s time, a small cadre of factory owners decided, and those decisions centered on automation and squeezing workers as hard as possible.
Today, an even smaller cadre of tech leaders seem to be taking the same path. But focusing on creating new opportunities, new tasks for humans, and respect for all individuals would ensure much better outcomes. It is still possible to have pro-worker AI, but only if we can change the direction of innovation in the tech industry and introduce new regulations and institutions.
As in Ricardo’s day, it would be naive to trust in the benevolence of business and tech leaders. It took major political reforms to create genuine democracy, to legalize trade unions, and to change the direction of technological progress in Britain during the Industrial Revolution. The same basic challenge confronts us today.
*Daron Acemoglu, Institute Professor of Economics at MIT, is a co-author (with Simon Johnson) of Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity
UNRWA Still Facing Attacks and Shortfall After Being Largely Cleared in Probe
Switzerland Continues to Withhold Funding
Israel has failed to provide evidence to back up its allegations that some aid workers from the UN’s main humanitarian agency in Gaza took part in the Hamas terror attacks of October 7, an independent panel found this week. But Switzerland is among a few recalcitrant donors still receptive to heavy lobbying by Israel and its allies, including the Geneva-based NGO ‘UN Watch’.
Sixteen countries froze funding after Israel’s claims, leaving a $450 million shortfall just as famine set in among 2.3 million Palestinians living in the occupied Gaza Strip, where Hamas authorities say nearly 35,000 have died in seven months of Israeli bombing.
Switzerland, whose annual contribution to UNRWA was $20 million, is, along with Britain, among a handful of donors whose funding remains suspended. Germany, UNRWA’s second biggest donor, announced on Tuesday it would resume contributions, following donors from Japan to the European Union.
The Swiss foreign ministry said it was “analyzing in detail” the independent report led by former French Foreign Minister Catherine Colonna, and would confer with the parliament’s foreign affairs committees on future funding. UNRWA’s Commissioner-General Philippe Lazzarini is scheduled to give a press conference in Geneva on April 30, but is not expected to meet with Swiss authorities, UNRWA said.
“Political Satellite”
Carlo Sommaruga, a Geneva MP from the Social Democratic Party serving on one of the Swiss parliament’s two committees on foreign policy, voiced outrage at the Swiss position of withholding funds from UNRWA.
“Switzerland is marginalizing itself as a political satellite of Israel,” Sommaruga told The Geneva Observer, also likening the Bern government to a “little poodle.”
“It’s scandalous—no funds to help people undergoing an armed genocidal aggression,” he said, citing the landmark ruling by the International Court of Justice in January, which found it plausible that Israel’s actions in Gaza could amount to genocide.
“[Swiss Foreign Minister Ignazio] Cassis has said UNRWA is part of the problem, not the solution. That is the narrative of Israel, they want UNRWA to disappear,” Sommaruga said. Referring to UN Watch, he added: “They deliver non-stop information to certain parliamentarians to maintain pressure on the Federal Council [the Swiss government]. They are received by Ignazio Cassis and his office.”
UN Watch has blocked this reporter on Twitter and has not replied to previous inquiries, following our February story on its criticism of the ICRC.
Pierre-André Page, an MP from the right-wing Swiss People’s Party (UDC) also serving on the foreign policy committee, told Swiss public broadcaster RTS on Monday: “I think it was important to stop funding given that there was this accusation made by Israel. UNRWA’s director-general Philippe Lazzarini immediately fired 12 people, which, after all, is proof to me that there was a problem.”
“And now we have to shed full light. I have invited Mr. Hillel Neuer of UN Watch to address us, to have a counterpoint to this analysis so that we can really evaluate the situation. It is really important to have both viewpoints,” he said.
Page, asked whether he might reconsider funding after the UN-mandated report said that UNRWA is “irreplaceable and indispensable,” replied: “I think it is important to maintain transparency to ensure the neutrality of UNRWA. To my mind, that element is vital, before we transfer any funds. You cannot transfer sums to a terrorist association.” It was not clear if the MP was referring only to Hamas, or to UNRWA.
Israeli Lobbying
Laurence Fehlmann Rielle, a Geneva MP from the Social Democratic Party who also serves on the foreign policy committee, called for a resumption of Swiss funding in light of the report’s findings and the “catastrophic” situation in Gaza. Asked on RTS why the United States and other donors would not release funds, she said: “It’s the Israeli lobby that is playing a role. And it also plays a role in Switzerland with some elected officials. It is extremely strong in wanting to take down UNRWA.”
Fehlmann Rielle, asked whether some Swiss politicians, notably in the UDC, might be subjected to manipulation or lobbying, said: “Lobbying in any case. In fact, a kind of ideological blindness which makes associations that are completely ridiculous, that UNRWA equals Hamas.”
UNRWA commissioner-general Philippe Lazzarini told reporters in New York on Tuesday he had urged the Security Council to support “an independent investigation and accountability for the blatant disregard of UN premises, UN staff and UN operations in the Gaza Strip. […] As of today, it is 180 UNRWA staff that have been killed, we had more than 160 premises which have been damaged or completely destroyed, with 400 people at least having been killed while they were seeking the protection of the UN flag.”
The two-month review, carried out by Colonna and three Nordic research institutes, largely vindicated the UN Relief and Works Agency for Palestine Refugees (UNRWA), saying it had a “robust framework,” but that “neutrality-related issues persist.”
Lazzarini, who immediately sacked 12 employees implicated by Israel in late January, welcomed the Colonna report, saying his embattled agency would work to implement its 50 recommendations, which include strengthening internal audits and project oversight. “I hope that with this report and the measures that we will be putting in place that the last group of donors will get the necessary confidence to come back as a donor and partner of the agency,” the Swiss-Italian veteran humanitarian said.
A separate investigation by the UN’s Office of Internal Oversight Services (OIOS) is still underway into alleged UNRWA staff participation in the Hamas-led assault in Israel that killed 1,200 people, mostly civilians, and saw roughly 250 taken hostage.
Bridging the Funding Gap
The United States, UNRWA’s largest donor, has blocked its annual funding ($300 million to $400 million) until March 2025, Lazzarini noted, adding: “My task now is to try to bridge the gap left behind. Today I can say that we have funding […] until the end of June which will bring us into July.”
Grassroots digital contributions of $100 million have poured in since the war began, he said. Meanwhile UNRWA is seeking $1.2 billion to confront the unprecedented humanitarian crisis in Gaza.
Refugee Status
Israel’s Ministry of Foreign Affairs spokesperson Oren Marmorstein repeated its unsubstantiated allegations hours after Colonna’s report: “Hamas has infiltrated UNRWA so deeply that it is no longer possible to determine where UNRWA ends and where Hamas begins.”
“Israel calls on the donor countries to refrain from transferring their taxpayers’ money to UNRWA-Gaza, as these funds will go to the Hamas terrorist organization, and that violates legislation in the donor countries themselves,” he said.
UNRWA, whose 33,000 aid workers deployed across the Middle East—including 13,000 in Gaza—provide health, education, and social services to Palestinian refugees, has been challenged throughout its 75-year existence, but never with such virulence, Lazzarini said.
“The real intent behind the attack on UNRWA is of a political nature, because it has as an objective to strip Palestinians of the refugee status, to start with in Gaza, and then East Jerusalem and the West Bank. And basically, this is exactly what we heard at the [Security] Council from the Israeli ambassador. That this is indeed a stated objective today,” he said.
“The agency has never been a target of an open campaign for the total dismantlement of its activities in Gaza—and possibly beyond. So I think the [recent campaign] we have gone through is quite unique, I would say, in its ferocity but also in its scope,” Lazzarini said.
-SN
Today's Briefing: Philippe Mottaz - Stephanie Nebehay
Op-Ed: Daron Acemoglu - Simon Johnson
Editorial research and assistance: David Jenny
Edited by: Dan Wheeler
© The Geneva Observer 2024 and Project Syndicate - All rights reserved