DAVOS, Switzerland — Artificial intelligence is easily the biggest buzzword for world leaders and corporate bosses diving into big ideas at the World Economic Forum’s glitzy annual meeting in Davos. Breathtaking advances in generative AI stunned the world last year, and the elite crowd is angling to take advantage of its promise and minimize its risks.
In a sign of ChatGPT maker OpenAI’s skyrocketing profile, CEO Sam Altman made his Davos debut to rock-star crowds, with his benefactor, Microsoft CEO Satya Nadella, hot on his heels.
Illustrating AI’s geopolitical importance like few technologies before it, artificial intelligence was on the lips of world leaders from China to France, visible across the Swiss Alpine town and percolating through afterparties.
Here’s a look at the buzz:
The leadership drama at the AI world’s much-ballyhooed chatbot maker followed Altman and Nadella to the swanky Swiss snows.
Altman’s sudden firing and swift rehiring last year cemented his position as the face of the generative AI revolution, but questions about the boardroom bustup and OpenAI’s governance lingered. He told a Bloomberg interviewer that he’s focused on getting a “great full board in place” and deflected further questions.
At a Davos panel on technology and humanity Thursday, a question about what Altman learned from the upheaval came at the end.
“We had known that our board had gotten too small, and we knew that we didn’t have a level of experience we needed,” Altman said. “But last year was such a wild year for us in so many ways that we sort of just neglected it.”
Altman added that for “every one step we take closer to very powerful AI, everybody’s character gets, like, plus 10 crazy points. It’s a very stressful thing. And it should be because we’re trying to be responsible about very high stakes.”
From China to Europe, top officials staked their positions on AI as the world grapples with regulating the rapidly developing technology that has big implications for workplaces, elections and privacy.
The European Union has devised the world’s first comprehensive AI rules ahead of a busy election year. AI-powered misinformation and disinformation is the biggest risk to the global economy, threatening to erode democracy and polarize society, according to a World Economic Forum report released last week.
Chinese Premier Li Qiang called AI “a double-edged sword.”
“Human beings must control the machines instead of having the machines control us,” he said in a speech Tuesday.
“AI must be guided in a direction that is conducive to the progress of humanity, so there should be a red line in AI development, a red line that must not be crossed,” Li said, without elaborating.
China, one of the world’s centers of AI development, wants to “step up communication and cooperation with all parties” on improving global AI governance, Li said.
China has released interim regulations for managing generative AI, but the EU broke ground with its AI Act, which won a hard-fought political deal last month and awaits final sign-off.
European Commission President Ursula von der Leyen said AI is “a very significant opportunity, if used in a responsible way.”
She said “the global race is already on” to develop and adopt AI, and touted the 27-nation EU’s efforts, including the AI Act and a program pairing supercomputers with small and midsized businesses to train large AI models.
French President Emmanuel Macron said he’s a “strong believer” in AI and that his country is “an attractive and competitive country” for the industry. He played up France’s role in helping coordinate regulation on deepfake images and videos created with AI as well as plans to host a follow-up summit on AI safety after an inaugural gathering in Britain in November.
The letters “AI” were omnipresent along the Davos Promenade, where consulting firms and tech giants are among the groups that swoop onto the main drag each year, renting out shops and revamping them into showcase pavilions.
Inside the main conference center, a giant digital wall displayed rolling images of AI art and computer-generated conceptions of wildlife and nature, such as exotic birds and tropical streams.
Davos-goers who wanted to delve more deeply into the technical ins and outs of artificial intelligence could drop in to sessions at the AI House.
Generative AI systems like ChatGPT and Google’s Bard captivated the world by rapidly spewing out new poems, images and computer code and are expected to have a sweeping impact on life and work.
The technology could help give a boost to the stagnating global economy, said Nadella, whose company is rolling out the technology in its products.
The Microsoft chief said he’s “very optimistic about AI being that general purpose technology that drives economic growth.”
Business leaders predicted AI will help automate mundane work tasks or make it easier for people to do advanced jobs, but they also warned that it would threaten workers who can’t keep up.
A survey of 4,700 CEOs in more than 100 countries by PwC, released at the start of the Davos meetings, found that 14% think they’ll have to lay off staff because of the rise of generative AI.
“There isn’t an area, there isn’t an industry that’s not going to be impacted” by AI, said Julie Sweet, CEO of consulting firm Accenture.
For those who can move with the change, AI promises to transform tasks like computer coding and customer relations and streamline business functions like invoicing, IBM CEO Arvind Krishna said.
“If you embrace AI, you’re going to make yourself a lot more productive,” he said. “If you do not … you’re going to find that you do not have a job.”
During a session featuring Meta chief AI scientist Yann LeCun, talk about risks and regulation led to the moderator’s hypothetical example of “infinitely conversant sexbots” that could be built by anyone using open source technology.
Taking the high road, LeCun replied that AI can’t be dominated by a handful of Silicon Valley tech giants if it’s going to serve people around the world with different languages, cultures and values.
“You do not want this to be under the control of a small number of private companies,” he said.
Chan reported from London. AP Technology Writer Matt O’Brien contributed from Providence, Rhode Island.
This story has been corrected to show the U.K. AI safety summit was in November not October.