A Data-Agenda at Davos: Promoting the Promise of AI

In the buildup to this week’s World Economic Forum Annual Meeting in Davos, Switzerland, the talk of polycrisis becoming permacrisis painted a picture of impending doom. These terms have been used to describe the global condition today, citing the “cascading and connected crises” triggered by war and geopolitics, economic uncertainty, and environmental concerns, and their persistence. Yes, there are certainly many examples of hardships and horrors in the world today, but the promise of new AI technologies as a possible solution also infiltrated the conversation at Davos.

From left: Oliver Mellan, Karl Johan Stein, Anna Rose, Alex D’Anna, Jennifer Belissent, Ryan Green, Russell Smith

On Wednesday, Snowflake participated in the Swedish Lunch, the largest affiliated event in Davos during the World Economic Forum (WEF). Fortunately, the “lunch” actually spanned breakfast, a pre-lunch reception, lunch, and then discussion panels and networking into the late afternoon — a full day to promote the Swedish values of sustainability, equality, innovation and inclusive growth, all prerequisites to this year’s theme of Peace and Security.  

The three panels were introduced by the Swedish Minister for International Development Cooperation and Foreign Trade, Johan Forssell, who ended his remarks with a call to build better resilience against crisis through more cooperation — a tearing down of trade barriers for goods, services and data flows. Yes, data flows!

The panel on Peace, Security and the Polycrisis addressed the interconnectedness of many of today’s global crises. It may be common knowledge that economic growth preserves stability, but I had no idea that the physiological effects of pollution could impact something like unpremeditated crime. I had to look that one up and, in fact, there is research suggesting that air pollution is correlated with violent crime. The world we live in is complex and requires that we rethink our definitions of security and the path to peace. The two are like the chicken and the egg; which comes first? But the key to understanding lies in looking more broadly at the diverse threats to security — national security, ecological security and human security. For example, climate change leads to an ecological crisis, which can lead to food insecurity, one of the principal elements of human insecurity, and potentially to war.

A second panel focused specifically on Human Security at the Crossroads of these multiple dimensions, like the example of food insecurity. A panelist, the CEO of a nonprofit committed to children’s empowerment through education, explained the role education plays in human security. Once again, it’s complex. Proper nutrition is a requirement for education and economic development. An underdeveloped brain cannot fulfill its potential. Without nutrition, a child may never get out of poverty. This interconnectedness makes solutions hard to find.

The promise and challenge of AI 

But there is hope. The third panel on the need for Patience, Capital and Leadership focused again on the interplay of forces. Peace and security have a huge impact on the economy and investment, and vice versa. Entrepreneurship and investment can contribute to stability and resilience, but they require both leadership and capital, or more specifically “patient capital,” a long-term approach to building value. And, somewhat ironically, in these times of polycrisis, investors are looking for disruption, which they have found in AI innovation. Eureka!

New AI tools can help sort through the complexity and identify new solutions to persistent challenges. For example, the Stockholm International Peace Research Institute (SIPRI) has been working with the UN and the private sector to understand roles and risks in conflict settings. How can they help in the pre-crisis stages? Can they identify risks and do something about them early on to prevent the crisis itself? Answering these questions with AI requires data, and the tools and skills to identify patterns in it. As we’ve highlighted at Snowflake, the key to effective and responsible AI is a robust data foundation.

Meanwhile, on the streets of Davos at the annual meeting itself, AI also took center stage. Data can help us address the larger topics of the annual meeting — achieving security and cooperation in a fractured world; creating jobs and growth for a new era; defining a long-term strategy for climate, nature and energy; and, of course, enabling AI itself as a driving force for the economy and society — as well as Rebuilding Trust in the future and in each other, the overall 2024 theme.

But let’s look closer at “trust.” The definition of trust (yes, I looked it up) is a belief in the reliability, truth and ability of someone or something. Do we want to rely on “belief”? I’m not so sure. That’s where data and data collaboration can help. We can use data to better understand the interconnectedness of these issues, and to identify ways to mitigate the challenges they bring. We can also use data to measure, analyze and demonstrate outcomes. Seeing is believing.

Wait a minute, you might be thinking. AI’s potential for disinformation and misinformation was also identified as one of the three biggest risks facing the world. That suggests a paradox of AI: enabling trust, but not necessarily trusted itself.

Yet that’s no reason to throw up our hands and give up on the technology. As we learned during the Cold War era, we can “trust, but verify.” We must encourage and enable data diversity to mitigate bias. We need greater transparency in how models work and in the sources they use. Just as we hold journalists and academics accountable by requiring sources, the same should be required of an AI system and its users. New technologies and methods enable this accountability, which is why we shouldn’t put the brakes on AI development, as some have argued. Research and development will deliver tools that provide the guardrails and governance to help rebuild trust. Organizations such as the WEF’s AI Governance Alliance “champion responsible global design and release of transparent and inclusive AI systems.”

Finally, we must remember that we have agency in how we apply and govern AI. An AI system is like an employee. We decide to “hire” it to do a job, and we must monitor and measure its performance. So let’s drop terms like polycrisis and permacrisis from our collective vocabularies and instead focus on the perma-promise of collaboration and innovation to build not just trust, but the future itself. 
