
Artificial intelligence (AI) offers tremendous promise to solve problems and improve the quality of life across the globe. It is a transformative, general-purpose technology with the potential to influence entire economies and fundamentally change society.

From predicting the structure of proteins to controlling nuclear fusion reactions, the potential benefits are vast. This even extends to the creation of robotic pollinators for plants. There are myriad more mundane tasks that AI improves, including medical record keeping, customer service chatbots, and optimizing supply chains.

AI has accelerated drug discovery, enabled more accurate climate modeling, and aided in the detection of astronomical phenomena. Moreover, AI-powered language models are revolutionizing how we interact with computers, making it easier to access information, generate content, and automate repetitive tasks. While the full extent of AI's impact is still to be seen, its transformative potential across various domains is undeniable. Management consulting firm McKinsey estimates that generative AI, the latest AI breakthrough, could add up to $4.4 trillion annually to the global economy. Amy Webb, CEO of the Future Today Institute, described at SXSW 2024 how AI is the driver of a "potent and pervasive" technology "supercycle."

While AI's benefits are many, people are losing trust in the technology and the companies that produce it. This creates a paradox of progress, where AI's vast potential is shadowed by diminishing trust. The path to a brighter future lies in the collective efforts to rebuild this trust.

The AI trust tightrope

We are also seeing that as the flywheel of innovation speeds up, trust in innovation declines. This is particularly noticeable when it comes to AI innovation and the companies that build the technology. That is one conclusion that can be drawn from the 2024 Edelman Trust Barometer deep dive on the technology sector and AI.

The results show a clear decline of trust in AI companies since 2019, falling from 62 percent in 2019 to 54 percent in 2024, as measured across 24 countries. That is an 8-point fall in five years, from clearly trusted to neutral. It is even worse when looking only at the U.S., where trust in AI companies fell from 50 percent to 35 percent over this period, a 15-point decline into clearly mistrusted territory.

Why the decline of trust in AI companies?

There are many causes for this slippage. The COVID-19 pandemic certainly did not help, as it contributed to declines in trust in science, technology, and institutions of authority. On top of that, the five years from 2019 until now have seen dramatic advances in AI capabilities, but also heightened awareness of the risks that come with the technology.

For example, during this time our ability to trust what we see and hear has been continually eroded. Deepfakes, synthetically altered or generated images, first appeared in late 2017. Initially, it was not easy to create these fake images and their quality was poor, but this has become dramatically easier since, and the output is now photorealistic. Tests have shown that people have a tough time distinguishing real faces from those that are AI-generated. What is more, they respond more positively to the generated images. Five months after the launch of the AI-powered DALL-E image generator in 2021, 1.5 million people were generating 2 million images a day.

This was followed by Midjourney, a similar tool, which was used to produce the winning entry in an art competition in 2022. Even though the entrant disclosed that his submission was created with an AI tool, the judges found it convincing enough to award it first prize. From that point forward, the distinction between what is real and what is fake has become even more difficult. This was also the moment that started a backlash against AI and the companies that develop the technology, the through line of which can be traced up to the actors' and writers' strikes in 2023.

It is not only AI images that erode trust. In recent elections, social media platforms were flooded with AI-produced misinformation, generated using the output of large language models such as ChatGPT. In 2023, the Washington Post reported that the use of AI "is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminates false information about elections, wars and natural disasters."

It is difficult to trust what you see. With the rise of AI-powered speech generation, it is also now not always easy to trust what you hear. There have been scams that clone the voices of children to extort money from concerned loved ones, and similar schemes to steal funds from banks. Even more sophisticated was the deepfake scam that cost a multinational firm $25 million.

Fortunately, policymakers are now aware of the deepfake challenges posed by AI. For example, in the U.S. the National Institute of Standards and Technology (NIST) has explored ways of identifying AI-generated content and tracking its origin. Technology companies are increasingly doing their part to counteract deepfakes through watermarking and provenance tracking.
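To make the watermarking idea concrete, here is a minimal sketch assuming a simple least-significant-bit scheme: a short provenance tag is hidden in the low-order bits of an image's pixels and read back out later. The tag string, function names, and approach are illustrative inventions, not NIST guidance or any company's production method; real watermarks are designed to survive compression and editing, which this toy does not.

import numpy as np

TAG = "AI-GENERATED"  # hypothetical provenance tag, not a real standard

def embed_tag(pixels, tag):
    """Hide the tag's bits in the least-significant bits of the first pixels."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    out = pixels.flatten()  # flatten() returns a copy, safe to modify
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return out.reshape(pixels.shape)

def extract_tag(pixels, length):
    """Read length bytes back out of the least-significant bits."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode()

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in image
marked = embed_tag(image, TAG)
assert extract_tag(marked, len(TAG)) == TAG  # the tag is recoverable from the pixels

Provenance tracking takes the complementary route: rather than hiding a mark in the pixels, standards such as C2PA attach cryptographically signed metadata recording where a piece of content came from.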

Still, other trust challenges remain, exemplified by recent news headlines such as "U.S. Must Move '...'" and "Intelligence officials warn ...". These headlines are effective at catching readers' attention, and they contribute to the decline of trust in AI and the companies that develop the technology.

These concerns are in addition to ongoing issues of bias. There are the biases of the companies that build the AI technologies, evident in their guardrails: the guidance developers provide about what text or images their product can and cannot produce.
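To illustrate what such a guardrail looks like in code, here is a stripped-down sketch assuming a hypothetical keyword blocklist and a stand-in generate() function; real systems use trained classifiers and far more nuanced policies than this.

BLOCKED_TOPICS = {"weapon synthesis", "credential theft"}  # illustrative policy, not any vendor's real list

def violates_policy(text):
    """Naive keyword screen; production guardrails use trained classifiers."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def generate(prompt):
    """Stand-in for a real model call."""
    return f"model output for: {prompt}"

def guarded_generate(prompt):
    if violates_policy(prompt):  # screen the request
        return "Request declined by policy."
    output = generate(prompt)
    if violates_policy(output):  # screen the response as well
        return "Response withheld by policy."
    return output

print(guarded_generate("explain photosynthesis"))  # passes both screens

The shape is the point: someone decides what goes in the blocklist, and those decisions are exactly where the developers' biases enter.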

Even more significant is the problem of bias in the underlying data used to train the models. Much of this data is scraped from across the internet and reflects all manner of human biases, baking them into the models once they are trained. In response to bias complaints, companies have changed their models and are still trying to align their output with ethical standards and societal expectations, underscoring how advances in AI can inadvertently magnify existing societal issues. Demonstrating genuine concern for public welfare would help rebuild trust.
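A toy example, invented here for illustration, makes the mechanism visible: a model fitted to skewed data simply reproduces the skew.

from collections import Counter

# Deliberately imbalanced miniature training data (invented for illustration)
corpus = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

counts = {}
for role, pronoun in corpus:
    counts.setdefault(role, Counter())[pronoun] += 1

def predict_pronoun(role):
    """A 'model' that echoes the majority class it was trained on."""
    return counts[role].most_common(1)[0][0]

print(predict_pronoun("doctor"))  # -> "he": the data's skew is now the model's
print(predict_pronoun("nurse"))   # -> "she"

Web-scale models inherit subtler versions of the same skew, which is why aligning their output after the fact is so difficult.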

Companies building and using AI must work to rebuild trust, especially as the technology becomes even more advanced and pervasive. Many companies have made positive efforts, including the establishment of codes for AI ethics that cover issues of transparency, accountability, fairness, privacy, and security. These are the basics of responsible AI. Trust can only be built by embedding these ethical principles into AI development processes and applications, and by communicating openly with the public.

Yet the pace of advance and the nature of competition create pressures to release products before they are thoroughly evaluated and explained. If possible, it would be good to slow down the pace of product introduction until testing is complete. Short of that, companies should be as transparent as possible about how they assess and minimize harmful uses. Working closely with regulators and policymakers to develop sensible AI governance frameworks would help too, as individual privacy and public safety should be at least as important as profits.

With a greater collaborative ethos among all stakeholders (developers, policymakers, businesses, and the public), a commitment to responsible AI and an unwavering dedication to public engagement will lead to improved trust in the transformative power of AI. In doing so, we can create a world that reflects our highest aspirations.

Gary Grossman is Senior Vice President, Global Lead of the Edelman AI Center of Excellence.