Artificial Intelligence in the news:
- A drug discovered and designed completely by artificial intelligence to treat a chronic lung disease advanced to mid-stage human trials, according to its Chinese investor.
- In February, ChatGPT passed the US Medical Licensing Exam with "understandable reasoning and valid clinical insights," according to a research team at Massachusetts General Hospital.
- OpenAI’s image-making model DALL-E 2 produced the first known completely AI-generated movie.
- Earlier this year, the Mauritshuis Museum in the Hague replaced Vermeer’s Girl with a Pearl Earring with an AI-generated version of the original.
- An artist called Ghostwriter977 released an AI-generated, completely fake Drake and The Weeknd song that went viral before it was pulled from streaming services.
Every day seems to bring news of ever-more amazing applications of generative AI. It’s as if anything a human can do, AI can do better, faster, cheaper. But will humanity be in control as AI gets “smarter”?
Rebecca Finlay, CEO of the Partnership on AI, is focused on producing a positive answer. In a recent New Thinking for a New World podcast, she explained, “In my mind, AI is not good, and AI is not inherently bad—but it’s also not neutral. It is a technology that is developed based on human choices that respond to market conditions that reside within political systems and structures. It doesn’t exist outside of the conditions we place upon it, or the creativity that we instill within it.”
Therefore, she concludes, “We can make choices about how we use it and how we develop it and how we deploy it.”
Finlay insists that we aren’t—or, at least, ought not to be—talking about generally capable intelligence that is more or less human, even though all of us have a tendency to impute human motives and capabilities to intelligent systems. Rather, deep learning or reinforcement systems like ChatGPT or Bing AI or LaMDA “are not rules-based in the traditional sort of rules-based artificial intelligence sense. These systems are algorithms and statistical methodologies which are trained on top of data that allow them to be very capable of making predictions, depending on what they are optimized to do.” That allows AI “to augment human intelligence and capability and replace some of the tasks that [humans are] currently doing.”
Of course, she recognizes the inherent challenge, “How do you allow for the creative, positive and beneficial uses of artificial intelligence, and protect against the malicious and harmful uses of artificial intelligence?” Finlay believes not only that it’s possible to achieve that balance, but also that the scientists and engineers developing AI ought to prioritize its safe evolution.
How to start? “If we do not instill a culture of self-regulation and anticipatory risk assessment at the research stage of artificial intelligence, we’re not going to be able to get it in all the other stages.” She points to a report issued last year by the U.S. National Academies of Sciences as well as work done by the Partnership on AI to argue that such a culture can be built and sustained.
Finlay clearly disagrees with former Google CEO Eric Schmidt and others who argue that nothing more than self-regulation is possible. She recognizes that self-regulation is not enough and worries that the self-regulation versus regulation debate focuses incorrectly on the two extremes of hard law and no law. Instead, she advocates “a whole spectrum of ways in which public policymakers can be intervening and interacting together with responses that are non-governmental in nature in order to create a healthy ecosystem.”
By way of example, Finlay cites a current Partnership on AI initiative among large AI labs, academia and civil society to think collaboratively about safety protocols for these large-scale AI models that are fit for purpose today and can also evolve to meet future risks. “What are the governance systems and governance structures that need to be put around those protocols in order to manage them over time?”
She points to one possible governmental approach embedded in the European Union’s recently approved AI Act. That legislation assesses the degree of risk posed by a given AI system and calibrates accountability and oversight accordingly. Thus, Finlay says, “There are types [of AI] that are banned completely and there are some that are incredibly low-risk” and therefore are unrestricted.
Of course, there is always the risk of malicious actors, but she sees no reason why the kinds of frameworks and institutions that hold individuals and institutions liable and accountable in other areas—like pharmaceuticals—can’t be developed and deployed for AI.
At the same time, technologist Geoffrey Hinton, historian Yuval Noah Harari and even Henry Kissinger worry that self-perpetuating and self-generating AI has the potential to threaten mankind’s future existence. Finlay won’t engage; she wants to leave hypothetical future challenges for the future. Making sure that AI benefits people and society today is her immediate priority, and she believes it can be done. “We need to think about who owns the system, who markets the system, who is…using it for an application which is malicious…then ensure that our legal systems and structures are in place to act.”
What does AI think about all this? We asked Bing:
Q. What do you call a law that regulates AI?
A. A suggestion
What do you think? Not about the joke, but about whether AI can be regulated. TELL US WHAT YOU THINK BY COMMENTING BELOW
Rebecca Finlay recently spoke with Alan Stoga as part of the Tällberg Foundation’s “New Thinking for a New World” podcast series. Listen to their conversation here or find us on a podcast platform of your choice (Apple Podcasts, Spotify, Google Podcasts, YouTube, etc.).
ABOUT OUR GUEST
Rebecca Finlay is the CEO of Partnership on AI, overseeing the organization’s mission and strategy. At PAI, Rebecca ensures that its global community of Partners works together so that developments in AI advance positive outcomes for people and society. Most recently, she was Vice President, Engagement and Public Policy at CIFAR, where she founded the institute’s global knowledge mobilization practice, bringing together experts in industry, civil society, and government to accelerate the societal impact of CIFAR’s research programs.