Can AI Be Regulated?

Jun 29, 2023



Artificial Intelligence in the news:

  • A drug discovered and designed completely by artificial intelligence to treat a chronic lung disease advanced to mid-stage human trials, according to its Chinese investor.
  • In February, ChatGPT passed the US Medical Licensing Exam with “understandable reasoning and valid clinical insights,” according to a research team at Massachusetts General Hospital.
  • OpenAI’s image-making model DALL-E 2 produced the first known completely AI-generated movie.
  • Earlier this year, the Mauritshuis Museum in The Hague replaced Vermeer’s Girl with a Pearl Earring with an AI-generated version of the original.
  • An artist called Ghostwriter977 released an AI-generated, completely fake Drake and The Weeknd song that went viral before it was pulled from streaming services.

Every day seems to bring news of ever-more amazing applications of generative AI. It’s as if anything a human can do, AI can do better, faster, cheaper. But will humanity be in control as AI gets “smarter”?

Rebecca Finlay, CEO of the Partnership on AI, is focused on producing a positive answer. In a recent New Thinking for a New World podcast, she explained, “In my mind, AI is not good, and AI is not inherently bad—but it’s also not neutral. It is a technology that is developed based on human choices that respond to market conditions that reside within political systems and structures. It doesn’t exist outside of the conditions we place upon it, or the creativity that we instill within it.”

Therefore, she concludes, “We can make choices about how we use it and how we develop it and how we deploy it.”

Finlay insists that we aren’t—or, at least, ought not to be—talking about generally capable intelligence that is more or less human, even though all of us have a tendency to impute human motives and capabilities to intelligent systems. Rather, deep learning or reinforcement systems like ChatGPT or Bing AI or LaMDA “are not rules-based in the traditional sort of rules-based artificial intelligence [sense]. These systems are algorithms and statistical methodologies which are trained on top of data that allow them to be very capable of making predictions, depending on what they are optimized to do.” That allows AI “to augment human intelligence and capability and replace some of the tasks that [humans are] currently doing.”

Of course, she recognizes the inherent challenge, “How do you allow for the creative, positive and beneficial uses of artificial intelligence, and protect against the malicious and harmful uses of artificial intelligence?” Finlay believes not only that it’s possible to achieve that balance, but also that the scientists and engineers developing AI ought to prioritize its safe evolution.

How to start? “If we do not instill a culture of self-regulation and anticipatory risk assessment at the research stage of artificial intelligence, we’re not going to be able to get it in all the other stages.” She points to a report issued last year by the U.S. National Academies of Sciences, Engineering, and Medicine as well as work done by the Partnership on AI to argue that such a culture can be built and sustained.

Finlay clearly disagrees with former Google CEO Eric Schmidt and others who argue that nothing more than self-regulation is possible. She recognizes that self-regulation is not enough and worries that the self-regulation versus regulation debate focuses incorrectly on the two extremes of hard law and no law. Instead, she advocates “a whole spectrum of ways in which public policymakers can be intervening and interacting together with responses that are non-governmental in nature in order to create a healthy ecosystem.”

By way of example, Finlay cites a current Partnership on AI initiative among large AI labs, academia, and civil society to think collaboratively about safety protocols that are fit for purpose today for these large-scale AI models and that can also evolve to address future risk. “What are the governance systems and governance structures that need to be put around those protocols in order to manage them over time?”

She points out one possible governmental approach embedded in the European Union’s recently approved AI Act. That legislation assesses varying degrees of risk for a given AI system and calibrates accountability and oversight accordingly. Thus, Finlay says, “There are types [of AI] that are banned completely and there are some that are incredibly low-risk” and therefore are unrestricted.

Of course, there is always the risk of malicious actors, but she sees no reason why the kinds of frameworks and institutions that hold individuals and institutions liable and accountable in other areas—like pharmaceuticals—can’t be developed and deployed for AI.  

At the same time, technologist Geoffrey Hinton, historian Yuval Noah Harari and even Henry Kissinger worry that self-perpetuating and self-generating AI has the potential to threaten mankind’s future existence. Finlay won’t engage; she wants to leave hypothetical future challenges for the future. Making sure that AI benefits people and society today is her immediate priority and she believes it can be done. “We need to think about who owns the system, who markets the system, who is…using it for an application which is malicious…then ensure that our legal systems and structures are in place to act.”

What does AI think about all this? We asked Bing:

Q.  What do you call a law that regulates AI?
A.   A suggestion


What do you think? Not about the joke, but about whether AI can be regulated. Tell us what you think by commenting below.

Rebecca Finlay recently spoke with Alan Stoga as part of the Tällberg Foundation’s “New Thinking for a New World” podcast series. Listen to their conversation here or find us on a podcast platform of your choice (Apple Podcasts, Spotify, Google Podcasts, YouTube, etc.).


Rebecca Finlay is the CEO of the Partnership on AI, overseeing the organization’s mission and strategy. At PAI, Rebecca ensures that its global community of Partners works together so that developments in AI advance positive outcomes for people and society. Most recently, she was VP, Engagement and Public Policy at CIFAR, where she founded the Institute’s global knowledge mobilization practice, bringing together experts in industry, civil society, and government to accelerate the societal impact of CIFAR’s research programs.


  1. Haingo H Rajaonarison

    I think that AI can be regulated. Let us think about the past and the present. A calculator was a type of AI when it was first used, and then we had graphing calculators, but they did not replace math or calculus teachers. And computers make billions of calculations per second, but they have been well regulated, I guess.
    We can use synergy and work together on the best way to regulate AI for the benefit of humankind.

  2. Jobert Ngwenya

    I think AI can be regulated. As AI technology continues to advance and permeate various aspects of society, concerns arise regarding its ethical implications, potential biases, privacy issues, and impact on education, among others. Regulating AI is a necessary step to address these concerns and ensure responsible development and deployment of AI systems.

  3. Tom Rogers Muyunga-Mukasa

    There are good and bad sides of AI, just like medicines. We have to teach people what is beneficial and what is malicious. When do we begin to define AI?

    I am based in Uganda and work in rural communities where I am using my knowledge and skills to organise communities to participate in health-seeking and life-promoting practices. I made a conscious choice to relocate and work among peripheral communities that are not near built cities with attendant amenities. Here I still see persons who carry radios and own television sets with diodes. I know of families where the radio is a well-guarded instrument, playing when the household head says so and tuned to play that one favourite station. I have a case scenario of a household owning a radio with a variety of stations ranging from Medium Wave (MW) and Frequency Modulated (FM) to Short Wave (SW). But this particular household prefers a particular FM station at a particular band. They are assured of switching it off and on and finding this favourite FM station at that particular band. By the way, this is how it has been for the past 27 years: the frequency band FM 88.8 has never changed. I beg to be educated further, but my thinking is that this is a definition of AI at that level. A motion camera triggered by any movement is another example of AI at work.

    A computer that can read Caucasian facial features perfectly but not African ones is an issue. It clearly shows that AI is manipulated by economic, cultural and political policies. Let us make AI that promotes peaceful co-existence and denounce that which is manipulative. I understand that more countries prefer USSD platforms now to ChatGPT or bots, which they are increasingly censoring. I hope in the process they will not be stifling beneficial breakthroughs. Thanks for sharing this topic.

  4. Muhammad Amjad

    accordingly she saying that I won’t suppose to legalized partially equity’s inflation laws go back to recalls humans intelligence and psyche must behave general rules and regulations put on cart and it work with same consignment and order to being cause disease from one gentlemen to distract mind and simplifies magnetism to scolded behind the gender equality discrimination or disorder. While it supposed to do with human scholffield disease and tanquate desire disease to disiminate harbour contacts. Therefore it might discrete miscalculate or misjudge the track record of other following natural or organic phenomena. AI subordinate the causing of corrosive meet and you agree to calculate environment health and safety precautions to reload on cost design and impactful disease. AI should be enchanted and chatbat to regulate and implement desire following regularities and irregularities in following laws showing pros and cons disease thanks.

  5. Muhammad Amjad Noor

    Partial inflation and suffering disease not much more care and calculate AI chatbat.

  6. sophie

    My suggestion is a law that regulates AI and the agreement of the system, because AI doesn’t last long without energy, while human beings and animals (and we are animals) control the computer system and the internet. And to be sustainable and not robotic, helping one another for good.

  7. AHMED Fathy Mohamed ELSAYED

    Artificial Intelligence analysis

    1. The name of the law regulating artificial intelligence should correspond to its main objective.

    Content analysis and the measurement of codes are the basis of regulating artificial intelligence: they identify and reveal whether content was created and generated through human or artificial methods, measure the level of complexity and overlap in content generation, track its source, determine who is responsible for removing it, and support creative solutions for analyzing and measuring the capabilities of artificial intelligence and its outputs.

    2. Illegal threats from artificial intelligence cannot be completely controlled, especially in areas that are not restricted by nature, such as intelligence and wars; regulation or rationing does not work there.

    3. Regulation is to ensure control of AI risks, not limitation of AI.

    4. Protection from the risks of unforeseen competition and manipulation, which pose major challenges and circumvent the rules protecting economic competition.

    5. The deep dangers of artificial intelligence, in which systems and their outputs are misled, constitute a deeper challenge for the capability to analyze and troubleshoot systems data, requiring additional control tools in artificial intelligence systems.

  8. E. Chellappa

    Thiruvalluvar, an ancient Tamil poet, said some two thousand years ago:
    அரம் போலும் கூர்மையரேனும் மரம் போல்வர்
    மக்கட் பண்பில்லாதவர்
    Meaning: Though a man is as sharp as a razor, he is just as good as a piece of wood if he does not have human kindness.
    AI can even replace human intelligence, but every scientific invention should aim at improving the quality of life. However efficient an AI may be, the final word must be with humans. The day when AI thinks more profoundly than humans may become doomsday for the human race. Humans have already done enough damage to themselves with inventions like dynamite and nuclear bombs, and of late by causing climate change. But the ill effects of all these threats can be controlled to some extent by right-thinking humans. When that thinking itself is jeopardized, may God save the human race. Can He?

