When AI makers chant 'Regulate Us', who sets boundaries?



While AI makers clamour for regulation and governments grapple with establishing frameworks, the quest for harmony in AI regulation is proving to be more complex and less linear than expected.

In a twist of fate, the US Senate recently witnessed an extraordinary sight: a cohort of representatives from large corporations and the private sector voiced an unprecedented plea: 'Regulate us.' It is a rare spectacle when business leaders, typically averse to government interference, implore authorities to impose stringent rules on their own creations. This convergence of interests signals a profound shift: business leaders, policymakers, and the architects of Artificial Intelligence now recognise the imperative of responsible governance amid rapid advances in AI technology.

The sheer pace of AI advances has prompted calls for an international body, akin to the frameworks governing nuclear power. Leaders like Sam Altman of OpenAI and Sundar Pichai of Google acknowledge the transformative potential of AI while emphasising the need for government intervention to mitigate its risks. 'If AI goes wrong, it can go quite wrong,' Altman warned the Senate.

As concerns about the potential risks to humanity crescendo, several pressing questions arise. Who should be entrusted with defining the boundaries? Is it practically feasible to establish a centralised governance and accountability mechanism that creates guardrails for businesses transcending borders? Developing such a mechanism is a formidable challenge, given the complexity of harmonising evolving use cases, managing the risks posed by potent AI models, and navigating the maze of regulatory frameworks and technical standards.

This recognition marks a pivotal moment, particularly with technology behemoths such as Microsoft, Google, OpenAI, and IBM vying for AI dominance. It suggests that these key players are either poised for unprecedented success or more vulnerable than they have been in recent years.

Governments around the world are currently debating the merits of regulating, or even outright banning, particular applications of AI technologies—a subject that hardly even made lawmakers' radar screens a year ago.

While the Biden administration and congressional leaders in the US work to implement a blueprint for an AI Bill of Rights, European Parliament members have agreed to stricter regulations on generative AI tools like ChatGPT and passed EU artificial intelligence legislation. The law proposes mandatory reviews before the commercial release of generative AI systems, including ChatGPT, and aims to ban real-time facial recognition.

French President Emmanuel Macron anticipates the beginnings of global AI regulation by year-end and expresses interest in collaborating with the US on rule development, suggesting platforms like the Group of Seven and OECD as ideal arenas for establishing comprehensive global regulations.

Regulators in China are also acting swiftly, both to incentivise the development of domestically produced AI products and to establish guidelines.

The recent G-7 summit in Japan echoed the call for a forward-thinking, risk-based approach to policy making and regulation, culminating in an agreement to establish an interoperable governance framework.

While partnerships with international organisations such as the OECD and UNESCO and multi-stakeholder initiatives such as GPAI hold great appeal, their adoption faces scepticism.

In an important move, New York City has become a pioneer in AI regulation. Last month, the city adopted specific rules on the use of artificial intelligence in hiring and promotion decisions, following a law passed in 2021. The law mandates that companies using AI software in their recruitment processes inform candidates about its usage and undergo independent audits to address potential bias.

The Canadian Parliament is currently debating the AI and Data Act, a legislative proposal designed to harmonise AI laws across provinces, with a focus on risk mitigation and transparency. Meanwhile, several other countries are at various stages of developing their own regulatory approaches to governing AI.

Last week, UN Secretary-General Antonio Guterres lent support to a proposal by select AI executives advocating for the establishment of an international AI oversight body inspired by the model of the International Atomic Energy Agency (IAEA).

Amidst proactive AI regulation efforts, the need for collaboration among stakeholders across countries seems to be equally critical to reining in the perils of large language models, considering the potential complications that isolated regulatory measures may pose for businesses utilising such AI models.

During a recent event in Delhi as part of his world tour, Altman — while responding to a government official's query about his proactive advocacy for regulation — expressed his belief that "the world can find common ground on matters of significant import," and dismissed the notion that comprehensively auditing each node, network, and server for AI regulation is an insurmountable complexity.

Srivatsa Krishna, an IAS officer, drew attention to governments' tendency to regulate to the lowest common denominator, lacking the capacity to grasp the technology's intricate nuances.

Altman acknowledged the challenges but underscored: "Should governments fail to act in unison, we will strive to get companies to cooperate, although we cannot exert absolute control." 

Krishna emphasised that governments are typically ill-equipped to handle such subtleties and remarked that various global entities, including the IAEA, have grown obsolete, with OpenAI envisioning a future model of AI regulation in their stead.

The clamour for AI regulation by industry leaders and the ongoing efforts of governments to establish frameworks reflect a growing recognition of the need for responsible governance in the face of rapid AI advancements. 

Finding answers to these questions requires collective effort, open dialogue, and a commitment to balancing the potential benefits of AI with the need for safeguards. It is critical to set boundaries that protect society while fostering innovation. Much work remains.

Meanwhile, one thing that businesses can do to prepare to seize new opportunities is to focus on equipping their workforce with the skills necessary for navigating such complexities.
