[Opinion] The AI Revolution: Is AI Innovation Moving too Fast for Regulators?

Unlike the crypto industry, the AI sector has not yet drawn much regulatory heat. But with AI tools proliferating throughout society at speed, policymakers are left scrambling for regulations.


The world is no stranger to artificial intelligence (AI), and it is one of the biggest trends of the moment. Yet while regulators have been cracking down on the crypto industry with a flurry of enforcement actions, the AI sector has not seen much heat yet.

However, with the emergence of ChatGPT and the speed at which AI tools are proliferating throughout society, policymakers and regulators around the world are grappling with how to ensure that the technology is used ethically, transparently, and in a way that benefits society as a whole.

AI adoption is growing steadily, yet there is no comprehensive federal legislation on AI; instead, there is a patchwork of frameworks.

Just last month, the United States (US) Chamber of Commerce called for regulation of AI technology to ensure it does not hurt growth or become a national security risk. The report argued that policymakers and business leaders must promptly ramp up efforts to establish a "risk-based regulatory framework" to ensure AI is deployed responsibly.

The report stated: "…governments struggle to match policies with technologies that are developing at an exponential pace, and workers are concerned about what AI means for them…European Union, are attempting to write the first regulations governing AI. All of these issues must be debated and addressed…to create appropriate policies that will provide the pathway for the development and deployment of AI in a responsible and ethical manner."

Then, on 29 March, the United Kingdom (UK) government published a whitepaper titled "A pro-innovation approach to AI regulation", setting out its blueprint for the future governance and regulation of AI.

The whitepaper noted that it is imperative that "we do all we can to create the right environment to harness the benefits of AI…That includes getting regulation right so that innovators can thrive and the risks posed by AI can be addressed." It further reiterates that AI's development and deployment "can also present ethical challenges which do not always have clear answers.”

The framework set out in the whitepaper is underpinned by five principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Below is an illustration of the UK's strategy for regulating AI.

But while the whitepaper outlines the UK government's strategy at length and breaks the next steps down into three phases, it stops short of delivering new legislation, which the government fears could stifle growth. It does, however, emphasise that "the framework…is deliberately designed to be flexible" — so it is akin to a light tap on the wrist rather than a punch to the gut.

On the plus side, it "recognises the importance of collaboration with international partners" and that "the government is already working in collaboration with regulators to implement the Paper's principles and framework.”

More recently, notable figures such as Elon Musk, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, and others signed an open letter calling "on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
The open letter explained that the "pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium." At the time of writing, the open letter had already garnered over 5,500 signatures.

And what do you do if you cannot regulate it? Block it. That was what Italy did.

Just a couple of days ago, the Italian data-protection authority, citing privacy concerns, banned ChatGPT "with immediate effect" and opened an investigation into whether OpenAI complied with the General Data Protection Regulation (GDPR). ChatGPT is already blocked in countries such as China, Iran, North Korea, and Russia.

“There are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people. These AI systems need greater public scrutiny, and public authorities must reassert control over them," warned Ursula Pachl, deputy director general of The European Consumer Organisation (BEUC).

The European Union (EU) is working on the world's first dedicated legislation on AI, but BEUC's concern is that the AI Act could take years to come into effect.

The AI landscape is filled with boundless potential and possibilities, and as it continues to become more prevalent in our daily lives, it is crucial that policymakers and regulators around the world continue to work together to ensure that the technology is developed and deployed with transparency, accountability, and ethical considerations. There is still much work to be done as AI continues to evolve and new use cases emerge.
