Unlike the crypto industry, the AI sector has not seen much regulatory heat yet. But with AI tools proliferating throughout society at speed, policymakers are left scrambling for regulations.
The world is no stranger to artificial intelligence (AI), and it is one of the biggest trends to latch onto at the moment. Yet despite regulators cracking down on the crypto industry with a flurry of enforcement actions, the AI sector has not seen much heat yet.
However, with the emergence of ChatGPT and the speed at which AI tools are proliferating throughout society, policymakers and regulators around the world are grappling with how to ensure that the technology is used ethically, transparently, and in a way that benefits society as a whole.
AI adoption is growing steadily, but there is no comprehensive federal legislation on AI; instead, a patchwork of frameworks fills the gap.
The report noted that "…governments struggle to match policies with technologies that are developing at an exponential pace, and workers are concerned about what AI means for them…European Union, are attempting to write the first regulations governing AI. All of these issues must be debated and addressed…to create appropriate policies that will provide the pathway for the development and deployment of AI in a responsible and ethical manner."
The whitepaper noted that it is imperative that "we do all we can to create the right environment to harness the benefits of AI…That includes getting regulation right so that innovators can thrive and the risks posed by AI can be addressed." It further reiterates that AI's development and deployment "can also present ethical challenges which do not always have clear answers.”
The framework set out in the whitepaper is underpinned by five principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Below is an illustration of the strategy for regulating AI.
But while the whitepaper outlines the UK government's strategy at length and breaks the next steps down into three phases, it stops short of delivering new legislation, for fear that doing so could stifle growth. It does, however, emphasise that "the framework…is deliberately designed to be flexible" — akin to a light tap on the wrist rather than a punch to the gut.
On the plus side, the whitepaper "recognises the importance of collaboration with international partners" and states that "the government is already working in collaboration with regulators to implement the Paper's principles and framework."
And what do you do if you cannot regulate it? Block it. That is what Italy did.
Just a couple of days ago, the Italian data-protection authority cited privacy concerns relating to ChatGPT, banning it "with immediate effect" while it investigates whether OpenAI complied with the General Data Protection Regulation (GDPR). ChatGPT is already blocked in countries such as China, Iran, North Korea, and Russia.
"There are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people. These AI systems need greater public scrutiny, and public authorities must reassert control over them," warned Ursula Pachl, deputy director general of The European Consumer Organisation (BEUC).
The European Union (EU) is in the midst of drafting the world's first comprehensive legislation on AI, but BEUC's concern is that the AI Act could take years to come into effect.
The AI landscape is filled with boundless potential, and as the technology becomes ever more prevalent in our daily lives, it is crucial that policymakers and regulators around the world continue to work together to ensure it is developed and deployed with transparency, accountability, and ethical considerations. There is still much work to be done as AI continues to evolve and new use cases emerge.