As the crypto community celebrates AI's potential, should it consider the threat it poses to portfolios and the entire decentralized ecosystem?
Ever since the groundbreaking launch of ChatGPT, AI has been all the rage. Owing to this hype, GPU and AI chip producer Nvidia was one of the best-performing stocks in Q1 2024.
In crypto, meanwhile, AI tokens have been outperforming Bitcoin, with some names – like Akash Network (AKT) and Bittensor (TAO) – delivering returns many times the growth of BTC over the past year.
Undoubtedly, the advent of AI and large language models (LLMs) has been nothing short of revolutionary. They’ve already transformed nearly every industry, from healthcare to finance, injecting efficiency into processes and speeding up tedious manual tasks.
The Emerging Risks of AI
AI brings a raft of advantages to blockchain – there are clear synergies here. But behind this shiny veneer, dangers are lurking. Already prone to hacks and exploits, the web3 ecosystem faces additional vulnerabilities when artificial intelligence enters the picture.
This is primarily because AI adds a layer of sophistication to existing attack vectors, acting as an additional tool in the would-be fraudster’s toolbox. That makes it another bullet to dodge for investors navigating this exciting but risk-fraught space.
For example, machine learning algorithms can be used to analyze investor behavior patterns or to identify vulnerabilities in code and security systems. AI-driven automation can then execute a lightning-fast, sophisticated attack. Warnings about the growing threat of cyberattacks fuelled by machine learning abound.
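To make the pattern-analysis idea concrete, here is a minimal sketch (not taken from any real incident) of how behavioral data can be profiled automatically. It uses scikit-learn's IsolationForest on hypothetical per-wallet features; the same basic technique can serve an attacker profiling targets or a defender flagging unusual activity.

```python
# Minimal sketch of behavioural pattern analysis. The feature names and data
# are hypothetical; the point is only to show how easily activity profiles can
# be modelled and outliers surfaced automatically.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-wallet features: [transactions per day, average value in ETH]
normal_activity = rng.normal(loc=[5.0, 0.3], scale=[1.5, 0.1], size=(500, 2))
unusual_activity = np.array([[45.0, 8.0], [60.0, 12.5]])  # bursts of large transfers

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# predict() returns -1 for points the model considers anomalous
flags = model.predict(np.vstack([normal_activity[:3], unusual_activity]))
print(flags)  # e.g. [ 1  1  1 -1 -1]
```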
ChatGPT and the Flaws of AI-Generated Code
The other concern is the increasingly prevalent use of LLMs like ChatGPT to write the underlying blockchain code. This obviously saves huge amounts of time between the idea stage and bringing a protocol to market, but is the resulting code as secure against potential attacks as code written and audited by humans?
Chief among the flaws of AI-generated code is the model’s apparent inability to pick up logical bugs the way a human reviewer can. Jailbreak attacks are a separate but prominent threat to large language models: they exploit loopholes in a model’s programming to produce responses that bypass its ethical safeguards. In crypto, this could allow attackers to trick an AI into generating malicious smart contracts or into helping them gain unlawful access to encryption keys, for example.
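As an illustration of the first problem, consider the following toy Python model of contract logic (hypothetical and deliberately simplified, not a real smart contract). The code runs and reads plausibly, but the refund condition is inverted: exactly the kind of intent-level bug a human auditor reasoning about the protocol is more likely to catch than a model pattern-matching on style.

```python
# Toy model of escrow logic with a subtle logical bug.
import time


class Escrow:
    def __init__(self, depositor: str, amount: float, deadline: float):
        self.depositor = depositor
        self.amount = amount
        self.deadline = deadline
        self.released = False

    def refund(self, caller: str) -> float:
        # Intended rule: the depositor may reclaim funds only AFTER the deadline.
        # Bug: the comparison is inverted, so refunds are allowed only BEFORE it.
        if caller == self.depositor and time.time() < self.deadline and not self.released:
            self.released = True
            return self.amount
        raise PermissionError("refund conditions not met")
```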
Already, we’re seeing attackers turning to AI for their exploits. Microsoft recently warned that North Korean cyber actors have started using AI to make their operations more efficient, for example, while finance firms have been urged to bolster their security in preparation for AI-driven attacks.
And then, of course, AI is getting increasingly good at impersonating humans. It’s learning to pass humanity checks and to fool people into believing they’re interacting with a real person.
Recently, Warren Buffett was so disturbed by a deepfake that looked, moved, talked and dressed exactly like him that he spent a large chunk of Berkshire Hathaway's annual general meeting lamenting the dangers of AI technology, comparing it to nuclear weapons. This could become increasingly problematic in the web3 world, which is already full of dubious social media interactions and fake Discord groups.
Protecting Against Emerging Threats
To protect against these increasingly sophisticated attacks, the blockchain industry, just like the rest of the world, should start with rigorous security audits – something every DeFi protocol should be conducting regularly anyway.
Today, these audits must include a specific focus on AI systems, while more sophisticated encryption methods can help protect smart contracts from AI-powered cyberattacks. Redundancy and backup controls also play an increasingly important role in ensuring there is always the option to manually shut down AI systems if suspicious activity is detected.
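One way to picture the redundancy and manual-shutdown controls mentioned above is a simple circuit breaker wrapped around any automated agent. The sketch below is a hypothetical Python illustration with made-up names and thresholds, not any real protocol’s API: out-of-bounds activity, or an operator flipping the kill switch, halts automated actions until a human reviews them.

```python
# Hypothetical circuit-breaker guard for an automated agent.
from dataclasses import dataclass


@dataclass
class CircuitBreaker:
    max_tx_value: float = 10.0      # largest transfer the agent may sign unattended
    max_tx_per_minute: int = 20     # rate limit on automated actions
    halted: bool = False            # operator-controlled kill switch

    def manual_halt(self) -> None:
        """Operator (or monitoring system) stops all automated activity."""
        self.halted = True

    def allow(self, tx_value: float, recent_tx_count: int) -> bool:
        """Return True only if the automated action is within safe bounds."""
        if self.halted:
            return False
        if tx_value > self.max_tx_value or recent_tx_count > self.max_tx_per_minute:
            # Anything outside normal bounds trips the breaker until a human reviews it.
            self.halted = True
            return False
        return True


breaker = CircuitBreaker()
print(breaker.allow(tx_value=2.5, recent_tx_count=3))    # True: within limits
print(breaker.allow(tx_value=500.0, recent_tx_count=3))  # False: trips the breaker
print(breaker.allow(tx_value=1.0, recent_tx_count=1))    # False: stays halted until reset
```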
Regulation for a Digital World
While systems are important, regulators also have a key role to play in ensuring that legislative frameworks are fit for purpose in a world where artificial intelligence is gaining prominence. The good news is that we’re already seeing discussions across the globe about how to approach this novel technology. From India to the UK, governments are concerned about potential risks and looking for solutions.
But regulators must move faster because AI is developing at lightning speed. Policy, meanwhile, appears to be lagging behind. The EU AI Act, for example, took more than three years to move from proposal to adoption. Governments and regulators can’t afford to take their time when it comes to new technologies like AI or blockchain – we need legislative frameworks now to protect users.
AI is perhaps the most exciting technological development we have seen in decades, and it has turned every industry on its head, including blockchain. However, as we integrate ChatGPT into our daily lives or get excited about a new AI token, it’s important to be mindful of the potential risks. Focusing on solutions that prevent these risks from materializing is the only way to avoid potentially devastating consequences.