ANALYSIS: Is AI a Threat to Crypto and Blockchain… Or an Opportunity?

From fighting biases to catching scammers, artificial intelligence and decentralized blockchains can help each other overcome their biggest failings.

ChatGPT has amassed 100 million users in just two months, with people asking it to do everything from writing computer code and homework assignments to answering questions they once googled.

And its developer, OpenAI, is about to amass $10 billion from Microsoft, which is hoping that the natural language artificial intelligence can help it achieve its long-held goal of making "Bing" a verb.

Which led Google on Monday to announce it is opening Bard, its own AI chatbot, to a group of "trusted testers" before a wider rollout in the coming weeks.

With AI already having taken the "Next Big Thing" title from crypto, generative AI chatbots like ChatGPT and Bard, which are designed to mimic human intelligence in order to solve intellectual tasks, are being touted as a way to make crypto simpler, safer and more efficient.

They can, it is said, identify fraud and market manipulation by sifting vast amounts of data for patterns and anomalies. And they can use those same capabilities to come up with trading strategies that match a trader's goals and risk tolerance, without requiring the trader to learn about derivatives or how to read candlestick charts. (ChatGPT will not, however, give specific trading advice.)

More broadly, Google CEO Sundar Pichai described Bard as "an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA's James Webb Space Telescope to a nine-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills."

That said, the reality is that OpenAI, Google and other makers of artificial intelligence still have some very big problems to solve, and it's very clear that the blockchain technology behind cryptocurrency and Web3 can help.

Beware of Biases

The most notable problem is that biases like racism, sexism and many others are built into AIs developed through deep learning, which in practice means training models on vast quantities of scraped-up information from which they learn.

As with many other AIs, assembling the 570GB of written data OpenAI needed to make ChatGPT work as well as it does inevitably meant turning to the internet, warts and all, to get a large enough dataset.

Beyond that, because deep learning effectively learns from the past and then adjusts as new information arrives, rooting built-in societal and historical biases out of AIs is difficult.

Several years ago, Amazon famously pulled its recruitment AI after it was found to be discriminating against female candidates. The problem was that the examples of successful resumes it was fed reflected the male-dominated industry's existing lack of diversity. In November, Amazon was revealed to be working on a new one that it believes can overcome such biases.

In another case, an AI used to predict criminal recidivism rates at sentencing was accused of incorrectly labeling black defendants as probable reoffenders at twice the rate it did white defendants. And, a 2016 ProPublica study said, white defendants were wrongly labeled as low risk at roughly the same rate. The developer disputed the findings, and courts upheld the tool's use in sentencing.

Nonetheless, the assumptions and data used in creating this type of algorithm are often secret, the highly respected federal judge Jed S. Rakoff wrote in a 2021 article. Which means those biases are a lot harder to spot, and then only by examining the AI's results after the fact.

Pichai highlighted the concern, saying it was important to be both bold and responsible, and pointed to Google's now five-year-old AI Principles.

These note that "AI algorithms and datasets can reflect, reinforce, or reduce unfair biases." While acknowledging that distinguishing fair from unfair bias is difficult, Google promises to try to "avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief."

Trust and Accuracy

Blockchain won't solve garbage-in, garbage-out problems, but it could make them visible. AIs running on an open, decentralized blockchain could provide accountability and transparency, recording both the data used to create AI algorithms and the results they come up with on an open, immutable ledger.

Blockchain and AI are "the perfect match because each can address the other's weaknesses," Sharon Yang, a product strategist for AI and machine learning, wrote in a blog post. "Blockchain provides the trust, privacy, and accountability to AI, while AI provides the scalability, efficiency, and security." She said:

"To trust AI, we must be able to explain how the AI algorithms work for humans to understand it and have confidence in the accuracy of AI outputs and results … Using blockchain can also enhance data security and integrity by storing and distributing AI with a built-in audit trail in the blockchain. Having this audit trail ensures that the data being used to train models along with the models themselves keep their integrity."

"Blockchain and AI have a symbiotic relationship," said Jerry Cuomo, IBM's CTO for Technology, who ran both its AI automation and blockchain divisions. "They both reinforce each other."

In a video, Cuomo recalled visiting a doctor for a painful knee. After the doctor ran his symptoms and other medications through a pharmaceutical AI, it recommended switching to a newer blood pressure medication rather than something more invasive. He added:

"While I was thrilled, at that point I started to think 'How could I and in fact, why should I trust that AI system, and who trained it? And where did the model come from?'"

Whether he can trust it is the responsibility of the AI developer, which has to build a model that is free of biases and faulty data. Whether he should trust it requires transparency. He said:

"The way I look at it is, blockchain brings trust to data. AI feeds on data. On the other hand, AI brings intelligence to data. Blockchain has ledgers that contain data. With trust and intelligence, you have confidence. With confidence you gain adoption."

While OpenAI has been aggressive about building in "guardrails," it hasn't licked the problem yet, according to critics.

In December, Steven Piantadosi, a UC Berkeley professor of psychology and neuroscience who heads its computation and language lab, tweeted a screenshot of code ChatGPT wrote when asked to "check if somebody would be a good scientist" based on race and gender. The code treated "white" and "male" as the only correct answer.

Fighting Fraud

In October, Mastercard's CipherTrace blockchain intelligence division announced Crypto Secure, an AI tool that provides banks with a risk score, allowing them to identify potentially fraudulent crypto purchases from exchanges and other virtual asset service providers (VASPs).

In a July release, the company said the AI-powered tool, which provides onboarding anti-money-laundering (AML) checks to payment networks, would let them spot wallets with connections to "illicit activities, bad actors, sanctions, or patterns of suspicious activities," as well as "detect transaction fraud in real-time."
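Mastercard hasn't published Crypto Secure's internals, but the general shape of a wallet risk score is easy to illustrate: aggregate weighted flags about a wallet's connections into a single number. A rough sketch only; the factor names and weights below are invented for illustration.

```python
# Hypothetical illustration of combining wallet risk flags into a score.
# Factor names and weights are invented; Crypto Secure's model is not public.
RISK_WEIGHTS = {
    "sanctioned_counterparty": 0.9,
    "darknet_market_links": 0.7,
    "mixer_exposure": 0.6,
    "suspicious_velocity": 0.4,   # e.g., bursts of rapid transfers
}

def wallet_risk_score(flags: set[str]) -> float:
    """Combine independent risk flags into a 0-1 score (noisy-OR style):
    each flag independently raises the probability the wallet is risky."""
    clean_prob = 1.0
    for flag in flags:
        clean_prob *= 1.0 - RISK_WEIGHTS.get(flag, 0.0)
    return 1.0 - clean_prob

print(wallet_risk_score({"mixer_exposure", "suspicious_velocity"}))  # 0.76
```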

As far back as 2018, machine learning AI was being used to spot crypto pump-and-dump schemes with a fairly good degree of accuracy, the MIT Technology Review reported.

While calling it "a path to undermining or preventing the scams," the article noted:

"It is likely to be just one move in the traditional cat-and-mouse game that security experts employ against malicious actors. Presumably, the organizers of these scams will quickly change their activities to make them harder for this kind of machine learning algorithm to spot. And so on."

Making the Metaverse More Real

CharacterGPT, launched on the Polygon blockchain by Alethea AI, allows users to briefly describe a character in natural language and then generates an avatar that can respond and converse as that character within seconds. These can be fanciful ("a grizzled prospector who pans for gold in the West") or practical, such as "a business representative for a sporting goods store."

A mashup of chatbot and generative art AIs like DALL-E, the characters, with their "unique personalities, identities, traits, voices and bodies," are then minted as NFTs that can be used, stored in a digital wallet or traded as collectibles, the company said in a video.
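For readers unfamiliar with the mechanics, minting such a character boils down to calling a mint function on an ERC-721 contract and pointing it at the character's metadata. A minimal sketch using web3.py (v6) against a public Polygon RPC endpoint; the contract address, ABI fragment and safeMint signature are placeholders, not Alethea AI's actual contract.

```python
from web3 import Web3

# Connect to a public Polygon RPC endpoint.
w3 = Web3(Web3.HTTPProvider("https://polygon-rpc.com"))

# Placeholder ERC-721 contract. The address and ABI fragment are
# illustrative only -- this is NOT Alethea AI's actual contract.
NFT_ADDRESS = "0x0000000000000000000000000000000000000000"
NFT_ABI = [{
    "name": "safeMint",
    "type": "function",
    "stateMutability": "nonpayable",
    "inputs": [
        {"name": "to", "type": "address"},
        {"name": "uri", "type": "string"},
    ],
    "outputs": [],
}]
contract = w3.eth.contract(address=NFT_ADDRESS, abi=NFT_ABI)

def mint_character(owner: str, private_key: str, metadata_uri: str) -> str:
    """Mint a character NFT whose metadata (traits, voice, image links)
    lives at metadata_uri, e.g. an IPFS JSON document. Returns the tx hash.
    `owner` must be a checksummed address."""
    tx = contract.functions.safeMint(owner, metadata_uri).build_transaction({
        "from": owner,
        "chainId": 137,  # Polygon mainnet
        "nonce": w3.eth.get_transaction_count(owner),
        "gas": 300_000,
        "maxFeePerGas": w3.to_wei(100, "gwei"),
        "maxPriorityFeePerGas": w3.to_wei(30, "gwei"),
    })
    signed = w3.eth.account.sign_transaction(tx, private_key)
    return w3.eth.send_raw_transaction(signed.rawTransaction).hex()
```

Ownership then lives in whichever wallet holds the token, which is what lets the characters be stored, traded or sold like any other NFT.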

These can serve as "AI companions, digital guides or as NPCs in games," Alethea AI COO Ahmad Matyana said in a blog post. "We are now enabling these users to not just create characters for the purposes of training AI engines, but to own them as AI collectibles in their wallets."

The flip side of these designer avatars is probably the deepfake: AI-generated images and videos that are startlingly, convincingly real. A good example is the @deeptomcruise videos posted by AI firm Metaphysic, an art/awareness project a co-founder started before the firm launched. It blew up so big on TikTok that the company reached out to Cruise's representatives with a preemptive please-don't-sue-us. (He didn't object.)

While they can be entertaining, the ability of deepfakes to make essentially anyone do anything has already seen plenty of misuse, from "Elon Musk" pitching crypto scams to revenge porn. An episode of CBS' Blue Bloods had a deepfake of Tom Selleck's NYPD Commissioner Frank Reagan making a policy statement the real one disagreed with.

It's not hard to see this use of AI cross legal and moral lines very easily.

Noting that average people don't have access to Tom Cruise's legal firepower, Metaphysic CEO Tom Graham recently told CoinDesk that while he recognized the dangers, he wanted to develop the technology in an "ethical, safe and responsible way," and that self-sovereign digital identities built on blockchains would be "a really important step in the process of empowering individuals to be in control of who they are in this emerging hyper-real metaverse."

Good for Bad

Of course, there are also areas in which the combination of AI and crypto has the potential to make things worse.

Just as AIs like ChatGPT can write the code for smart contracts, they can be used to write ransomware and malware, which generally demand payment in bitcoin, although how well they do it seems to be a matter of contention among security experts.

But more seriously, the ability to write well and hold a conversation can make phishing and other social engineering attacks far more effective.

And it's not just your grandparents' savings at risk. The $625 million Ronin network bridge hack didn't happen because of an exploitable flaw in its coding; the passwords and private keys of five of the protocol's nine validators were compromised via social engineering. Which is also how hackers got into actor and producer Seth Green's digital wallet and stole his Bored Ape Yacht Club PFP.

"Security experts have noted that AI-generated phishing emails actually have higher rates of being opened — [for example] tricking possible victims to click on them and thus generate attacks — than manually crafted phishing emails,"  Brian Finch, the co-lead of Pillsbury Law's cybersecurity, data protection & privacy practice, told CNBC recently.

According to one security researcher, participants in email security awareness training have said a main way they detect phishing emails is through language cues (misspellings and grammar errors), because many phishers don't have great English skills.

"If you start looking at ChatGPT and start asking it to write these kinds of emails, it's significantly better at writing phishing lures than real humans are, or at least the humans who are writing them," Chester Wisniewski, principal research scientist at security hardware and software vendor Sophos told TechTarget.

Beware of the Hype

That said, not everyone is impressed.

When asked by Politico last week to identify an overhyped technology, Sheila Warren, CEO of the Crypto Council for Innovation, chose ChatGPT.

While "there is no question that ChatGPT is fun as hell and there's a tremendous amount of potential," its real utility is largely limited to programmers, for whom it is "becoming an invaluable resource," she said.

In other areas, such as sociology and literature, "it remains juvenile," she said, adding:

"It's a little bit like the early crypto days when people were like: 'Blockchain for everything!' Now, it's like: 'You can use ChatGPT for absolutely everything!' That is not true yet. And it may never be true."

The same hype warning applies to the whole AI space, Niko Bonatsos, an investor at venture capital firm General Catalyst, told Bloomberg on Feb. 1. He said:

"A sea of companies are adding 'AI' to their taglines and pitch decks, seeking to bask in the reflected glow of the hype cycle."

Pointing to a recent news release that "touted the value of AI in a campaign to promote shoes," he noted that jumping on the bandwagon is likely to grow because it's working.

BuzzFeed, where he once worked, "saw its stock soar more than 300% last week on the news that it would use artificial intelligence to generate some of its content," he said, adding:

"Last year, a ton of companies that couldn't raise were baptizing themselves as Web3 crypto companies. The same is happening now with AI."