
The dark side of AI: Here's how the tech can be used for scams, fraud


When OpenAI launched the ground-breaking tool ChatGPT in November 2022, it changed the world forever, showing that the artificial intelligence revolution is truly upon us. The technology has since contributed to advances in fields ranging from modern computer chip design to generative media.

Not everybody is excited about AI, though. Some may see it as a two-sided coin that has just as much potential for harm as it does for progress.

Today, almost everyone I know uses AI for one reason or another. If the technology becomes fully integrated into online search engines, queries such as, “What kind of vacuum cleaner should I buy?” or “Plan a trip to Mexico with a budget of $3,000,” could be met with an AI-generated answer. Microsoft’s Bing search engine has been testing this feature with users since February.

These integrated AI systems typically carry a familiar disclaimer: the AI is experimental and could generate inaccurate information.

Despite this, AI has become a useful tool for many. From graphic designers who quickly want to edit designs, to writers looking for creative titles and AI-scheduling programs, it’s hard to deny that AI has a lot of creative and practical uses.

But when used improperly or for the wrong reasons, it can have disastrous consequences.

Just take the example of the two lawyers who used ChatGPT for legal research while preparing a court filing. Both ended up citing past court cases that were completely made up.

Even Sam Altman, the co-founder and CEO of OpenAI, along with other so-called "godfathers of AI," has voiced serious concerns about the technology's future.

However, fake legal cases and inaccurate information are just the tip of the iceberg. Below, I’ll outline a few of the harms associated with AI and how it can be used for scams, fraud, and other malicious activity.

1. VOICE CLONING

Today, AI-driven programs can take samples of almost anybody's voice and recreate it with striking accuracy, generating common phrases while matching the tone and even the accent of the original recordings.

AI voice cloning has already been used in a number of financial crimes.

For example, voice cloning could bypass financial institutions' voice password authentication systems, allowing scammers to access private bank accounts.

In the U.S. state of Arizona, a scammer used AI to call a parent while impersonating their child. The scammer convinced the parent that their child had been kidnapped and was in serious danger before demanding a US$1 million ransom.

2. DEEPFAKES AND IMPERSONATION

“Deepfakes” are computer-generated videos that can be used to impersonate someone and spread misinformation. While deepfakes have circulated on the internet for years, most were unconvincing enough that a close observer could tell a real video from a fake one.

But with AI-generated video and audio cloning, deepfakes have become more realistic than ever.

Some resourceful YouTubers and online course creators are using the technology to help them produce content. However, there could be just as many people using the technology maliciously, creating deepfakes of celebrities and other notable figures that may hurt their reputations.

3. AUTOMATED HACKING

One of the most practical applications of AI is for coding. AI can generate entire programs in a fraction of the time it would take a programmer to do so manually. AI can also run through thousands of lines of code in seconds to identify errors.

But AI can also be used for automated hacking. Hackers can use tools such as ChatGPT, for example, to write malicious code or malware.

What would previously require a team of hackers working day and night can now be accomplished by a single AI model. That’s pretty scary.

4. CHATBOTS AND PRIVACY

Recently, there’s been some concern over the use of chatbots when it comes to accessing private data. Since chatbots such as ChatGPT are still in their “testing” phase, conversations are recorded and used to improve their accuracy and syntax.

Whenever users create an account with OpenAI or use Bing Chat, they agree that their data can be used for development purposes. So users shouldn’t share anything they don’t want to be recorded.

In early April of this year, Canada’s federal privacy commissioner launched an investigation into ChatGPT, stemming from a complaint alleging that the company collects and uses personal information without user consent.

JUST HOW DANGEROUS IS AI?

Much like the early days of the internet, AI comes with a lot of potential dangers. From deepfakes to automated hacking, the risks could be almost as great as the benefits.

When used responsibly, AI can be an invaluable tool. But it can also be used maliciously by those with the worst of intentions.

While some officials around the world are pursuing stricter regulation, such as the European Union’s proposed Artificial Intelligence Act, the reality is that AI is likely here to stay.

Ultimately, it’s up to you to protect yourself from the potential dangers of AI.

Christopher Liew is a CFA Charterholder and former financial advisor. He writes personal finance tips for thousands of daily Canadian readers on his Wealth Awesome website.
