The Dark Side of Artificial Intelligence: Scams and Frauds

The material in this paper was researched, compiled, and written by J.S. Held. It was originally published by the Pennsylvania CPA Journal, a publication of the Pennsylvania Institute of Certified Public Accountants.

Introduction

If you have checked the news lately, you have probably seen or heard about the latest advancements in Artificial Intelligence (AI). Notably, the progress of AI and deepfake technology convinced actor, filmmaker, and studio owner Tyler Perry to indefinitely pause his $800 million movie studio expansion. [1] So what exactly are AI and deepfake technology, and how will they impact everyday people?

AI is technology that enables computers to simulate human intelligence and problem-solving skills, whereas a deepfake is synthetic media that has been digitally manipulated to replace or mimic a person's likeness.

AI has worked its way into nearly every aspect of modern daily life. We encounter it on the websites we visit, where we are greeted by AI-powered chatbots for customer service. However, alongside this seemingly helpful progress there is a darker reality: the potential of AI for exploitation and manipulation through scams and frauds. As AI continues to advance, so too do the tactics used by malicious actors to deceive and defraud unsuspecting individuals and organizations.

The Rise of AI-Powered Scams

AI, with its ability to analyze vast amounts of data and mimic human behavior, has become a powerful tool for scammers. These perpetrators leverage AI tools to orchestrate sophisticated schemes that are often difficult to detect and combat. One prominent example is AI-generated deepfake content, where scammers use machine learning techniques to manipulate audio and video recordings, creating convincing but entirely fabricated media.

Specifically, AI voice cloning is on the rise. According to the computer security company McAfee, approximately 77% of AI voice scam victims lost money. [2] When we answer the phone and hear the voice of a loved one or close friend on the other end, we instantly recognize and trust that voice and the person's speaking habits. However, according to McAfee, voice-cloning tools are capable of replicating how a person speaks with up to 95% accuracy.

We also unknowingly give cybercriminals the data needed to replicate our voice and likeness. By posting on video-based social media platforms such as TikTok and Instagram, users create a ready-made template for fraudsters to clone. According to a McAfee survey, approximately 52% of adults in the United States share their voice online at least once per week. [3] Based on that everyday use of social media, cybercriminals can now target specific individuals by cloning their intended victim's voice.

The rapid rise of AI and the increasingly advanced schemes of cybercriminals prompted the Federal Trade Commission to establish a new division in 2023, the Office of Technology, specifically to keep pace with advancements in AI and preserve the agency's ability to protect consumers. [4] In short, AI is generating enough noise and attention for the federal government to attempt to stay ahead of it.

Manipulating Behavioral Data for Fraudulent Purposes

Deepfake technology and AI have been exploited in various ways, from impersonating public figures to creating fraudulent videos designed to trigger a panic. One example of how damaging and convincing AI has become is a viral deepfake photo that circulated on the internet in 2023 showing the Pentagon in Washington, D.C. on fire after an alleged explosion. Although the image was later determined to have been created by AI, it was convincing enough to cause panic and a temporary dip in the stock market. [5]

Additionally, deepfake technology can be convincing enough to lead an employee of a company to send millions of dollars to cybercriminals. An employee at a multinational firm received an email, seemingly from the company’s Chief Financial Officer (CFO), instructing the employee to send a large sum of money in an unusual transaction. Rightfully so, the request raised red flags, and the employee initially dismissed it as a standard phishing email. However, the employee subsequently joined a video conference call with the “CFO” and “other members of staff” whom the employee recognized by their appearance and the way they sounded. After the call, the employee felt comfortable proceeding with the unusual transaction. In reality, the supposed CFO and staff members on the call were all part of an AI deepfake operation, and the employee was duped into sending approximately $25 million to the cybercriminals. [6]

AI Can Be a CPA’s Power Tool

While cybercriminals are using AI tools for their own gain, those same tools can help accountants and CPAs with their day-to-day tasks. Some of the benefits AI offers CPAs and accountants include bookkeeping automation, client communication, data analysis, and faster tax research. [7] According to a study from the University of Pennsylvania, approximately 24% of top-performing client advisory services practices are using AI. [8] As AI continues to advance, it is reasonable to assume that more accounting and CPA firms will adopt it in their practices.
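
As a simple illustration of the data-analysis benefit mentioned above, the Python sketch below flags payments that fall far outside a ledger's typical range, the kind of screening an AI-assisted bookkeeping tool might automate. The vendor names, amounts, and the three-times-the-median threshold are purely hypothetical, not a description of any particular product.

import statistics

# Hypothetical vendor ledger: (vendor, amount in USD)
transactions = [
    ("Acme Supplies", 1250.00),
    ("Office Depot", 310.75),
    ("Acme Supplies", 1190.00),
    ("Unknown Vendor LLC", 24800.00),
    ("Office Depot", 298.40),
]

median_amount = statistics.median(amount for _, amount in transactions)

# Flag any payment more than three times the median as worth a closer look.
for vendor, amount in transactions:
    if amount > 3 * median_amount:
        print(f"Review: {vendor} payment of ${amount:,.2f} is well above typical activity")

Running this sketch flags only the $24,800 payment to the unfamiliar vendor, leaving routine payments untouched; a practitioner would tune the rule (or let a tool learn it) to the client's actual activity.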

Safeguarding Against AI-Powered Scams

In weighing the benefits of AI in accounting, here are some tips and good practices to consider for protecting ourselves from AI-powered scams: [9, 10, 11]

  • Set your social media pages to private: 
    • Do not accept connection requests from profiles you do not know.
  • Be mindful of what you post: 
    • AI-driven scams use the information we post on social media to mimic their targets.
  • Do not trust your caller ID: 
    • If you get a call from the “bank,” “credit card company,” “Internal Revenue Service,” or other callers, hang up or decline the call. Wait for a voicemail, if any. Then look up the phone number on the back of your credit card, on a bank statement, or from another reputable source and call back directly.
  • Do not be click happy:
    • Do not click on a link in an email or text message without first verifying that it is something you are expecting and that the sender is legitimate (a simple illustration follows this list).
  • Use safe words with family:
    • In the unfortunate event you receive a phone call from someone claiming to be a family member, utilize a safe word. Within your inner circle (e.g., parents, spouse, children), agree on a unique word that can be used to confirm the caller is indeed your relative or loved one.
  • Implement anti-deepfake technology:
    • There is technology that makes it more difficult for cybercriminals to steal your voice. [12]
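
As a simple illustration of the “verify before you click” habit above, the Python sketch below compares a link's actual hostname against a short list of domains the reader already trusts, rather than trusting the text a message displays. The trusted domains shown are placeholders, not recommendations, and real phishing defenses involve much more than this single check.

from urllib.parse import urlparse

# Hypothetical examples of domains the user already knows and trusts.
TRUSTED_DOMAINS = {"irs.gov", "mybank.example"}

def looks_trustworthy(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Accept the trusted domain itself or any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(looks_trustworthy("https://www.irs.gov/refunds"))            # True
print(looks_trustworthy("http://irs.gov.refund-check.example"))    # False: lookalike address

The second example shows why the check matters: a lookalike address can begin with a familiar name while actually pointing somewhere else entirely.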

Conclusion

As technology continues to advance, we must adapt quickly to protect ourselves in a changing environment. AI and the benefits that stem from it can help businesses and individuals enhance their daily routines. However, alongside those benefits there will always be those who abuse the technology. As AI continues to evolve, we must stay at the forefront and avoid becoming victims of these schemes.

Acknowledgments

We would like to thank Jacob R. Hough, CFE, for providing insights and expertise that greatly assisted this research.

Jacob R. Hough, CFE, is a Senior Consultant in J.S. Held’s Economic Damages & Valuations practice. Jake joined J.S. Held in January of 2024 as part of J.S. Held’s acquisition of Forensic Resolutions, Inc. Jake has nearly a decade of experience quantifying economic claims in dispute. He has extensive experience analyzing financial and nonfinancial data, conducting thorough analyses, and assessing damages in personal injury, wrongful death, and commercial damages matters, among other areas. He has comprehensive experience determining economic damages in numerous jurisdictions throughout the United States.

Jacob can be reached at [email protected] or +1 856 433 6433.

This publication is for educational and general information purposes only. It may contain errors and is provided as is. It is not intended as specific advice, legal, or otherwise. Opinions and views are not necessarily those of J.S. Held or its affiliates and it should not be presumed that J.S. Held subscribes to any particular method, interpretation, or analysis merely because it appears in this publication. We disclaim any representation and/or warranty regarding the accuracy, timeliness, quality, or applicability of any of the contents. You should not act, or fail to act, in reliance on this publication and we disclaim all liability in respect to such actions or failure to act. We assume no responsibility for information contained in this publication and disclaim all liability and damages in respect to such information. This publication is not a substitute for competent legal advice. The content herein may be updated or otherwise modified without notice.
