
YOU WILL Get Suckered By An AI-enabled DeepFake

By Robert Siciliano, Services for Real Estate Pros with IDTheftSecurity.com Inc

Why? Because human beings trust by default. Without trust, we wouldn’t survive as a species. From the day we come out of our mama, we simply want and need to trust others in order to survive. And when you mix media in any form, media we have grown to trust over a lifetime, and then weave in financial fraud, it is inevitable that someone, somewhere will fall for the AI ruse. Artificial intelligence based “social engineering” scams are quickly becoming the purest, most effective form of psychological manipulation.

Deepfake artificial intelligence scams are the digital equivalent of a sociopathic – psychopathic – narcissistic – gaslighting – violent predator.

P.T. Barnum Was Wrong

It is said, “There’s a sucker born every minute.” I’m pretty sure there are roughly 250 people born every minute. And by my calculations, every single one of them is a sucker. You and me included. What does this mean? It means all of us are capable of being deceived. And I’ll bet all of us have been deceived, or “suckered.” That’s simply a hazard of “trusting by default.”

Just take a look at the evolution of the simple “phishing email scam.” Over the past 20 years, this ruse has evolved from a blanket, broadcast “scammer grammar” communication to an advanced persistent threat that targets specific individuals by understanding and leveraging all aspects of their personal and professional lives.

In this era of rapid technological progress and AI integration, staying informed about the latest scams is imperative for everyone. The preceding year witnessed a tumultuous landscape in cybersecurity, marked by major corporations falling victim to malware attacks, ransomware, and a proliferation of opportunities for cybercriminals driven by advancements in AI. Regrettably, the forecast indicates a further escalation in the sophistication and prevalence of cyber threats and scams, making it essential for individuals to remain vigilant and proactive in safeguarding their digital assets.

Consider Deepfake AI “Havoc Wreaked”

The rapid proliferation of deepfake websites and apps is wreaking havoc, unleashing a wave of financial and personal fraud that threatens individuals and businesses alike.

The proliferation of deepfakes represents a troubling trend, fueled by the accessibility and sophistication of AI technology. Even the average technology user now possesses tools capable of impersonating individuals, given sufficient videos or images. Consequently, we must anticipate a surge in the use of both video and audio deepfakes within cyber scams. It’s already happening. Scammers exploit deepfake video and/or audio to pose as superiors, soliciting urgent information.

Similarly, in personal spheres, these manipulative tactics may involve impersonating family members or friends to deceive individuals into divulging sensitive information or draining a bank account to pay a kidnapping ransom. As ridiculous as that sounds, if you heard your daughter’s voice screaming in the background of a distant cell phone call, you’d likely cough up the cash if you thought your loved one was being held captive.

The rise of AI-enabled deep fakes presents a formidable challenge in combating financial fraud, as it provides cybercriminals with unprecedented capabilities. With the aid of AI, cybercrime syndicates can swiftly update and enhance traditional wire transfer fraud tactics, alongside sophisticated impersonation schemes. This rapid evolution jeopardizes the reliability of verification and authorization processes within the financial sector, thereby undermining trust and confidence in financial systems at large.

This Is Just the Beginning

CNN reports that a finance worker was duped into a $25 million payout following a video call with a deepfake “chief financial officer.”

In a sophisticated scheme detailed by Hong Kong police, a finance worker from a multinational corporation fell prey to deepfake technology, resulting in a staggering $25 million payout to impostors posing as the company’s chief financial officer.

The elaborate ruse unfolded during a video conference call, where the unsuspecting employee found himself surrounded by what appeared to be familiar faces, only to discover they were all expertly crafted deepfake replicas. Despite initial doubts sparked by a suspicious email, the worker’s misgivings were momentarily quelled by the convincing likeness of his supposed colleagues.

This incident underscores the alarming effectiveness of deepfake technology in perpetrating financial fraud on an unprecedented scale. Believing that all participants on the call were genuine, the worker consented to transferring HK$200 million, equivalent to about US$25.6 million. This incident is emblematic of a series of recent occurrences in which perpetrators used deepfake technology to manipulate publicly available videos and other materials to defraud individuals.

Additionally, police noted that AI-generated deepfakes were utilized on numerous occasions to deceive facial recognition systems by mimicking the individuals. The fraudulent scheme involving the fabricated CFO was only uncovered after the employee reached out to the corporation’s head office for verification.

America’s Sweetheart Was Sexually Violated by AI

Authorities globally are sounding alarms over the advancements in deepfake technology and its potential for malicious exploitation. In a recent incident, AI-crafted pornographic images featuring the renowned American artist Taylor Swift flooded various social media platforms, highlighting just one of the perilous ramifications of artificial intelligence. These explicit images, depicting the singer in sexually provocative poses, garnered tens of millions of views before swift removal from online platforms.

Swift, a seasoned celebrity, undoubtedly feels a sense of violation. Put a quiet 16-year-old high schooler in similar circumstances, and he or she may implode under the pressure. These technologies have real-life, and even life-and-death, consequences.

The deepfake market delves into the depths of the dark web, serving as a favored resource for cybercriminals seeking to procure synchronized deepfake videos with audio for a range of illicit purposes, including cryptocurrency scams, disinformation campaigns, and social engineering attacks aimed at financial theft. Within dark web forums, individuals actively seek deepfake software or services, highlighting the high demand for developers proficient in AI and deepfake technologies, who often cater to these requests.

Don’t Expect Your Government to Fix This Problem

While the creation of deepfake software itself remains legal, the use of someone’s likeness and voice operates in a legal gray area due to the abundance of publicly available information. Although defamation suits against developers or users of deepfake content are plausible, locating them poses challenges similar to those encountered in identifying cybercriminals orchestrating other types of attacks. Legal frameworks surrounding deepfake apps vary by jurisdiction and intent, though creating or disseminating deepfake content intended for harm, fraud, or privacy violation is broadly illegal.

Although not as prevalent as ransomware or data breaches, instances of deepfake incidents are on the rise, constituting a multi-billion dollar enterprise for cybercriminals.

Last year, McAfee reported a significant increase in deepfake audio attacks, with 77% of victims suffering financial losses. As cybercriminals refine their deepfake techniques, organizations must enhance user education and awareness, incorporating training programs that emphasize the risks associated with deepfake technology and the importance of verifying information through multiple channels.

Efforts to develop advanced AI-based detection tools capable of identifying deepfakes in real time are ongoing, though their efficacy remains a work in progress, particularly against more sophisticated deepfake creations. However, criminals using AI for fraud are always two steps ahead, and awareness training is often two steps behind due to lack of implementation.

Protect Yourself and Your Organization:

When encountering a video or audio request, it’s essential to consider the tone of the message. Does the language and phrasing align with what you’d expect from your boss or family member? Before taking any action, take a moment to pause and reflect. Reach out to the purported sender through a different platform, ideally in person if possible, to verify the authenticity of the request. This simple precaution can help safeguard against potential deception facilitated by deepfake technology, ensuring you don’t fall victim to impersonation scams.

1. Stay Informed: Keep abreast of the latest developments in AI technology and its potential applications in scams. Regularly educate yourself about common AI-related scams and tactics employed by cybercriminals.

2. Verify Sources: Be skeptical of unsolicited messages, especially those requesting sensitive information or financial transactions. Verify the identity of the sender through multiple channels before taking any action.

3. Use Trusted Platforms: Conduct transactions and communicate only through reputable and secure platforms. Avoid engaging with unknown or unverified sources, particularly in online marketplaces or social media platforms.

4. Enable Security Features: Utilize security features such as multi-factor authentication whenever possible to add an extra layer of protection to your accounts and sensitive data. Implementing multi-factor authentication within a secure portal environment for sensitive actions, such as financial transactions or the release of confidential information, serves as a crucial defense against fraudulent requests facilitated by deepfake technology (see the sketch after this list).

5. Update Software: Keep your devices and software applications up to date with the latest security patches and updates. Regularly check for software updates to mitigate vulnerabilities exploited by AI-related scams.

6. Scrutinize Requests: Scrutinize requests for personal or financial information, especially if they seem unusual or come from unexpected sources. Cybercriminals may use AI-generated content to create convincing phishing emails or messages.

7. Educate Others: Share knowledge and awareness about AI-related scams with friends, family, and colleagues. Encourage them to adopt safe online practices and be vigilant against potential threats.

8. Verify Identities: Before sharing sensitive information or completing transactions, verify the identity of the recipient using trusted contact methods. Beware of AI-generated deepfake videos or audio impersonating trusted individuals.

9. Be Wary of Unrealistic Offers: Exercise caution when encountering offers or deals that seem too good to be true. AI-powered scams may promise unrealistic returns or benefits to lure victims into fraudulent schemes.

10. Report Suspicious Activity: If you encounter suspicious AI-related activity or believe you have been targeted by a scam, report it to relevant authorities or platforms. Prompt reporting can help prevent further exploitation and protect others from falling victim to similar scams.
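
To make tip #4 a little more concrete, here is a minimal illustrative sketch in Python, assuming the third-party pyotp library; the portal details, function name, and secret handling are hypothetical stand-ins, not any specific vendor’s API.

```python
# Minimal sketch, assuming the third-party "pyotp" library (pip install pyotp).
# It illustrates tip #4: a sensitive action is only approved when the requester
# also supplies a time-based one-time code from an authenticator app.
import pyotp

# In a real portal this secret would be provisioned once per user and stored
# server-side; generating it inline here just keeps the example self-contained.
USER_TOTP_SECRET = pyotp.random_base32()

def approve_wire_transfer(amount_usd: float, supplied_code: str) -> bool:
    """Approve a transfer only if the one-time code verifies (hypothetical helper)."""
    totp = pyotp.TOTP(USER_TOTP_SECRET)
    # valid_window=1 tolerates one 30-second step of clock drift.
    if not totp.verify(supplied_code, valid_window=1):
        print("MFA code invalid: transfer blocked pending out-of-band verification.")
        return False
    print(f"MFA verified: transfer of ${amount_usd:,.2f} approved.")
    return True

# A deepfaked "CFO" on a video call cannot read the code off the real
# employee's device, so a guessed code fails and the transfer is blocked.
approve_wire_transfer(25_600_000.00, supplied_code="123456")
```

Even a simple check like this forces a fraudster to compromise a second, independent channel; the convincing voice or face on the call cannot produce the code generated on the real person’s device.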

None of the above, by itself, will solve this problem. And I cannot stress this enough: organizations and their staff must engage in consistent and ongoing security awareness training, now more than ever.

And that does not mean simply deploying phishing simulation training by itself. While phishing simulation training is necessary for “check the box” compliance, it only addresses one aspect of fraud prevention and social engineering. Phish sim doesn’t come close to solving the problem of artificially intelligent psychological manipulation.

Robert Siciliano CSP, CSI, CITRMS is a security expert and private investigator with 30+ years’ experience, a #1 best-selling Amazon.com author of 5 books, and the architect of the CSI Protection certification, a Cyber Social Identity and Personal Protection security awareness training program. He is a frequent speaker and media commentator, CEO of Safr.Me, and Head Trainer at ProtectNowLLC.com.

Comments (26)

Paddy Deighan MBA JD PhD
http://www.medicalandspaconsulting.com - Vail, CO
Paddy Deighan J.D. Ph.D
This is so true… it is already evident. So many believe memes and posts that are clearly AI-generated fakes. My saying in business: “Trust everyone yet trust no one.”
Feb 15, 2024 02:25 AM
Peter Mohylsky,
Property Management International-Destin - Inlet Beach, FL
Call me at 850-517-7098

Good article, hold on for the ride because it will get interesting here going forward. 

Feb 15, 2024 03:36 AM
Patricia Feager
Flower Mound, TX
Inactive/Semi Retired Real Estate Agent

Robert Siciliano - Thank you for this excellent article. AI is a threat to all; yet few will listen. To over trust can easily leave you in the dust. 

Feb 15, 2024 03:37 AM
Laura Cerrano
Feng Shui Manhattan Long Island - Locust Valley, NY
Certified Feng Shui Expert, Speaker & Researcher

My friend says that if you don’t say hello, how do you say something else they give themselves right away.

Feb 15, 2024 04:56 AM
Dr. Paula McDonald
Beam & Branch Realty - Granbury, TX
Granbury, TX 936-203-0279

All of this is gravely concerning. It is going to come down to being able to verify information thoroughly. 

Feb 15, 2024 07:24 AM
Margaret Goss
@Properties - Winnetka, IL
Chicago's North Shore & Winnetka Real Estate

Very scary! Verifying someone's identity right as you think you're being scammed is a good idea. And I'm definitely passing your post on to family members. 

Feb 15, 2024 07:51 AM
Dorie Dillard Austin TX
Coldwell Banker Realty ~ 512.750.6899 - Austin, TX
NW Austin ~ Canyon Creek and Spicewood/Balcones

Good afternoon Robert,

You always point out what we need to know and this is scary stuff! It’s always been said you need to verify information… over-trusting is a human fault and we must be diligent about securing our emails and sites.

Feb 15, 2024 11:59 AM
John Juarez
The Medford Real Estate Team - Fremont, CA
ePRO, SRES, GRI, PMN

I am not sure if I will be able to sleep tonight after reading your post, Robert.

I have received numerous offers from unknown beneficiaries who wish to share millions of dollars with me if I will help them get it out of a bank account overseas. I have gotten texts from my broker asking me to run out and buy gift cards on his behalf since he is in a meeting and unable to do so. I receive daily notifications that I have won something or other and only need to claim my prize. Those simplistic schemes are easy to see through. More refined and expertly crafted attempts to hoodwink me are surely on the way.

Feb 16, 2024 07:01 AM
Wayne Martin
Wayne M Martin - Oswego, IL
Real Estate Broker - Retired

Good morning Robert. You are 100% correct, the government will not fix the problem given they are the problem. Enjoy your day.

Feb 17, 2024 04:53 AM
Kat Palmiotti
eXp Commercial, Referral Divison - Kalispell, MT
Helping your Montana dreams take root

Thank you for this information. The video call with deepfake replicas of people that were known is crazy. Yikes.

Feb 17, 2024 05:48 AM
Kathy Streib
Cypress, TX
Retired Home Stager/Redesign

Feb 17, 2024 06:28 PM
Lise Howe
Keller Williams Capital Properties - Washington, DC
Assoc. Broker in DC, MD, VA and attorney in DC

This is so frightening - and I am glad that Kathy featured it - Thanks to you both. 

Feb 18, 2024 05:27 AM
Michael J. Perry
Fathom Realty - Lancaster, PA
Lancaster, PA Relo Specialist

AI will ultimately be used for more harm than good. As our youth becomes addicted to what appears to be a toy.

Feb 18, 2024 07:09 AM
Joyce M. Marsh
Luxury Home Couture - Daytona Beach, FL
Joyce Marsh Homes & Design

More than ever we need to be on guard in every facet of our lives. Thanks for a very informative post.

Feb 18, 2024 07:50 AM
Leanne Smith
Dirt Road Real Estate - Golden Valley, AZ
The Grit and Gratitude Agent

So true about government and some believing they can "fix" the nefarious behaviors of others. Law abiding citizens follow the law.

Feb 18, 2024 11:00 AM
Laura Cerrano
Feng Shui Manhattan Long Island - Locust Valley, NY
Certified Feng Shui Expert, Speaker & Researcher

I agree that this happens so often with the robocalls and the programmed everything. It gets a little crazy.

Feb 18, 2024 05:27 PM
Matthew Sturkie, CRS, GRI 909-969-3805
Action Realty - Apple Valley, CA
CRS, GRI 909-969-3805

Thanks for the great post. The internet has made me so untrusting and borderline paranoid that even legitimate people are paying the price.  I don't trust anyone over the phone, email, or text. 

Feb 19, 2024 10:18 AM
John Henry, Florida Architect
John Henry Masterworks Design International, Inc. - Orlando, FL
Residential Architect, Luxury Custom Home Design

Excellent article with ample warning signs.  Of course all of this was possible without AI, simply faking images in photoshop and lying over the phone, etc.  But AI is in the hands of miscreants world wide and very serious reactions to fake images, comments, and announcements may end in dire situations.  Thanks

Feb 20, 2024 10:50 AM
Pat Starnes-Front Gate Realty
Front Gate Real Estate - Brandon, MS
601-991-2900 Office; 601-278-4513 Cell

Thank you. We have to remain sharp and aware of enticing offers that sound too good to be true.

Feb 20, 2024 11:06 AM
Debra Leisek
Bay Realty,Inc Homer Alaska - Homer, AK

They say the road to hell is paved with good intentions.... looks like we are on a 12 lane AI Highway.....

Feb 22, 2024 12:31 AM