AI Threats

AI is rapidly advancing and changing how Vancouver and Lower Mainland businesses operate. While that’s exciting, it’s also alarming when you consider that attackers have the same access to AI tools as you do. If you’re wondering what’s worth worrying about, here are a few ‘monsters’ worth spotlighting, along with practical habits your managed IT services and IT support teams can help enforce as part of your broader cybersecurity program.

Doppelgängers in Your Video Chats — Watch Out for Deepfakes

AI-generated deepfakes have become scarily accurate, and threat actors are using them in social engineering attacks against businesses. In one incident reported by a security vendor, an employee of a cryptocurrency foundation joined a Zoom meeting with several deepfakes of their company’s senior leaders. The deepfakes instructed the employee to download a Zoom extension, supposedly to restore microphone access, paving the way for a North Korean intrusion. For Vancouver businesses, these scams can turn existing verification processes upside down. To identify them, watch for red flags such as:

  • Facial inconsistencies
  • Long silences
  • Strange lighting

Creepy Crawlies in Your Inbox — Stay Wary of Phishing Emails

Phishing emails have been a problem for years, but now that attackers can use AI to write their messages, many of the obvious tells, like bad grammar or spelling errors, are no longer reliable. Threat actors are also integrating AI tools into their phishing kits to translate landing pages and emails into other languages, helping them scale campaigns. However, many of the same security measures still apply to AI-generated phishing. Extra defenses like multifactor authentication (MFA) make it much harder for attackers to get through, since they’re unlikely to also have access to a separate device like your cell phone. Security awareness training is still one of the best ways to reduce employee risk, teaching people to spot other red flags, such as messages that pressure you to act urgently.

Skeleton AI Tools — More Malicious Software Than Substance

Attackers are riding the popularity of AI to trick people into downloading malware. Threat actors frequently tailor lures around popular events or seasonal fads like Black Friday, so it’s no surprise to see malicious ‘AI video generator’ sites or fake, malware-laden AI tools. These ‘tools’ contain just enough legitimate software to look real to an unsuspecting user, but underneath, they’re chock-full of malware. For instance, a TikTok account reportedly posted videos showing viewers how to run a PowerShell command that would supposedly install ‘cracked’ versions of apps like ChatGPT, bypassing licensing and activation requirements. In reality, the account was running a malware distribution campaign, later exposed by researchers. A reliable way to protect your business is to ask your MSP or managed IT services provider to vet any new AI tools you’re interested in before you download them.

Ready to Chase the AI Ghosts Out of Your Business?

AI threats don’t have to keep you up at night. From deepfakes to phishing to malicious ‘AI tools,’ attackers are getting smarter, but the right defenses will keep your business one step ahead. Schedule your free discovery call today and let’s talk through how to protect your team from the scary side of AI… before it becomes a real problem.

Serving Vancouver, Richmond, and the Lower Mainland, our managed IT services, responsive IT support, and practical cybersecurity guidance help local teams work safer every day.