OPINION
I begin, as every strong article should, with a caveat: Technical security controls are critically important. Deploy them all — the SOAR playbooks, the SIEM log pipelines, the EDR agents — and use as many as you have the budget, time, and manpower to run. And, for the love of all that’s secure, don’t stop tuning them.
However, those same technical controls can’t stop a growing category of cyberattacks that are specifically engineered to evade them, abusing legitimate systems and trusted employees to do the attackers’ dirty work.
For these cases, your best (and sometimes only) defense isn’t another dashboard or detection; it’s an employee who knows what they’re looking at and what they can do to stop it.
A new report analyzing last quarter’s human threat landscape found a total of 10 key cyber threats on track to outpace security control deployments.
What struck me about these findings is not just how the attacks worked, but how consistently the most effective countermeasure in each case came back to human behavior — not as a stopgap while better tech gets built, but as a genuinely irreplaceable compensating control.
Below, I’ve pulled out the four trends I found especially relevant: a quick retrospective as security teams begin analyzing the exploited human behaviors and attack patterns of the first quarter of 2026.
BEC: The Social Engineering Attack Controls Can’t Stop
Business email compromise is, statistically, the most efficient attack in the modern threat landscape.
According to the 2025 “Microsoft Digital Defense Report,” BEC attacks represented just 2% of attempted attacks last year, yet accounted for 21% of all successful ones. By comparison, ransomware made up only 16% of successful attacks, despite receiving substantially more attention and security investment.
Why is BEC so effective? Because it’s a pure social engineering attack. There’s no malware to detect, no malicious link to block, no payload to sandbox.
The attacker tricks an authorized employee into intentionally moving money as part of “normal” business processes, or even coaches them into bypassing a technical security control for an “urgent” payment. They’ll pretend to be an impatient internal executive or a well-meaning external vendor, each with legitimate-sounding urgency.
No EDR flags known-good business processes. No email security gateway catches all attempts, since some come in as voice-phishing phone calls outside of the inbox. These technical controls are working exactly as designed: from their perspective, nothing unusual happened.
The human solution here isn’t complicated, but it requires investment to actually work: create, train on, and enforce an out-of-band verification policy for requests like these — and critically, don’t punish employees who pump the brakes on a wire transfer request because it came from the CEO’s email on a Friday afternoon.
That pause is the control working. Treat it that way.
Because when CrowdStrike’s “2026 Global Threat Report” says that 83% of the incidents it tracked were “malware-less”? We’ve got more attacks like this one coming.
Shadow AI: The Data Violation No DLP Tool Sees Coming
Shadow AI — where employees connect unauthorized generative AI tools to work systems — was one of last quarter’s top risk drivers across Fable customer environments. Those findings are supported by a growing body of research quantifying the true risk of uncontrolled, unauthorized AI tools at work.
One survey, for example, found that 51% of employees had connected unauthorized AI tools to work systems, and that almost a third of those employees had uploaded proprietary financial information to those unmonitored tools.
For this cyber-risk, the technical control challenge is structural. Data loss prevention (DLP) tools are trained to evaluate what content is, not whether it’s appropriate to share in a given context. They’re notoriously hard to tune, with an average 47% false positive rate that makes security teams reluctant to act aggressively on alerts.
Meanwhile, the employee uploading a contract summary to an unsanctioned AI tool isn’t doing it maliciously; they’re trying to do their job faster and just don’t know better. The human layer here can accomplish two things technical controls can’t:
- Data sensitivity labeling: Employees who understand what’s sensitive, and can actually classify it, enable the downstream controls to work better.
- Understanding which tools are sanctioned and why: This isn’t a compliance checkbox; it’s the decision point that happens before any DLP alert ever fires.
This risk becomes even more pronounced with agentic AI tools, where autonomous systems act on sensitive data and inherit a previous user’s permissions. In the past several months, we’ve heard plenty of terrifying stories about what happens when people drive AI agents without guardrails. Two of the worst I’ve seen lately: a coding agent (allegedly) causing a 13-hour service interruption in AWS, and threat actors compromising password managers after a browsing AI agent ingested a malicious prompt.
And those are just the horror stories that made it to press. Ask a CISO, SOC lead, or auditor in a conference hallway, and you’ll hear more and worse shadow AI governance issues whispered about. Those whispers will only grow louder until we teach our human employees what they’re giving to their AI agent sidekicks:
- What sensitive data looks like in each individual’s context, rather than tossed over the fence as if Sam in Customer Service should automatically know what “sensitive data” means for him versus for Dave the Developer.
- What access AI agents have, both individually and cumulatively across data sets and applications, and whether that access is read-only or includes edit permissions.
Natural language tools will always have natural language vulnerabilities to some degree, no matter what technical controls promise. Your employees are the only patch that can address both.
MFA Bypass with Voice Phishing: Expensive Tech, Simple Human Fix
In January 2026, the ShinyHunters threat group demonstrated a bypass technique that compromised authentication apps and tokens across more than 100 organizations.
Researchers who published on these attacks before attribution noted, “There is no substitute for enforcing phishing resistance for access to resources,” going on to recommend YubiKeys, an identity and access management solution, or passwordless authentication.
But these solutions are often prohibitively expensive or take time to roll out. ShinyHunters, and anyone who buys that phishing kit off the Dark Web, are tricking people right now.
Thankfully, the human control costs almost nothing to teach and protects everywhere, immediately, even while the technical solution is unavailable or still being deployed. No legitimate IT or security team member will ever ask an employee for a one-time password or authentication code. Full stop. If someone asks (over email, over the phone, over Slack, in a ticket, by singing telegram), that’s the attack. The employee who knows that and refuses is a more reliable control than the authentication layer the attacker just bypassed.
This attack, and the growing number like it, isn’t a case where training can be your fallback checkbox solution. In fact, for a meaningful portion of your user base, training is your primary defense.
The Quantum Distraction and What Attackers Are Actually Doing
Quantum computing gets a lot of airtime in security conversations as an impending threat to encryption. Now, “Q-Day” is definitely a long-term concern. If you’re in the middle of government contract renewals, or otherwise store sensitive data that nation-state-level spies want, you’ll want to invest.
However, quantum decryption is almost certainly not your most pressing problem right now.
You know what is a problem right now? Previously leaked data. Last year, 85% of targeted usernames in data incidents appeared in previous credential leaks. Why would attackers bother breaking encryption to access data when they can just log in as “real” users, using credentials bought off the Dark Web from that password leak three years ago?
Look, quantum-resistant cryptography is a real investment category. But teaching employees to use a password manager correctly — randomized, long, unique, updated after a breach — is a cheaper, faster, and more immediately impactful control for the attacks actually happening at scale right now. Don’t let the futuristic threat crowd out the mundane one.
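The “randomized, long, unique” part of that guidance is easy to make concrete. Below is a minimal Python sketch, my own illustration rather than anything from the reports cited above, using the standard library’s secrets module, which is designed for security-sensitive randomness (unlike the random module):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation.

    Uses secrets, a cryptographically secure random source, rather than
    random, which is predictable and unsafe for credentials.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Require at least one character from each class, the baseline
        # most password policies enforce; retry on the rare miss.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in string.punctuation for c in candidate)):
            return candidate

# Unique per account is the point: generate a fresh one every time,
# and never reuse it across sites.
print(generate_password())
```

In practice, this is exactly what a password manager does on the user’s behalf; the training job is getting employees to let it, and to rotate the credential after a breach notification.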
Ultimately, for Dark Web credential leaks, shadow AI, and other cyber-risks that evade technical controls, your employees can’t stay your organization’s “last line of defense.” They’re the only line that was ever in a position to stop these attacks in the first place.