I used to think I had my crypto security figured out. Hardware wallet, strong passwords, never click strange links. The basics. Felt pretty good about it.
Then I read through the hack data for the first quarter of 2026 and found that my setup – and probably yours too – had some serious blind spots.
Here’s what changed my mind: $400 million was stolen from crypto users in the first three months of 2026. Not from questionable DeFi forks. Not from rug pulls. From people who thought they were being careful. One person lost $282 million from a hardware wallet through a phishing call. Another protocol – Drift on Solana – was drained for $285 million on April 1 after passing two separate security audits.
And on April 5, Ledger CTO Charles Guillemet told CoinDesk that AI tools make every type of attack cheaper, faster, and harder to detect.
So yes. It’s time to rethink what “good security” actually means.
The problem is no longer the code, but us
Let me share some numbers from the Chainalysis 2026 Crypto Crime Report that really caught my attention.
In 2025, crypto theft totaled over $3.4 billion. That’s a lot. But it’s the breakdown that matters:
- Impersonation scams increased by 1,400% compared to last year
- Scams using AI tools made 450% more money than old-fashioned ones
- 158,000 individual wallets were compromised – belonging to roughly 80,000 real people who lost a total of $713 million
- The three largest hacks alone were responsible for 69% of all losses
The pattern is clear. Attackers have stopped trying to break the code. They’ve started breaking people.
Security researcher Juan Amador summed it up in a CoinDesk piece from January: “As the code becomes less and less exploitable, people will be the main attack surface in 2026.”
He also said something that really worried me: over 90% of crypto projects still have serious vulnerabilities and less than 1% use any type of on-chain firewall.
Less. Than. One. Percent.
What AI has actually changed
I keep seeing people talk about “AI threats” in vague, fuzzy terms. So let me elaborate on what has actually changed.
Before AI tools were ubiquitous, carrying out a crypto scam required real effort. You had to write phishing emails manually (and they usually read like garbage). You had to find smart contract bugs by reading the code yourself. Building convincing malware took time and skill.
AI has overcome all these barriers.
Guillemet from Ledger explained it clearly. He said developers are now producing AI-generated code that has security flaws from the start. His exact words: “There is no ‘Make it secure’ button.” He also described malware that silently scans your phone for seed phrases – no pop-ups, no warnings, nothing. You won’t notice until your balance goes to zero.
And here’s the kicker: traditional security checks can’t keep up. Both Trail of Bits and ClawSecure audited the Drift protocol before the hack. Both signed off. The attacker still stole $285 million by manipulating price oracles with a fake token and a compromised admin key.
Guillemet’s advice? Stop relying solely on audits. Formal verification – the use of mathematical proofs to validate code – is the only approach that does not rely on a human discovering the right bug on the right day.
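Formal verification proves a property holds for *all* possible inputs. You don’t need a theorem prover to move in that direction, though: a lightweight first step is writing your invariants down as executable checks and hammering them with random inputs. Here is a minimal sketch, assuming a hypothetical constant-product pool – the function names and fee model are illustrative, not any real protocol’s code:

```javascript
// Hypothetical constant-product pool: swap with a 0.3% fee kept in the pool.
function swap(pool, amountIn) {
  const { x, y } = pool;
  const amountInWithFee = amountIn * 997n / 1000n;
  const amountOut = (y * amountInWithFee) / (x + amountInWithFee);
  return { x: x + amountIn, y: y - amountOut };
}

// Invariant: k = x * y must never decrease on a swap (fees only grow it).
// Property-based check: try many random pools and trade sizes.
function checkInvariant(trials) {
  for (let i = 0; i < trials; i++) {
    const pool = {
      x: BigInt(1000 + Math.floor(Math.random() * 1e6)),
      y: BigInt(1000 + Math.floor(Math.random() * 1e6)),
    };
    const kBefore = pool.x * pool.y;
    const after = swap(pool, BigInt(1 + Math.floor(Math.random() * 1e5)));
    if (after.x * after.y < kBefore) return false; // invariant violated
  }
  return true;
}
```

This is property-based testing, not formal verification – it samples inputs rather than proving all of them – but it catches the same class of bugs cheaply and runs in any CI pipeline.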
5 things I actually changed about my own security
After going through all of this, I made some real changes. No theoretical stuff. Things I actually did.
1. I stopped trusting any link in an email
January’s $282 million phishing loss happened when someone clicked a link and entered their seed phrase on a fake support page. The BONK.fun hack in March was even crazier – attackers hijacked the site’s actual domain and replaced it with a wallet drainer. People who connected their wallets and signed what looked like a terms-of-service pop-up lost everything.
PeckShield’s March data showed this is part of a larger trend. They tracked $52 million in losses across 20 incidents that month – almost double February’s total – and warned of something called “shadow contagion,” in which one compromised piece of infrastructure brings down multiple platforms.
What I do now: I enter URLs manually. Every time. I have bookmarked my most used DApps and never click a link in an email or DM to interact with anything crypto-related. When a protocol sends me an email, I open the app directly.
If you’re building a product, consider adding per-user verification codes to your emails so users can confirm a communication is genuine:

```javascript
const crypto = require('crypto');

function makeEmailCode(userId, timestamp) {
  const hmac = crypto.createHmac('sha256', process.env.EMAIL_SECRET);
  hmac.update(`${userId}:${timestamp}`);
  return hmac.digest('hex').slice(0, 8).toUpperCase();
}

// Put this code in every email: "Your code: A3F8B2C1"
// Users verify at yourapp.com/verify
2. I use an address book for every repeat transfer
You have probably heard of clipboard hijacking – malware that swaps the address you copied for an attacker-controlled lookalike. The first 4 characters match, the last 4 match. At a glance it looks fine.
AI made matters worse: generating these lookalike addresses used to take days of GPU time. Now it takes minutes. A NOMINIS report from February 2026 documented a victim who lost $100,000 in USDT to exactly this attack.
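The “first 4 and last 4 match” glance is exactly what these generators target, so a wallet can catch the swap at paste time by comparing the *entire* string against the saved contact. A minimal sketch – the function name and return values are my own, and the known-good address is assumed to come from your contact list:

```javascript
// Compare a pasted address against the full saved address, and flag the
// dangerous case where only the visible ends match (likely clipboard hijack).
function checkPastedAddress(saved, pasted) {
  const a = saved.toLowerCase();
  const b = pasted.toLowerCase();
  if (a === b) return "ok";
  // "0x" plus the first 4 hex chars, and the last 4 hex chars.
  const endsMatch = a.slice(0, 6) === b.slice(0, 6) && a.slice(-4) === b.slice(-4);
  return endsMatch ? "likely-hijack" : "mismatch";
}
```

A full match passes; a mismatch in the middle with matching ends is the clipboard-hijack signature and deserves a loud warning rather than a silent send.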
What I do now: Any address I send to more than once, I save it as a contact in my wallet the first time (after triple checking). Then I select from my contact list. I never include raw addresses for repeat transfers.
If you are a developer, build the following into your product:
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract AddressBook {
    mapping(address => mapping(string => address)) private contacts;

    function save(string calldata name, address wallet) external {
        contacts[msg.sender][name] = wallet;
    }

    function sendTo(string calldata name) external payable {
        address to = contacts[msg.sender][name];
        require(to != address(0), "Not found");
        (bool ok,) = to.call{value: msg.value}("");
        require(ok, "Failed");
    }
}
```
No clipboard required. Attack vector eliminated.
3. I have locked my development environment
This is for anyone who writes Web3 code. Fake npm packages are everywhere – ethers-v6-utils, web3-connector-v2 – names so close to real packages that you might install them on autopilot. They scan your .env files and ship your keys to the attacker.
AI has made this ten times worse. Attackers can now automatically generate README files, spoof download counts, and create realistic-looking source code that passes casual inspection.
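Since typosquats by definition sit one or two keystrokes away from the real name, you can screen your own dependency list against the packages you actually intend to use. A sketch – the allowlist and function names are illustrative, and edit distance is only a heuristic, not a substitute for reviewing what you install:

```javascript
// Levenshtein edit distance between two strings (dynamic programming).
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,       // deletion
        dp[i][j - 1] + 1,       // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Flag dependencies suspiciously close to a known package without
// actually being that package. knownGood is your own allowlist.
function findTyposquats(deps, knownGood) {
  return deps.filter(dep =>
    !knownGood.includes(dep) &&
    knownGood.some(good => editDistance(dep, good) <= 2)
  );
}
```

Run it over the keys of your package.json in a pre-install or CI step; anything flagged gets a manual look before it ever touches your machine.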
What I do now: I pin exact dependency versions. I use npm ci in CI/CD. And I added a pre-commit hook that blocks any commit that contains anything that looks like a private key:
```bash
#!/bin/bash
echo "Checking for secrets..."
if grep -rn "0x[a-fA-F0-9]\{64\}" --include="*.ts" --include="*.js" \
    --include="*.env" . 2>/dev/null | grep -v node_modules; then
  echo "BLOCKED: Possible private key found"
  exit 1
fi
npm audit --audit-level=high 2>/dev/null || echo "WARNING: vulnerabilities found"
echo "Clear."
```
4. I treat any “urgent” request as hostile
Voice cloning is real. Thirty seconds of audio from a podcast or Twitter Space is enough to clone a person’s voice. Attackers use this to call multi-signature signers, spoof an urgent request, and get transactions approved.
The Chainalysis report directly linked this to the 1,400% increase in impersonation scams. The US Department of Justice recently arrested a crew that did exactly this: they posed as Ledger support staff and stole crypto from users who shared their recovery phrases. The Feds recovered $600,000 in USDT, but that is only a tiny fraction of what was lost.
What I do now: If someone asks me for an approval through one channel – phone, direct message, email – I verify through another channel before acting. A call from a co-signer? I text them separately. A DM on Telegram? I check on Signal. If they can’t verify through a second, independent channel, I don’t sign. Period.
For teams building multisig tools: add mandatory 24-hour timelocks. No bypass for “emergencies.” Real emergencies can wait a day. Fake ones can’t.
5. I don’t just trust wallet simulations
Modern wallets simulate what a transaction will do before you sign. That’s helpful. But some malicious contracts can detect when they are being simulated and show you a different result than what actually happens on chain.

During the simulation: “You will receive 500 USDC.” On the actual blockchain: your approved tokens get drained.
What I do now: For every transaction involving a contract I have never interacted with before, I run an independent simulation on Tenderly. I also do basic sanity checks – does this transaction make sense? Why would a random contract give me 500 USDC? If it seems too good, it probably is.
If you build wallets, add automatic simulation under several conditions:

```javascript
async function checkTx(txData, provider) {
  const prices = [0n, 1n, 20000000000n];
  const results = [];
  for (const gp of prices) {
    try {
      results.push(await provider.call({ ...txData, gasPrice: gp }));
    } catch (e) {
      results.push("err");
    }
  }
  if (new Set(results).size > 1) {
    console.error("Contract behaves differently under different conditions. Don't sign.");
    return false;
  }
  return true;
}
```
The good news (there is some)
It’s not all bleak. When Venus Protocol was attacked last year, their security monitoring tool (Hexagate) flagged it 18 hours in advance. They paused the protocol, liquidated the attacker’s position, recovered every dollar, and even made the attacker lose money through a governance vote. This is how it should work.
The Ethereum Foundation also funded Security Alliance (SEAL) to specifically search for wallet drainers. More resources for defense is always a good sign.
But on an individual level, if you sign a bad transaction, no one is coming to rescue you. The best protection is still to do boring things consistently: enter your URLs, use address books, lock down your dependencies, check across multiple channels, and assume that every unexpected request is an attack.
AI has made the bad guys faster. We just have to be more careful.
The code examples in this article are simplified for educational purposes and have not been tested for production. Do not deploy them without proper verification. This is not financial advice. Do your own research.