Top AI firm alleges Chinese labs used 24K fake accounts to siphon US tech
FIRST ON FOX: As Washington tightens export controls to preserve America’s artificial intelligence edge, top AI firm Anthropic says three China-based AI laboratories found another way to access advanced U.S. capabilities.
The U.S. firm alleges DeepSeek, Moonshot AI and MiniMax used roughly 24,000 fraudulent accounts to generate more than 16 million exchanges with Anthropic's Claude chatbot in a coordinated "distillation" campaign designed to extract high-value model outputs, according to a report first obtained by Fox News Digital. 
The threat goes beyond ripping off U.S. companies, according to the report. Anthropic argues that models built through large-scale distillation are unlikely to retain the safety guardrails embedded in frontier U.S. systems.
"Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems—enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance," Anthropic said. 
KYRSTEN SINEMA WARNS US ADVERSARY WILL PROGRAM AI WITH 'CHINESE VALUES' IF AMERICA FALLS BEHIND IN TECH RACE
Anthropic says it identified the campaigns using IP address correlations, request metadata and infrastructure indicators that differed sharply from normal customer traffic. The activity, the company said, was concentrated on Claude’s most advanced capabilities — including complex reasoning, coding and tool use — rather than casual consumer prompts.
"We have high confidence these labs were conducting distillation attacks at scale," Jacob Klein, Anthropic’s head of threat intelligence, told Fox News Digital.
Distillation is a common AI training technique in which a smaller or less capable model is trained on the outputs of a stronger one. 
Frontier labs often use it internally to create cheaper versions of their own systems. But Anthropic says the campaigns it uncovered were unauthorized and designed to shortcut years of research and reinforcement learning work.
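The core mechanic of distillation can be illustrated with a minimal sketch: the student model is trained to match the teacher's output distribution, typically by minimizing cross-entropy (or KL divergence) against the teacher's temperature-softened probabilities. The function names and toy logits below are illustrative, not drawn from any lab's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature.

    Higher temperatures flatten the distribution, exposing more of the
    teacher's relative preferences between classes."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened
    outputs -- the objective minimized during distillation."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# A student that mimics the teacher incurs a lower loss than one that
# disagrees; gradient descent on this loss pulls the student toward
# the teacher's behavior without access to the teacher's weights.
teacher = [4.0, 1.0, 0.5]
aligned_student = [4.0, 1.0, 0.5]
misaligned_student = [0.5, 1.0, 4.0]
assert distillation_loss(teacher, aligned_student) < distillation_loss(teacher, misaligned_student)
```

This is why distillation only requires query access to a model's outputs, not its internals, and why large volumes of API exchanges are the raw material for the campaigns Anthropic describes.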
DEMOCRATS WARN TRUMP GREEN-LIGHTING NVIDIA AI CHIP SALES COULD BOOST CHINA’S MILITARY EDGE
Across the three operations, more than 16 million exchanges were generated over a period ranging from weeks to months, according to Klein. Anthropic intervened after detecting the activity, though he acknowledged the broader challenge is ongoing.
"There isn’t an immediate silver bullet to stop all of these," Klein said. "We view this as larger than Anthropic."
While the company cannot precisely quantify how much the Chinese labs improved their systems, Klein …