US Lawmakers Call for AI Safety Institute to Mitigate China’s AI Expansion

by Jane Doe
June 4, 2025
in AI

Prominent US lawmakers are making a major bipartisan push to expand the role of the AI Safety Institute (AISI) to directly tackle the growing national security threat posed by China's rapid advances in AI. The push comes as concerns mount over Beijing's state-backed drive to dominate global AI, and with it threats to data security, intellectual property, and strategic military advantage.

John Moolenaar (R-MI), chairman of the House Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party, and ranking member Raja Krishnamoorthi (D-IL) called on the Department of Commerce to expand AISI's role within the next month.

In a letter to Commerce Secretary Howard Lutnick, they pointed to a "strong national security need for greater understanding, prediction and preparation for PRC's AI progress."

Among the lawmakers' key "wake-up calls" was DeepSeek's large language model, R1, released in January 2025. A Committee investigation into DeepSeek reportedly revealed "multiple national security risks" associated with the service, "including the funneling of Americans' private data to [the PRC], manipulation of the model's outputs according to [PRC] law, and the potential theft of U.S. AI technology through model distillation."

As AI matures, the congressmen argue, the importance of anticipating PRC AI capabilities and avoiding strategic surprise will only increase. They call for a whole-of-government approach to guarantee continued American leadership in AI innovation, naming AISI's "unique technical expertise, strong industry partnerships, and testing and evaluation experience" as essential assets.

They suggested several ways AISI could help secure U.S. national security, from identifying the strengths and weaknesses of top PRC AI models and establishing a baseline against which to measure U.S. models, to assisting private-sector efforts to stymie the theft of U.S. AI technology.

The bipartisan effort reflects Washington's mounting unease with China's aggressive approach to AI, which is closely tied to its military-civil fusion doctrine and its global expansion efforts, including the Digital Silk Road. Lawmakers view these programs as a direct threat to U.S. technological leadership and to the security interests of the United States.

© 2025 Sumtrix – Your source for the latest in Cybersecurity, AI, and Tech News.