Global Tech Leaders Launch Innovative Open-Source AI Safety Consortium

by Jane Doe
May 28, 2025
in AI

A group of major tech corporations has announced the formation of the Open-Source AI Safety Consortium (OSAISC), a step toward confronting the existential risks that can accompany highly advanced artificial intelligence.

The effort brings together executives from Silicon Valley, Europe, and Asia who share a commitment to fund and coordinate work aimed at the responsible evolution of AI.

At its heart, the consortium brings industry partners together to jointly fund and develop research, tools, and best practices for keeping AI systems in check.

Acknowledging AI’s rapid progress, especially in domains such as large language models and autonomous agents, the founding members underscored the pressing need for a collective, open approach as a necessary complement to innovation, ensuring that AI systems are designed, deployed, and used in ways that align with societal goals.

“AI is a technology with tremendous potential to help humanity, but its power requires responsible stewardship,” said Anya Sharma, CEO of NovaTech and one of the main architects of the consortium.

“Together, by pooling our knowledge and resources in an open-source manner, we believe we can build a more powerful safety net, one that will help address the world’s most challenging safety problems and advance the collective knowledge of all AI researchers and developers.”

The OSAISC is expected to begin with several key initiatives: building open-source libraries that help practitioners recognize and mitigate various forms of bias (such as sample selection bias), creating standardized evaluation platforms to measure AI safety risks, and developing practical, straightforward protocols for disclosing potential vulnerabilities in AI systems.
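
To make the bias-mitigation initiative more concrete, here is a minimal illustrative sketch, in Python, of the kind of check such a library might offer. It is a speculative example: the function and data below are hypothetical and do not reflect any tooling the OSAISC has announced. It simply compares how often each group appears in a training sample against its known share of the wider population, flagging sample selection bias as the gap between the two.

# A minimal, hypothetical sketch of a sample-selection-bias check; not an
# actual OSAISC library. It compares group shares in a training sample
# against known population shares and reports the gap for each group.
from collections import Counter

def selection_bias_report(sample_labels, population_shares):
    """Return per-group gaps: observed share in the sample minus expected share."""
    counts = Counter(sample_labels)
    total = sum(counts.values())
    report = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = round(observed - expected, 4)
    return report

# Example: a dataset that over-represents one region relative to its population.
gaps = selection_bias_report(
    ["north"] * 70 + ["south"] * 30,   # labels observed in the training data
    {"north": 0.5, "south": 0.5},      # expected population shares
)
print(gaps)  # {'north': 0.2, 'south': -0.2} -> 'north' is over-represented by 20 points

Real libraries would go much further, covering statistical significance, intersectional groups, and automated mitigation, but the underlying idea of comparing observed and expected distributions is the same.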

The consortium will also fund independent research into long-term AI safety challenges and hold workshops and educational programs for AI developers to raise awareness and build capability around these issues.

Founding members include executives at companies focused on AI, cloud computing, and software. The consortium has deliberately adopted an open-source model to ‘help prevent bad actors from finding ways to exploit the technology,’ the team says, and to encourage ‘maximum participation and testing’ of its tools by the AI community around the world.

By sharing their research and tools with the public, the leaders hope to spur a more vigorous push toward robust safety systems and to foster a culture of transparency and mutual support in the field.

“We feel that AI safety cannot be a competitive advantage; it has to be an area where we cooperate,” said Kenji Tanaka, CTO of Global AI Solutions and another founding member. “By embracing open-source principles so the whole world can participate, the benefits of our research will be accessible to humanity at large, and we can all work together to address these challenges.

“Our commitment to open source will ensure that the work is available across a broad spectrum of platforms and can be adapted to new safety needs quickly, allowing individuals and corporations to focus on applying the work we release to making AI systems safer rather than getting bogged down in tools and process.”

The OSAISC announcement has been well received by the academic, policy, and civil society communities, which have advocated for greater collaboration and standardization in efforts to improve AI safety.

The consortium will convene its first research summit in the fall of 2025, bringing together top experts to set initial research priorities and a roadmap for future progress. The move is a key milestone toward ensuring that the transformative potential of AI is realized safely and beneficially for all humanity.
