Sumtrix
Global Tech Leaders Launch Innovative Open-Source AI Safety Consortium

Jane Doe by Jane Doe
May 28, 2025
in AI

A group of major tech corporations announced the formation of the Open-Source AI Safety Consortium (OSAISC), a step toward addressing the existential risks posed by highly advanced artificial intelligence.

The effort brings together executives from Silicon Valley, Europe, and Asia who share a commitment to funding and coordinating work on the responsible development of AI.

At its heart, the consortium is an organization that unites industry partners to jointly fund and develop research, tools, and best practices for keeping AI systems safe and accountable.


Acknowledging AI’s rapid progress, especially in domains such as large language models and autonomous agents, the founding members underscored the pressing need for a collective, open approach as a necessary complement to innovation, so that AI systems are designed, deployed, and used in ways that align with societal goals.

“AI is a technology with tremendous potential to help humanity, but its power requires responsible stewardship,” said Anya Sharma, CEO of NovaTech and one of the main architects of the consortium.

“Together, by sharing our combined knowledge and resources in an open-source manner, we believe we can build a more powerful safety net, one that addresses the field’s most challenging safety problems and advances the collective knowledge of all AI researchers and developers.”

The OSAISC is expected to start with several key initiatives: building open-source libraries to help practitioners recognize and mitigate various forms of bias (such as sample selection bias), creating standardized evaluation platforms to measure AI safety risks, and developing practical, straightforward protocols for disclosing potential vulnerabilities in AI systems.

The consortium will also fund independent research into long-term AI safety challenges and hold workshops and educational programs for AI developers to raise awareness and capability around these issues.

Founding members include executives at companies focused on AI, cloud computing, and software. The consortium adopted an open-source model specifically to ‘help prevent bad actors from finding ways to exploit the technology,’ the team says, and to encourage ‘maximum participation and testing’ of its tools by the AI community around the world.

By sharing their research and tools with the public, the leaders hope to accelerate progress toward robust safety systems and to foster a culture of transparency and mutual support in the field.

“We feel that AI safety cannot be a competitive advantage; it has to be an area where we cooperate,” said Kenji Tanaka, CTO of Global AI Solutions and another founding member. “By embracing open-source principles, the benefits of our research will be accessible to humanity at large, and we can all work together to address these risks.

Our commitment to open source will ensure that the work is available across a broad spectrum of platforms and can be adapted quickly to new needs, allowing individuals and organizations to focus on applying the tools we release to AI safety problems rather than getting bogged down in tooling and process.”

The OSAISC announcement has been received well by the academic, policy, and civil society communities, which have advocated for more collaboration and standardization in efforts to improve AI safety.

The consortium will convene its first research summit in the fall of this year, bringing together top experts to set initial research priorities and a roadmap for future progress. The move is a key milestone toward ensuring that the transformative potential of AI is realized safely and beneficially for all humanity.

Sumtrix.com

© 2025 Sumtrix – Your source for the latest in Cybersecurity, AI, and Tech News.
