
Fortanix Joins NIST AI Safety Institute to Advance Trustworthy AI


Confidential Computing Pioneer to Contribute its Knowledge of Real-World Applications to Support the Evaluation of Methods and Techniques Used to Secure New AI Technologies

Fortanix, a leader in data-first cybersecurity and a pioneer of Confidential Computing, announced that it has joined more than 200 of the nation’s leading artificial intelligence (AI) stakeholders in a Department of Commerce initiative to support the development and deployment of safe and trustworthy AI. Established by the Department of Commerce’s National Institute of Standards and Technology (NIST), the U.S. AI Safety Institute Consortium (AISIC) will bring together AI creators and users, academics, government and industry researchers, and civil society organizations to meet this mission.


The Consortium will support the collaborative development of a new measurement science, enabling the identification of proven, scalable, and interoperable techniques and metrics that promote the development and responsible use of safe and trustworthy AI, particularly for the most advanced AI systems, such as the most capable foundation models.

The AISIC brings together the largest collection of AI developers, users, researchers, and affected groups in the world. Members include Fortune 500 companies, academic teams, non-profit organizations, and other U.S. Government agencies, all focused on the R&D necessary to enable safe, trustworthy AI systems and underpin future standards and policies. The members will help NIST implement, iterate on, sustain, and extend priority projects in research, testing, and guidance on AI safety. The Consortium is a critical pillar of the U.S. AI Safety Institute (USAISI) and will ensure that the Institute’s research and testing work is integrated with the broad community working on AI safety around the country and across the world.

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do,” said Gina Raimondo, U.S. Secretary of Commerce. “Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack – and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

“Participation in the NIST AISIC represents an opportunity for Fortanix to collaborate with leading organizations engaged in AI research and implementation. The objectives established by NIST are entirely consistent with work conducted by Fortanix and our industry partners since 2019 that has delivered secure and trustworthy AI systems for the benefit of our customers and end-users,” said Dr. Richard Searle, Vice President of Confidential Computing at Fortanix. “As adoption of AI systems increases across different industry domains, it is vital that appropriate attention is given to individual data privacy, systemic safety and security, and the interoperability of data, models, and infrastructure. We look forward to contributing to the valuable work of the Consortium and collaborating with fellow AISIC members in support of the USAISI mission.”


As a pioneer in the development of Confidential Computing, Fortanix has gained extensive experience helping customers secure critical AI systems and protect the privacy of data used for training, testing, and inference. Using hardware-based trusted execution environments (TEEs) with attestation of secure deployment, Confidential Computing provides integrity validation for application software and shields AI models, and the data they rely on, while in use. By isolating AI computation from untrusted system components, Confidential Computing is well suited to protecting AI systems against today's sophisticated cyber threats.
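The attestation-gated flow described above can be illustrated with a minimal sketch. Everything here is hypothetical and invented for illustration: it is not a Fortanix or TEE-vendor API. The sketch simulates the core idea that a key custodian releases a model-decryption key only after verifying an attestation report proving the enclave runs the expected, untampered code measurement.

```python
import hashlib
import hmac
from typing import Optional

# Hypothetical names throughout -- this only illustrates the attestation pattern.
# Measurement (hash) of the trusted AI workload build we expect to see.
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-ai-model-v1").hexdigest()


def attest(enclave_measurement: str, report_key: bytes) -> bytes:
    """Simulate an attestation report: an HMAC over the enclave's code
    measurement, keyed by a hardware-rooted secret."""
    return hmac.new(report_key, enclave_measurement.encode(), hashlib.sha256).digest()


def release_model_key(report: bytes, measurement: str, report_key: bytes,
                      model_key: bytes) -> Optional[bytes]:
    """Release the model-decryption key only if the report is authentic
    AND the measurement matches the expected (trusted) build."""
    expected_report = hmac.new(report_key, measurement.encode(),
                               hashlib.sha256).digest()
    if hmac.compare_digest(report, expected_report) and \
            measurement == EXPECTED_MEASUREMENT:
        return model_key
    return None


# Usage: a trusted enclave receives the key; a tampered one does not.
REPORT_KEY = b"hardware-rooted-secret"   # stands in for the CPU's attestation key
MODEL_KEY = b"aes-key-protecting-model"

good_report = attest(EXPECTED_MEASUREMENT, REPORT_KEY)
bad_measurement = hashlib.sha256(b"tampered-model").hexdigest()
bad_report = attest(bad_measurement, REPORT_KEY)

assert release_model_key(good_report, EXPECTED_MEASUREMENT,
                         REPORT_KEY, MODEL_KEY) == MODEL_KEY
assert release_model_key(bad_report, bad_measurement,
                         REPORT_KEY, MODEL_KEY) is None
```

In a real deployment the report would be signed by the CPU vendor's attestation infrastructure and verified against a certificate chain, but the gating logic, "no valid measurement, no key", is the same.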

As part of its commitment to the NIST AISIC initiative, Fortanix will contribute knowledge gained in the application of Confidential Computing to such diverse application contexts as radiology image classification for health care, human genome analysis, anti-money laundering and identity fraud detection in financial services, secure object detection and image classification, radio frequency spectrum analysis, and signal rectification for renewable energy infrastructure. This experience, combined with the proven capability of Fortanix Data Security Manager™ to protect data at rest, in transit, and in use, will support evaluation of the methods and techniques required to ensure public confidence in the safety and security of new AI technologies.

Fortanix looks forward to contributing to the important work of the AISIC, alongside NIST and the other member organizations, to develop the foundations for the safe and trustworthy AI systems that will influence every aspect of our economy and society.


