CIO Influence
CIO Influence News Machine Learning Natural Language Security

NRI Secure Launches “AI Blue Team,” a Security Monitoring Service for Systems using Generative AI

NRI Secure Launches “AI Blue Team,” a Security Monitoring Service for Systems using Generative AI

NRI SecureTechnologies, Ltd. (Head Office: Chiyoda Ward, Tokyo; President: Shunichi Tatewaki, “NRI Secure”), launched a new Service named “AI Blue Team,” a Service which provides Security Monitoring for Systems using Generative AI.

Utilizing AI Blue Team in conjunction with AI Red Team, a Security Assessment Service released in December 2023, identifies existing system-specific vulnerabilities, enabling comprehensive and continuous monitoring of security measures on systems utilizing Large Language Models (LLM). [*]1 [*]2

Risks Introduced by Generative AI

In recent years, AI use has been increasing in various fields. As more platforms implement AI in innovative ways, new vulnerabilities specific to AI developments are emerging.

With the increase in use of Generative AI and Large Language Models (LLMs) in emerging services, especially those focused on operational efficiency, special considerations and new security measures must be taken. LLMs face a plethora of Threats such as Prompt Injection, Prompt Leaking, Hallucination, Sensitive Information Disclosure, Bias Risk, and Inappropriate Content Output [*]3 [*]4 [*]5 [*]6

Overview and Features of AI Blue Team

NRI Secure places utmost importance on accurate detection of Vulnerabilities and associated Risks, and on the continuous accumulation of Threat Intelligence, information collected and analyzed about Security Threats for application in monitoring operations. As more Threat Intelligence is gathered, analyzed, and processed, AI Blue Team can respond to new attack techniques and vulnerabilities discovered with increasing accuracy. By pairing the AI Blue Team service with the AI Red Team service, specialized Threat Intelligence from systems using Generative AI can be gathered. The purpose of this Service is to support LLM-associated Risk Management through continuous monitoring, so that companies and organizations can focus on improving operational efficiency and business transformation using LLM securely.

Before introducing AI Blue Team to a new organization, an AI Red Team Service Assessment is performed first. By applying intelligence gathered from the Assessment Results of the AI Red Team Service on a customer’s system to the AI Blue Team Service, effective countermeasures can be taken against Threats that are difficult to handle with other AI defense solutions. The two main features of this Service are as follows.

Recommended: Top Hybrid Cloud Storage Trends for CIOs in 2024
1. Avoidance of Widespread and Novel AI Risks by Continuous Monitoring of Generative AI Systems

Information on Input/Output between Generative AI and the system it is built upon is linked to the detection APIs provided by the AI Blue Team Service. When harmful Input/Output is detected, appropriate parties within the organization utilizing the AI Blue Team Service are notified. [*]7

In addition to monitoring and response to Threats specific to LLMs, NRI Secure analysts carefully study attack trends from Assessment Results to accumulate Threat Intelligence, to in turn respond more effectively to emerging threats and attack methods and provide continuous updates and enhancements to the AI Blue Team Service. The monitoring dashboard that an NRI Secure analyst reviews and monitors can also be accessed by the customer using the AI Blue Team Service, so it is possible to directly confirm detection status by both parties in real-time.

2. Defense against System-specific Vulnerabilities Detected by AI Red Team and Enhancement of Protection

In the development field of systems utilizing generative AI, system-specific vulnerabilities can be introduced depending on the ways AI is utilized and the levels of authority delegation. These types of vulnerabilities cannot be addressed by general defense solutions and require individualized countermeasures.

In response to the specific AI system vulnerabilities that are detected by the AI Red Team Service’s Security Assessments, the AI Blue Team Service accumulates Threat Intelligence, both specific to a customer, as well as general-purpose Threat Intelligence, to customize and implement the most effective security measures. This customized approach is expected to further strengthen the protection level of a customer’s entire system by protecting that system from specialized Threats and attacks, and from exploitation of inherent and latent vulnerabilities.

Also Read: Security as a Business Enabler: How Collaboration Between IT and Business Teams Strengthens Data Protection

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

Related posts

AVCtechnologies Announces Significant Projected Revenue Growth of Kandy, its Cloud Communications Platform

CIO Influence News Desk

Cloudflare Collaborates with Microsoft and Major Search Engines to Help Improve Websites’ Search Results

CIO Influence News Desk

CT Event Asia to Host 5G TECH 2021

CIO Influence News Desk