consulting-hero-bg

Security Monitoring Service to Protect Companies from AI-Specific Risks

Supporting AI-powered Operational Efficiency and Business Transformation with AI Blue Team (ABT), a proprietary solution that provides continuous monitoring of AI Systems.

Does One of These Reflect Your Current Situation?

I want to make use of Generative AI, but I don’t know all of the risks. Moreover, I have concerns about how comprehensive security measures are.

I am looking to implement defensive solutions similar to conventional security measures, but I am unsure about what to use for AI-specific risks.

I want to stay updated with the latest protections from threats to AI, but as I don’t have resources to research the current AI Threat Landscape, nor monitor usage of my AI System, I am concerned about how secure it is.

feature-image03

AI Blue Team is a Security Monitoring Service that specializes in AI-powered systems.

In recent years, the use of AI has been increasing in various fields. As more platforms implement AI in innovative ways, new vulnerabilities specific to AI developments are emerging. The AI Blue Team is designed to account for the nature of AI and its inherent risks and is tailor-made to monitor systems for security risks that conventional security monitoring solutions cannot cover.

*Presently, AI Blue Team monitors for Risks within AI used in LLMs only.

With the increase in use of Generative AI and Large Language Models (LLMs) in emerging services, especially those focused on operational efficiency, special considerations and new security measures must be taken. LLMs face a plethora of Threats such as Prompt Injection, Prompt Leaking, Hallucination, Sensitive Information Disclosure, Bias Risk, and Inappropriate Content Output (See Figure 1).

23031218-01en

Figure 1: Threat Flow Diagram for Systems Using LLM Engines
Example: Retrieval-Augmented Generation (RAG) Case

 

NRI Secure places utmost importance on accurate detection of Vulnerabilities and associated Risks, and on the continuous accumulation of Threat Intelligence, information collected and analyzed about Security Threats for application in monitoring operations. As more Threat Intelligence is gathered, analyzed, and processed, AI Blue Team can respond to new attack techniques and vulnerabilities discovered with increasing accuracy. By pairing the AI Blue Team service with the AI Red Team service, specialized Threat Intelligence from systems using Generative AI can be gathered. When pairing the AI Red Team and AI Blue Team services, more effective countermeasures can be taken against Threats that are difficult to handle with other AI defense solutions.

What the Power of AI Blue Team Can Do For You

01

Continuous Monitoring of Generative AI Systems to Prevent Threats to AI

Information on input and output between the Generative AI and the system it is built upon is linked to the Detection APIs provided by AI Blue Team. When harmful input or output is detected, notifications are immediately sent to the customer. NRI Secure analysts carefully study attack trends from Assessment and Detection Results to accumulate Threat Intelligence, to in turn respond more effectively to emerging threats and attack methods and provide continuous updates and enhancements to the AI Blue Team service. The monitoring dashboard that an NRI Secure analyst reviews and monitors can also be accessed by a customer using the AI Blue Team Service, so it is possible to directly confirm detection status by both parties real-time.

02

Enhanced Protection from System-specific Threats Identified by the AI Red Team

In the development field of systems utilizing generative AI, system-specific vulnerabilities can be introduced depending on the ways AI is utilized and the levels of authority delegation. These types of vulnerabilities cannot be addressed by general defense solutions and require individualized countermeasures.

In response to the specific AI system vulnerabilities that can be found by the AI Red Team Service’s Security Assessments, the AI Blue Team Service accumulates data about these findings into Threat Intelligence. This allows the AI Blue Team service to provide updates to system-specific Threat Intelligence as well as general-purpose Threat Intelligence to customize and implement the most effective security measures. This customized approach is expected to further enhance protection for AI-powered systems by protecting it from specialized Threats and Attacks, and from exploitation of inherent and latent vulnerabilities (See Figure 2).

ai-blue-team
Figure 2: Measures Strengthening Generative AI System Security when both AI Red Team and AI Blue Team are Used

Pricing

Pricing may vary. Contact NRI Secure for an estimate.

Quotes and Estimates may vary depending on number of target systems, users, detection API call count, and other details.
For more information, Contact Us using the form below.

Frequently Asked Questions

Q. How can the AI Blue Team Service be used?
A.

This service is currently only available in a SaaS format. Input and output between your system and the Generative AI must be provided to the AI Blue Team Detection API over the Internet. The AI Blue Team dashboard can also be accessed over the Internet.

Please feel free to contact us if other forms of use may be needed.

Q. Will I need to register for the AI Red Team service first?
A.

AI systems new to the AI Blue Team service are required to undergo an AI Red Team Security Assessment first, to create the body of specialized Threat Intelligence that will be applied to your system. In doing so, you can expect enhanced addressing of Threats that are specific to your implementation and that would otherwise be difficult to address with other AI defense solutions.

Q. What do I need to get started?
A.

To get started, a few lines of code are necessary to call the AI Blue Team Detection API from your application. This requires a preparation period, in which we discuss implications and perform paperwork with you, so allow several weeks to one month before actual monitoring begins.

Q. I have concerns about Detection API call delay.
A.

On average, there is latency of about 0.5 to 1.0 seconds for each Detection API call. However, the Detection API has various options. Latency may vary due to options selected and size of request to monitor.

Q. What about False Positives or False Negatives? Can detections be missed?
A.

The Detection API has a mode for Detection and a mode to Block unwanted requests. To get the most benefit from AI Blue Team, tune monitoring at the start while configured for Detection mode. Once the Detection API is tuned to optimal reporting, it can be switched to Blocking mode.

It will use a multi-layered set of Detection functions, so that even if a single Detection function cannot detect a Threat, likely another Detection function will.

Our analysts carefully study all Threat incidences for trend information, which is used to update Threat Intelligence on behalf of the customer continuously to improve Detection Performance.