mss-hero-bg

AI Red Team Service

AI Security Assessment Service

Does one of these reflect your current situation?

icon_01

I am looking to conduct a comprehensive assessment of the security risks associated with our AI-powered system, particularly during its release or update phases.

icon_02

I am seeking assistance to evaluate the security risks of our AI systems, as my current understanding of their internal mechanisms is limited, leading to concerns regarding potential vulnerabilities.

icon_03

I am looking to assess the suitability of the roles and authorizations assigned to the AI integrated within our service.

feature-image03

AI Red Team: Specialized Security Assessment Service for AI-Enabled Systems

In recent years, the use of AI has been increasing in various fields. As more platforms implement AI in innovative ways, new vulnerabilities specific to AI developments are emerging. AI Red Team service is designed to account for the nature of AI and its inherent risks and is tailor-made to assess security risks that cannot be covered by conventional assessments.

* Presently we offer assessment services for AI used in LLMs only.

Service Overview

Our AI Red Team service will identify vulnerabilities in your AI-powered LLM system through a two-stage assessment.

Standalone Prompt Assessment

The Standalone Prompt Assessment identifies vulnerabilities of prompts used with AI.

Assessment Items (Example) Explanation
Prompt Injection A technique that intentionally sends special questions or commands to an AI system to cause unintended results for the developer.
Prompt Leaking A technique that bypasses an AI's original prompt and causes it to reveal its original prompt either in whole or in part.

 

Comprehensive LLM System Assessment

A system using LLM does not operate independently with LLM alone. It is linked with various peripheral systems. Therefore, any system using LLM has not only risks associated with an LLM itself, but also various peripheral risks associated with each of the pieces it is composed of.

 

Threat Examples in Systems Using LLM
EX Retrieval-Augmented Generation (RAG)

23031218-01en

 

The Comprehensive LLM System Assessment assesses the entire system, not just the LLM itself. AI Red Team's approach takes the perspective of problem emergence to assess the following:

  • Scenario-Based Prompt Injection Vulnerability
  • Risk of Agent Abuse
  • Supply Chain Risk
  • Information Leakage
    etc.

 

reason-image01

Why Customers Choose Us

1

Efficient, Comprehensive, and High-Quality Assessments

Our automated assessment testing is an exclusive, innovative development that efficiently and comprehensively identifies vulnerabilities. Our LLM security expert engineers conduct assessments manually to detect specific issues that cannot be covered by automated testing and to delve deeper into identified vulnerabilities.

2

Assessments with Complete Coverage of a System and its Components

We leverage our extensive expertise in traditional security assessments to uncover vulnerabilities caused by AI across the entire platform, addressing the OWASP LLM Top 10 Vulnerabilities. This comprehensive approach is possible because we evaluate not just the AI, but the entirety of the system implementing it, including every component. This allows us to fully understand all perspectives from which a vulnerability may be discovered.

*Comprehensive assessments are comprised of our AI Red Team service in conjunction with our web application and platform assessment services, to cover the entirety of a system or platform.

3

AI Blue Team Service Collaboration

In response to this dilemma, it's crucial to implement protective measures against new types of attacks, even post-release. To tackle these challenges, we are developing a complementary AI Blue Team service focused on the continuous monitoring of AI applications, scheduled for release in April 2024. When integrated with the AI Red Team service, the AI Blue Team will utilize insights from the Red Team's attack simulations to enhance its monitoring capabilities, based on observed successful attack strategies. This synergy will bolster the security of AI services, leveraging the strengths of both services for comprehensive protection.

Service Flow

ai-red-team-serviceflow

 

Information about LLMs, APIs, and assessment prompts used in the system will be collected on our initial pre-assessment consultation. Prior to starting assessment activities, we will confirm the assessment scenario and any system configuration necessary in advance.

Pricing

Price may vary depending on scope of assessment.
Estimates depend on the duration and scope of support, the number of assessment prompts and systems to be assessed, etc. Contact us with a request for more information using the form below.

Frequently Asked Questions

Q. Is it possible to re-evaluate after reviewing the settings?
A. One free re-assessment is provided as part of the service.
Q. I am looking to continuously manage the security of AI applications.
A. We plan to release the AI Blue Team service, which will perform regular monitoring of AI applications. We are looking for companies that can respond to this Proof-of-Concept. Please submit a request for more information if your organization is interested in participating in our Proof-of-Concept pilot program.
Q. Are there any target LLM services?
A. We support major LLM services such as OpenAI. More information available upon request.
Q. Is it possible to assess AI for other uses other than LLM?
A. Our services are currently specialized in LLM assessment, but please feel free to consult us for assessment of other AI.