Model Rating Report
Overview
Claude 4 Family
Claude Opus 4 is a state-of-the-art text and code generation model, with sustained performance on complex, long-running tasks and agent workflows. Claude 4 models are hybrid models offering two modes: near-instant responses and extended thinking for deeper reasoning.
Developer
Anthropic
Country of Origin
USA
Systemic Risk
Open Data
Open Weight
API Access Only
Ratings
Overall Transparency
58%
Data Transparency
37%
Model Transparency
30%
Evaluation Transparency
74%
EU AI Act Readiness
50%
CAIT-D Readiness
36%
Transparency Assessment
The transparency assessment evaluates how clear and detailed the model creators are about their practices. Our assessment is based on the official documentation listed under Sources. While external analysis may contain additional details about this system, our goal is to evaluate the transparency of the providers themselves.
Sources
System Card: https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad13df32ed995.pdf
Announcements:
https://www.anthropic.com/claude/opus
Basic Details
Claude Opus 4 and Claude Sonnet 4 were released on May 22, 2025. The announcement was made on this date, with both models becoming available to users at that time.
Date of Release
The Date of Release is the date when the model was made available to the public. Publishing these dates is especially important when multiple versions of the model are released over time.
EU AI Act Requirements
Annex XI Section 1.1c: date of release
CAIT-D Requirements
California’s AI Training Data Transparency Act- (11) The dates the datasets were first used during the development of the artificial intelligence system or service.
Claude Opus 4 and Claude Sonnet 4 are both available through multiple distribution channels, including a web browser interface, mobile apps (iOS and Android), and APIs including Anthropic's own API, Amazon Bedrock, and Google Cloud's Vertex AI. Opus 4 is available to Pro, Max, Team, and Enterprise users, while Sonnet 4 is also available to free users.
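As a hedged illustration of the API-based distribution described above, the sketch below uses the Anthropic Python SDK; the model identifier is an assumption and should be checked against Anthropic's current model list, and equivalent access is offered through the Amazon Bedrock and Vertex AI SDKs.

```python
# Minimal sketch of API access using the Anthropic Python SDK.
# Assumes ANTHROPIC_API_KEY is set in the environment; the model
# identifier below is an assumption, not an official confirmation.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

response = client.messages.create(
    model="claude-opus-4-20250514",  # assumed identifier for Claude Opus 4
    max_tokens=1024,
    messages=[{"role": "user", "content": "Give a one-sentence summary of your capabilities."}],
)
print(response.content[0].text)
```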
Methods of Distribution
The Methods of Distribution specify all the ways a model can be accessed. Common methods include: a direct model download, access through an API or a hybrid option where a private API endpoint is externally hosted.
EU AI Act Requirements
Annex XI Section 1.1c: methods of distribution
Input modalities include image and text, while output is limited to generated text. Both models feature hybrid reasoning capability with both standard and extended thinking modes.
Modality
The Modality specifies the types of data that the model can process and output. Common domains include text, images, video, audio and tabular data.
EU AI Act Requirements
Annex XI Section 1.1e: modality (e.g., text, image)
200K-token context window for inputs, with up to 64K output tokens for Sonnet 4 and 32K output tokens for Opus 4.
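To make these limits concrete, the sketch below encodes them as constants and validates a hypothetical request before it is sent; the model keys are illustrative labels rather than official identifiers, and it assumes that input and output together must fit within the context window.

```python
# Hedged sketch: documented limits encoded as constants for client-side checks.
# Model keys are illustrative labels, not official API identifiers.
CONTEXT_WINDOW_TOKENS = 200_000

MAX_OUTPUT_TOKENS = {
    "claude-sonnet-4": 64_000,
    "claude-opus-4": 32_000,
}

def validate_request(model: str, prompt_tokens: int, max_tokens: int) -> None:
    """Raise if a request exceeds the documented input or output limits."""
    cap = MAX_OUTPUT_TOKENS[model]
    if max_tokens > cap:
        raise ValueError(f"{model} supports at most {cap} output tokens")
    # Assumption: prompt and generated output share the 200K context window.
    if prompt_tokens + max_tokens > CONTEXT_WINDOW_TOKENS:
        raise ValueError("prompt plus requested output exceeds the context window")

validate_request("claude-opus-4", prompt_tokens=180_000, max_tokens=8_000)  # passes
```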
Input and Output Format
The Input and Output Format are specifications for how the data should be provided to the model and the exact format output by the model. If applicable, the documentation should include maximum size of the data (e.g. context window length).
EU AI Act Requirements
Annex XI Section 1.1e: format of the inputs and outputs and their maximum size (e.g. context window length, etc.)
Proprietary.
License
The License is the terms under which the model is released. It indicates whether the model can be used for commercial purposes and whether it can be modified and redistributed. Models released exclusively through an API may not have a license, but still be governed through a "Terms of Service".
EU AI Act Requirements
Annex XI Section 1.1f: the license
The formal documentation provides general guidance on use cases for each model, and the API documentation provides both high-level and low-level instructions for use.
Instructions for Use
Instructions for Use provide guidance for using the model. Ideally, these include specific examples and recommendations. If applicable, the instructions should specify any software/hardware dependencies needed to use the system, and how the model can interact with hardware/software that is not part of the model itself.
EU AI Act Requirements
Annex XI Section 1.2a: the technical means (e.g. instructions of use, infrastructure, tools) required for the general-purpose AI model to be integrated in AI systems.
Annex XII 1.d: how the model interacts, or can be used to interact, with hardware or software that is not part of the model itself, where applicable;
The documentation extensively covers model capabilities, applications, and some technical specifications. It includes information about how to understand the model's capabilities and proper use, but lacks detail about data and model design.
Documentation Support
Documentation Support evaluates the accessibility and usefulness of the model's documentation. For a high score in this category, the key details required across the categories need to both exist and be easily accessible.
Rating Guide
Documentation is not available or limited to several broad sentences.
Documentation touches many key topics with some detail, but certain areas (e.g. training data) are missing entirely.
Documentation covers all topics that are necessary to use and evaluate the model, but some areas are described vaguely. Excellent documentation may be placed in this category if it is difficult to find and navigate.
Documentation covers almost all or all topics in detail and is easy to navigate.
The developers provide a changelog for the app, the API, and versioned system prompts here.
Changelog
A Changelog is an artifact that lists out versions of the model with changes that were added in each version. Entries in the changelog should make it clear to a user how the system has changed, and what modifications need to be made for effective use. If a model was released once and no changes will be applied, the documentation should make this clear.
Policy
https://www.anthropic.com/legal/aup
Acceptable Use Policy
The Acceptable Use Policy specifies how a model can/can not be used. When a model is released under a fully open-source license, this policy may not be necessary.
EU AI Act Requirements
Annex XI Section 1.1b: acceptable use policies applicable
The materials mention that Anthropic uses data from Claude users who have opted in to have their data used for training, indicating that user data may be collected and used for model training with user consent.
User Data
Model developers should clearly state whether user data is used to train models. For models that are not accessed via an API, the documentation should make it clear if user data from other products offered by the developer were used. Listing out an explicit set of external datasets is an allowed alternative to a “user data” statement, but it should be very clear whether user-related data is in this set.
Anthropic has a detailed data takedown and privacy policy. (Article on copyright infringement).
Data Takedown
Model providers should provide a clear mechanism for submitting takedown claims for copyrighted or personal data. The mechanism can include an online form, an email or an in-app button.
Anthropic uses a Responsible Scaling Policy and has a constitution used during model training.
AI Ethics Statement
Model providers should publish an AI Ethics statement that captures the principles used during model development. Alternatively, a company can publish a set of RAI objectives.
Incidents can be reported by emailing usersafety@anthropic.com. In addition, a Responsible Disclosure Policy is documented here.
Incident Reporting
Model Providers should provide a clear mechanism for submitting model feedback and/or for incident reporting.
Model and Training
The documents detail numerous tasks that Claude 4 models excel at, including coding (both models leading on SWE-bench), agentic search, AI agent applications, content creation, customer-facing AI assistants, visual data extraction, robotic process automation, and knowledge Q&A. Opus 4 is particularly noted for its ability to handle complex, long-running tasks.
In terms of limitations, the model can hallucinate, reinforce disparate treatment (e.g. produce responses that favor certain populations), and be susceptible to prompt injections and jailbreaks (at lower rates than Claude 3.7).
Task Description
The task description should clearly describe the intended uses for the model. Detailed documentation should, also, cover limitations and out-of-scope uses.
Transparency around model capabilities allows users to properly assess if the model is suitable for their task.
Trustible Rating Explanation
This model received a High rating because capabilities and limitations are described in detail with specific examples of prompts and outputs for exemplary cases.
EU AI Act Requirements
Annex XI Section 1.1a: the tasks that the model is intended to perform and the type and nature of AI systems in which it can be integrated
Rating Guide
Model capabilities are not documented.
A general description of model capabilities is provided. For example, the documentation only states that the model can be used for "coding, math and reasoning" tasks.
Intended uses of the model are described in detail with examples. Some general limitations are mentioned.
Both model capabilities and limitations are described in detail and with examples.
No information is provided in any of the available documentation.
Number of Parameters
The Number of Parameters indicates how large the model is.
EU AI Act Requirements
Annex XI Section 1.1d: number of parameters
Claude 4 models are described as "hybrid reasoning models" that offer two modes: near-instant responses and extended thinking for deeper reasoning. They feature extended thinking with tool use capabilities, allowing them to alternate between reasoning and tool use to improve responses. In addition, the AI system provides summaries of long thought processes, generated by an additional smaller model, instead of showing the whole trace (developers can opt out of this process).
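As a hedged sketch of how the two modes are selected in practice, the example below enables extended thinking through the Anthropic Python SDK; the `thinking` parameter, its values, and the model identifier are assumptions drawn from Anthropic's public API documentation and should be verified before use.

```python
# Hedged sketch of requesting extended thinking via the Anthropic Python SDK.
# The `thinking` parameter and model identifier are assumptions; verify against
# current API documentation before relying on them.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-opus-4-20250514",                    # assumed identifier
    max_tokens=16_000,                                 # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 8_000},
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
)

# With extended thinking enabled, content is expected to interleave summarized
# "thinking" blocks with the final "text" blocks.
for block in response.content:
    print(block.type)
```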
Model Design
The Model Design should cover key components of the model and explain how the inputs get transformed into the outputs. Transparency around model architecture can help users understand the suitability of the model for a particular task.
Trustible Rating Explanation
This model received a Low rating because the model design is only described in very high-level terms. For example, in the 123-page System Card, the "Model training and characteristics" section is less than two pages, and there is no information about many key components of model design.
EU AI Act Requirements
Annex XI Section 1.2b: the design specifications of the model...the key design choices including the rationale and assumptions made
Rating Guide
Model design is not documented.
Model architecture is discussed in general terms.
Key components of the model are documented.
Model components are described in detail. Rationales and assumptions are documented.
Claude Opus 4 and Claude Sonnet 4 were trained with a focus on being helpful, honest, and harmless. They were pre-trained on large, diverse datasets to acquire language capabilities and used human feedback, Constitutional AI (based on principles such as the UN's Universal Declaration of Human Rights), and training of selected character traits.
Training Methodology
The Training Methodology should cover the key steps involved in training the model. This should involve both high-level steps and details of the process. For example, Foundation Models are often trained in multiple phases: pretraining, supervised fine-tuning and alignment with human preference/safety. Each step can be implemented via different techniques (e.g. alignment can be done via RLHF or Constitutional AI). Documenting this process can help the users understand the strengths and weaknesses of a particular model.
Trustible Rating Explanation
This model received a Low rating because training procedures are only mentioned in general terms. Target objectives receive more attention, but there is no detail on how those goals are achieved by the training procedure or on the process itself.
EU AI Act Requirements
Annex XI Section 1.2b: the design specifications of...training process, including training methodologies and techniques, the key design choices including the rationale and assumptions made; what the model is designed to optimise for
Rating Guide
Training methodology is not documented.
Training procedures and/or target objectives are mentioned in general terms.
Main steps of the training process are described in detail, including objectives.
Training process is described in detail, including a rationale for the design and any assumptions.
The materials provided do not disclose the computational resources used to train Claude 4 models.
Computational Resources
Computational Resources can include training times, FLOPs (floating point operations) and other details that can be used to assess the magnitude of resources used to train the model.
EU AI Act Requirements
Annex XI Section 1.2d: the computational resources used to train the model (e.g. number of floating point operations – FLOPs-), training time, and other relevant details related to the training;
No information is provided on the carbon footprint or specific mitigations for energy consumption beyond general claims of "model efficiency".
Energy Consumption
Energy Consumption refers to the carbon emission associated with training the model. It can be approximated based on GPUs used and training time (Calculator: https://mlco2.github.io/impact/#compute).
EU AI Act Requirements
Annex XI Section 1.2e: known or estimated energy consumption of the model...
With regard to [this] point, where the energy consumption of the model is unknown, the energy consumption may be based on information about computational resources used.
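As a rough illustration of how such an estimate can be derived from computational resources, in the spirit of the calculator referenced above, the sketch below multiplies GPU count, training time, power draw, data-center overhead, and grid carbon intensity; every figure is a hypothetical assumption, not an Anthropic disclosure.

```python
# Back-of-the-envelope emissions estimate from compute usage.
# All inputs are hypothetical assumptions for illustration only.
gpu_count = 10_000        # assumed number of accelerators
training_days = 90        # assumed wall-clock training time
gpu_power_kw = 0.7        # assumed average draw per accelerator, in kW
pue = 1.1                 # assumed data-center power usage effectiveness
grid_intensity = 0.4      # assumed kg CO2e per kWh of electricity

energy_kwh = gpu_count * training_days * 24 * gpu_power_kw * pue
emissions_tonnes = energy_kwh * grid_intensity / 1_000

print(f"~{energy_kwh:,.0f} kWh, ~{emissions_tonnes:,.0f} t CO2e")
```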
Claude 4 models are hybrid reasoning models with an "extended thinking mode", but no specific architecture details are provided.
System Architecture
The System Architecture description should explain how the model is connected to the end-to-end system. For example, an LLM could be connected to a separate content filtering models for its inputs and/or outputs.
This rating only applies to API-only systems.
EU AI Act Requirements
Annex XI Section 2.3: Where applicable, a detailed description of the system architecture explaining how software components build or feed into each other and integrate into the overall processing.
Data
The materials provided do not disclose the size of the datasets used to train Claude 4 models.
Dataset Size
Dataset Size indicates how much data was used to train the model. This can be specified in terms of the number of documents, tokens or other measures.
EU AI Act Requirements
Annex XI Section 1.2c: the number of data points
CAIT-D Requirements
California’s AI Training Data Transparency Act- (3) The number of data points included in the datasets, which may be in general ranges, and with estimated figures for dynamic datasets.
Claude Opus 4 and Claude Sonnet 4 were trained on a proprietary mix of publicly available information on the Internet, non-public data from third parties, data provided by data-labeling services and paid contractors, data from Claude users who opted in to have their data used for training, and internally generated data at Anthropic. Numeric analysis about the characteristics of the dataset is not available.
Dataset Description
The Dataset Description provides an overview of the data used for training. It should include a description of individual datapoints, distinct subpopulations and the corpus as a whole. The characteristics described can include low-level properties like number of tokens from each data source and semantic properties like percent of documents in each language. While the Data Sources category focuses on the origins of the data, this category is used to review transparency of the final dataset.
EU AI Act Requirements
Annex XI Section 1.2c: information on the data used for training, testing and validation, including...the number of datapoints, their scope and main characteristics
CAIT-D Requirements
California’s AI Training Data Transparency Act- (4) A description of the types of data points within the datasets. For purposes of this paragraph, the following definitions apply: (A) As applied to datasets that include labels, “types of data points” means the types of labels used. (B) As applied to datasets without labeling, “types of data points” refers to the general characteristics.
Rating Guide
No description or analysis of the dataset is available.
Dataset is described in general terms.
Dataset is described in terms of multiple characteristics with some numeric analysis.
Dataset is analyzed across multiple dimensions, including a separate analysis for different subpopulations of the data (e.g. different sources).
Training data sources include publicly available information on the Internet, non-public data from third parties, data from data-labeling services and paid contractors, data from Claude users who have opted in to have their data used for training, and data generated internally at Anthropic. For web data, the web crawler followed industry-standard practices with respect to "robots.txt" instructions and did not access password-protected pages or those requiring sign-in or CAPTCHA verification.
Data Sources
The Data Source documentation covers the types of data used to train the model and how they were collected. Transparency around sources can help users to assess if a model is suitable for their task (e.g. was it trained on their language) and to gauge the risk profile of the model (e.g. was it trained on unrestricted internet data). The documentation should make it clear if the dataset was purchased or licensed.
When creating new datasets, documentation should cover the steps involved in creation and limitations, like missing data. In addition, documentation should state when the dataset was collected. If synthetic data was generated for the dataset, the documentation should make the process involved clear.
Trustible Rating Explanation
This model received a Medium rating because data source classes (e.g., web data) are enumerated, but details about specific sources (which websites) are not provided, nor is there any discussion of how missing or biased data may systematically influence the resulting models.
EU AI Act Requirements
Annex XI Section 1.2c: information on the data used for training, testing and validation ... including type and provenance of data ... how the data was obtained and selected. ... [and] all other measures to detect the unsuitability of data sources
CAIT-D Requirements
California’s AI Training Data Transparency Act- (1) The sources or owners of the datasets
- (2) A description of how the datasets further the intended purpose of the artificial intelligence system or service.
- (6) Whether the datasets were purchased or licensed by the developer
- (12) Whether the generative artificial intelligence system or service used or continuously uses synthetic data generation in its development. A developer may include a description of the functional need or desired purpose of the synthetic data in relation to the intended purpose of the system or service.
Rating Guide
Very few or no details are provided about the data sources. When this rating is applied, the user has little ability to determine if the data sources used are appropriate to use for their task.
The data sources and process for collecting them are described in general terms.
Data sources are enumerated, and the collection process is described in some detail. Justifications and limitations surrounding data curation are not addressed.
Data sources are documented in detail, including justifications and limitations for the choices. Data collection process is described in detail, including a discussion of any missing data, limitations and/or assumptions.
Anthropic partners with data work platforms to engage workers who help improve their models through preference selection, safety evaluation, and adversarial testing. They state they only work with platforms that align with their belief in providing fair and ethical compensation to workers and are committed to safe workplace practices regardless of location. Anthropic has published the Inbound Services Agreement that crowd workers agree to.
Data Collection - Human Labor
This category assesses the transparency surrounding the human labor involved in the generation of training data. For human labor, we refer to individuals outside of the development team that were employed to create, annotate or review datasets. Datasets include both original pre-training data and human preference data that is used iteratively during post-training. Transparency in this category can help assess potential biases in the data and hold developers accountable to fair labor practices.
If no manual annotation or review was used when constructing the dataset, this category may be marked as Not Applicable. This may occur if the dataset is composed entirely of off-the-shelf data from the Internet.
Trustible Rating Explanation
This model received a Medium rating because the documentation gives this issue attention and notes the general geographic and industry background of workers, but it does not provide specific wages or more detailed demographic information for all data workers.
Rating Guide
No information is provided about labor used for dataset construction.
The documentation acknowledges that human labor was involved in the data collection or annotation process but lacks specific details.
The documentation describes the role of human labor in dataset creation, annotation, or review, including methods and scale (e.g., "data annotated by X number of workers from platform Y"). Some information about contributors' demographics, compensation, and biases/limitations is included but lacks comprehensiveness.
The human labor process is fully documented, including the number of contributors, their geographic or demographic diversity, and the specific tasks performed. The documentation includes the sourcing of data (e.g., platforms, partnerships) and a thorough description of labor practices, including payment rates and working conditions. Potential biases introduced by human labor practices are discussed.
Anthropic employed several data cleaning and filtering methods during the training process, including deduplication and classification.
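For context, one common form of deduplication is exact-duplicate filtering over normalized text; the sketch below is a generic illustration of that technique, not a description of Anthropic's undisclosed pipeline.

```python
# Generic exact-duplicate filter over whitespace/case-normalized documents.
# Illustrative only; Anthropic's actual deduplication method is not disclosed.
import hashlib

def dedupe(documents: list[str]) -> list[str]:
    """Keep the first occurrence of each document after normalization."""
    seen: set[str] = set()
    unique = []
    for doc in documents:
        key = hashlib.sha256(" ".join(doc.lower().split()).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

print(dedupe(["Hello  world", "hello world", "Another doc"]))  # keeps 2 documents
```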
Data Preprocessing
The Data Preprocessing documentation covers how data sources were preprocessed for training. This should include both filtering steps (i.e. how were datapoints excluded) and transformation steps (i.e. how were source datapoints modified before training). A clear description of this process is important for assessing risks. For example, it can indicate how personally identifying information (PII) was handled: were documents containing PII removed and/or were PII words, like emails, replaced with a placeholder. In addition, this documentation will enable users to set up additional data correctly for fine-tuning.
Trustible Rating Explanation
This model received a Low rating because the filtering and cleaning steps are described in very broad terms.
EU AI Act Requirements
Annex XI Section 1.2c: information on the data used for training, testing and validation, where applicable, including ... curation methodologies (e.g. cleaning, filtering etc)
CAIT-D Requirements
California’s AI Training Data Transparency Act- (9) Whether there was any cleaning, processing, or other modification to the datasets by the developer, including the intended purpose of those efforts in relation to the artificial intelligence system or service.
Rating Guide
No data preprocessing techniques are documented.
Data filtering and/or cleaning is mentioned in very broad terms.
Detailed description of data preprocessing is provided. For filtering datapoints and/or excluding entire sources, procedures are clearly described. If data filtering is not performed, the documentation should provide a clear justification for the choice.
Detailed description of data preprocessing is provided with justifications and a discussion of limitations. Documentation should include a discussion of the data filtering criteria across multiple risk criteria (e.g removing duplicate data, handling toxic language and checking for data poisoning). While filtering may not be implemented for each dimension, the developers should show that they considered them and provide an explanation for their choice.
The materials provided do not specifically address how Anthropic detected or addressed biases in their training data, beyond broad references to data collection and processing for alignment.
Data Bias Detection
The Data Bias Detection category assesses how developers reviewed the data for potential biases. We use Bias to refer to an incorrect or incomplete representation of human subpopulations (i.e. people of a certain race, gender or religion). Bias can appear in data both as text or images containing stereotypes and harmful content and/or as a lack of representation of a particular group. For this evaluation, we focus specifically on the training dataset, not on mitigations implemented during training, such as safety alignment.
EU AI Act Requirements
Annex XI Section 1.2c: information on the data used for training, testing and validation, including...methods to detect identifiable biases, where applicable
Rating Guide
No bias analysis was conducted, and potential biases in the data are not discussed.
The potential or realized biases in the training dataset are discussed, but no quantitative analysis is included. Biases towards individuals/groups should be referenced explicitly (e.g. an overall mention to unsafe/low-quality content is not sufficient).
In-depth bias analysis is conducted. For example, a demographic analysis was combined with some sentiment/bias analysis.
In-depth analysis for bias is combined with: a documented procedure to reduce bias, explanation for why bias is sufficiently low or justification for not modifying the dataset.
No explanation provided for this rating.
Data Deduplication
The Data Deduplication category assesses whether the documentation discusses how duplicate entries were treated in the training data.
The materials provided do not specifically address how toxic and hateful language was handled in the training data.
Data Toxic and Hateful Language Handling
The Data Toxic and Hateful Language Handling category assesses whether the documentation discusses how toxic and hateful entries were treated in the training data. The developers may choose not to remove such language, but they should provide a clear explanation for their decision (e.g. better performance or allowing customization of the final model).
No information is provided about IP handling in Data.
IP Handling in Data
This category assesses whether the documentation discusses how copyrighted entries and other types of IP were treated in the training data.
CAIT-D Requirements
California’s AI Training Data Transparency Act- (5) Whether the datasets include any data protected by copyright, trademark, or patent, or whether the datasets are entirely in the public domain.
The materials provided do not specifically address how personally identifiable information (PII) was handled in the training data. They do note that data from Claude users who opted in to share their usage data was incorporated in some way into model training, but do not discuss how, or whether, these data are fully de-identified.
Data PII Handling
The Data PII (Personally identifiable information) Handling category assesses whether the documentation discusses how PII entries were treated in the training data. While in general developers should take care to remove such data, a clear justification of why such data was not removed will suffice for this category.
CAIT-D Requirements
California’s AI Training Data Transparency Act- (7) Whether the datasets include personal information, as defined in subdivision (v) of Section 1798.140.
- (8) Whether the datasets include aggregate consumer information, as defined in subdivision (b) of Section 1798.140.
The materials mention that the models were trained on publicly available information on the Internet as of March 2025, indicating that this was the cutoff date for the training data, though no start date is provided.
Data Collection Period
Model documentation should clearly state the last date covered by the model's training data (e.g. April 2023). This information is necessary to assess the accuracy of the model outputs.
Evaluation
The materials provide extensive benchmark results showing Claude 4 models' performance on coding (SWE-bench, Terminal-bench), reasoning (GPQA Diamond), multilingual Q&A (MMMLU), visual reasoning (MMMU), and high school math competition (AIME 2020). Both models show significant improvements over previous versions and competitive performance against other leading models.
Performance Evaluation
The Performance Evaluation covers the quantitative and qualitative analysis of model capabilities. While the models considered can be used for many different applications, numerous benchmarks and protocols exist for reviewing a model's overall capabilities.
The following key documentation dimensions are reviewed:
The choice of metrics/benchmarks used is clearly explained.
Metrics on multiple dimensions of model performance are reported in an externally reproducible fashion (Links to evaluation code or externally hosted benchmarks are provided where possible, but are not required).
Qualitative examples are included to supplement the user’s understanding of model performance.
Gaps in analysis and/or an error analysis are documented to further enhance the user’s understanding of the model’s performance.
Trustible Rating Explanation
While the performance evaluation is comprehensive in terms of testing along several benchmarks, specific implementation details are not provided. It is not clear whether the reported metrics came from single runs, multiple runs with test-time adjustments, or some other setup.
EU AI Act Requirements
Annex XI Section 2.1: A detailed description of the evaluation strategies, including evaluation results, on the basis of available public evaluation protocols and tools or otherwise of other evaluation methodologies. Evaluation strategies shall include evaluation criteria, metrics.
Rating Guide
No quantitative metrics are reported.
Some quantitative metrics are reported, but evaluation methods are underspecified.
The documentation excels in one of the key documentation dimensions, but has significant gaps in other areas.
Documentation gives the reader a clear and comprehensive sense of the model’s abilities. Almost all of the key documentation dimensions are discussed.
The System Card includes an extensive section on bias evaluations that assess the models' treatment of political topics and potential discriminatory bias, among other topics. Claude Opus 4 and Claude Sonnet 4 demonstrated bias levels similar to or less than Claude Sonnet 3.7.
Evaluation of Limitations
This category reviews the types of quantitative evaluations that were reported regarding the limitations of this model. For general-purpose models, limitations are multi-faceted and can include both traditional modes (e.g. misclassification) and novel ones (e.g. generating biased content).
This rating considers the breadth of analyses performed. The following key categories should be considered by most LLM developers, but they are not a comprehensive list:
- Bias/Fairness (e.g. using DiscrimEval, BBQA, DecodingTrust or a custom benchmark)
- Factuality/Hallucination
- Safety (e.g. likelihood of generating content that violates an acceptable-use policy or evaluation related to a cybersecurity threat)
- Incorrect Refusal Rates (used to quantify the balance of safety and helpfulness)
Note: For this rating, we review whether the developers considered common limitations and published quantitative results for these categories. The broader risk assessment and adversarial testing procedure is evaluated by the ‘Adversarial Testing Procedure’ category.
Trustible Rating Explanation
This model received a High rating because limitation evaluation is provided in depth. The majority of the 123-page System Card relates to limitations.
EU AI Act Requirements
Annex XI Section 2.1: Detailed description of ...the methodology on the identification of limitations.
Rating Guide
No quantitative analysis of limitations is performed.
Evaluations on 1-2 metrics are reported, but details of the analysis or an explanation for not including additional criteria are not documented.
Evaluation related to 2-3 types of limitations is reported; details surrounding choice of metrics, implementation process and downstream implications are limited.
Evaluation on at least 3 types of limitations is reported. Details of the implementation process and an explanation of results is included. If a major category of limitations is not assessed, an explanation is given for the reasoning.
No explanation provided for this rating.
Evaluation with Public Tools
Evaluations on benchmarks should be conducted using public tools. For many benchmarks, small changes in implementation can influence the metrics and result in figures that are not directly comparable to those published for other models.
EU AI Act Requirements
Annex XI Section 2.1: [Evaluation is conducted] on the basis of available public evaluation protocols and tools
The System Card details single-turn violative request evaluations, ambiguous context evaluations, multi-turn testing, and jailbreak resistance testing using the StrongREJECT benchmark. The alignment assessment section also describes various adversarial testing procedures, including alignment faking assessment and reward hacking evaluations.
Adversarial Testing Procedure
Adversarial Testing is the process of intentionally evaluating the risks associated with the model. For general-purpose AI, the focus is usually on the likelihood of models producing harmful outputs. The testing may involve a predetermined set of inputs that are likely to produce bad outputs, manual testing by experts (i.e. red-teaming) or model assisted approaches. For this transparency evaluation, we focus on the depth of documentation. A developer may not be able to assess all risks, but they should clearly document the limitations of the implemented adversarial testing process.
The following aspects of documentation should be considered for this evaluation:
- Set of risks tested is documented and justified
- Testing process (e.g. benchmarks used or types of human red-teamers employed)
- Results from adversarial testing process are presented
- Discussion on implications of the findings and/or on limitations of the process is included.
Note: There is a small overlap between this rating and “Evaluation of Limitations”. This rating focuses on the transparency of the process, while the other evaluates the transparency of metrics. Quantitative results from a red-teaming exercise can contribute to increased ratings in both categories, but the rest of the considerations are different.
Trustible Rating Explanation
This model received a High rating because the process is described in detail along several dimensions of potential vulnerabilities.
EU AI Act Requirements
Annex XI Section 2.2: Where applicable, a detailed description of the measures put in place for the purpose of conducting internal and/or external adversarial testing (e.g., red teaming).
Rating Guide
No adversarial testing efforts are disclosed.
The adversarial testing process is described in broad terms OR the absence of an adversarial testing process is acknowledged, but no justification is provided.
The adversarial testing process is described in some detail, including the types of risks that were evaluated and the general approach for testing. However, some details are missing from the documentation, making it difficult to ascertain the full extent of testing. A model with no adversarial testing can earn this rating, if the decision is clearly justified and implications for downstream users are documented.
A detailed description of the adversarial testing process is included and covers all four aspects outlined above. To achieve 'High Transparency' the documentation should allow an external party to assess risk across multiple dimensions.
The System Card describes the iterative model evaluations throughout training to understand how catastrophic risk-related capabilities evolved over time. They tested multiple different model snapshots and implemented appropriate safeguards, with Claude Opus 4 being deployed with ASL-3 safeguards and Claude Sonnet 4 with ASL-2 safeguards. The actual mitigation techniques are described in broad terms; they included post-training using Constitutional AI and "training for specific characteristics".
Model Mitigations
Model mitigations are steps taken to reduce risks associated with a model. For example, a model can specifically be fine-tuned to recognize inappropriate inputs and refuse to respond. Understanding implemented adaptations is important for recognizing risks associated with the model.
The exact set of risks will depend on the type of model. Since risks evolve over time, we review whether some set of mitigated and unmitigated risks was considered; we do not evaluate against a specific set of risks.
Because this is a transparency rating, we evaluate the documentation for clarity around both implemented mitigations and remaining risks. If adaptations were not implemented, developers should clearly disclose that and provide guidance to downstream users.
Trustible Rating Explanation
This model received a Medium rating because model mitigations are a well-documented feature of the current model release, but the documentation focuses mostly on the conclusion that the mitigations work (to some extent, relative to pre-mitigation models); how this process works and how effective the mitigations are is not reported.
EU AI Act Requirements
Annex XI Section 2.2: Where applicable, a detailed description of ...model adaptations, including alignment and fine-tuning.
Rating Guide
No mitigations are documented, and no justification is given.
Implemented model mitigations are described in general terms. For example, the use of RLHF is mentioned, but no additional details are provided.
Specific model mitigations are documented but the effect of the adaptations is not measured. For risks that are not addressed by adaptations, some guidance is provided to downstream users. A model with no adaptations can earn this rating, if the documentation clearly states that no adaptations were implemented and briefly makes the user aware of the implications.
Model mitigations are documented in detail, and the effect of these adaptations is evaluated through examples and/or quantitative analysis. For risks that are not addressed by mitigations, detailed guidance is provided for downstream users. A model with no adaptations can earn this rating, if the documentation clearly states that no adaptations were implemented AND provides a detailed justification and guidance for downstream users.