Purpose of the Statement

RadarFirst is committed to transparency and responsible AI use in all its products. This statement explains why and how we use large language models (LLMs) in our applications, ensuring clients and internal stakeholders understand our rationale and safeguards. We believe that openly sharing this information builds and reinforces trust and accountability, aligning with our values of security, privacy, and ethical technology use.

Model Identification

  • Model: Anthropic’s Claude 4.0 Sonnet
  • Provider: Accessed via AWS Bedrock
  • Capabilities: Supports a context window of up to 200,000 tokens, enabling long-context reasoning and structured output
  • Hosting & Security: AWS-managed cloud API ensures enterprise-grade infrastructure, encryption, and compliance

Use Case Overview

Radar leverages Claude 4.0 Sonnet in two key ways within our platform:

1. Radar AI Risk™ – assists users in evaluating the risk that AI systems pose to an organization, mapping applicable laws and regulations, and flagging high-stakes compliance concerns through AI chatbot interaction and structured analysis. 

2. Radar Controls™ – enables intelligent mapping of privacy and security controls to laws, rules, or regulations (LRRs), and supports identification and research of applicable regulations through an AI chatbot.

In both applications, the AI is strictly assistive. It suggests, extracts, or aligns information but does not make authoritative or final decisions. Human users remain accountable for verifying outputs before submission.

Rationale for Model Selection

All reasoning here applies to both AI Risk and Controls: 

  • Ease of Integration: Using Claude 4.0 via AWS Bedrock allowed seamless embedding into our existing AWS-hosted platform without developing custom AI infrastructure. 
  • Strong Context Retention: The model’s large context window is essential for processing lengthy incident reports (AI Risk) or dense regulatory texts (Controls). 
  • Reliable Structured Output: Both products rely on the model’s ability to return results in consistent, structured formats (e.g., JSON, summaries, rule mappings). 
  • Fast Deployment Timeline: The combination of Anthropic’s pre-trained model and AWS Bedrock’s API allowed rapid rollout of AI-enhanced features. 
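
As a minimal sketch of the integration pattern described above, the example below builds a request for the AWS Bedrock Runtime `converse` API that asks the model for a structured (JSON) control mapping. The model identifier, system prompt, and inference settings are illustrative assumptions, not RadarFirst's production configuration.

```python
import json

# Assumed model identifier -- the exact Bedrock model ID is an illustration.
MODEL_ID = "anthropic.claude-sonnet-4-20250514-v1:0"

def build_mapping_request(regulation_text: str) -> dict:
    """Build keyword arguments for a bedrock-runtime `converse` call
    that asks the model to return a structured (JSON) control mapping."""
    system_prompt = (
        "You map privacy and security controls to laws, rules, or regulations. "
        "Respond only with JSON of the form "
        '{"controls": [{"id": "...", "rationale": "..."}]}.'
    )
    return {
        "modelId": MODEL_ID,
        "system": [{"text": system_prompt}],
        "messages": [
            {"role": "user", "content": [{"text": regulation_text}]}
        ],
        # Bounded, low-temperature generation keeps the output format consistent.
        "inferenceConfig": {"maxTokens": 1024, "temperature": 0.0},
    }

# In production the request would be sent with:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**build_mapping_request(text))
request = build_mapping_request("Art. 33 GDPR: notification of a personal data breach")
print(json.dumps(request["inferenceConfig"]))
```

Because the request is assembled as plain data before the API call, the structure (system prompt, message roles, inference settings) can be unit-tested without AWS credentials.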

Safety Considerations

Both products share a common architecture and safety approach: 

  • Built-in Guardrails: Claude 4.0 incorporates Anthropic’s Constitutional AI, designed to reject harmful, biased, or unethical instructions. 
  • Prompt Injection Mitigation: We sanitize inputs and use Bedrock’s built-in abuse detection to prevent manipulation. 
  • Session/User Limits: Planned usage caps at the session and user level will help prevent abuse or prolonged misdirection. 
  • Layered Protections: Protections include secure prompt engineering, content moderation, and access controls; in addition, AWS Bedrock processes our requests in a virtual private cloud (VPC) isolated from the public internet. 
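
The input sanitization mentioned above can be sketched as a simple pre-processing gate. The deny-list patterns, length cap, and function name below are illustrative assumptions; a production filter would be broader and actively maintained.

```python
import re

# Illustrative patterns only -- a real deny-list would be broader and maintained.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
]

MAX_INPUT_CHARS = 8000  # assumed cap; also limits prompt-stuffing attacks

def sanitize_user_input(text: str) -> str:
    """Truncate oversized input and reject likely injection attempts
    before the text is interpolated into a model prompt."""
    text = text[:MAX_INPUT_CHARS]
    for pattern in INJECTION_PATTERNS:
        if pattern.search(text):
            raise ValueError("Input rejected: possible prompt injection")
    # Strip control characters that could break prompt delimiters.
    return "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")

print(sanitize_user_input("Map GDPR Art. 33 to our incident response controls."))
```

A gate like this complements, rather than replaces, Bedrock's own abuse detection: it runs before any tokens reach the model.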

Ethical Considerations

  • Bias Mitigation: Claude’s constitution draws from human rights frameworks to reduce discriminatory output. Both AI Risk and Controls use tightly scoped prompts grounded in legal and regulatory source material to reduce reliance on the model’s open-ended generation. 
  • Fairness Audits: Anthropic conducts red-teaming of its models; RadarFirst has not yet performed internal fairness audits, but they are planned. 
  • Ethical Boundaries: Model restricted to product-specific domains (privacy/compliance). Requests for off-topic or unethical guidance are rejected. 
  • Accountability: Final responsibility for ethical use and accuracy lies with the RadarFirst AI Steering Committee and designated product owners. 

Privacy and Data Handling

Applies equally to both AI Risk and Controls: 

  • No Model Training on Customer Data: Neither Radar nor Anthropic uses client data to train the model. All prompts and completions are ephemeral. 

  • Encryption and Secure Hosting: Communications occur over HTTPS and are processed in a VPC via AWS Bedrock, with KMS-managed encryption. 

  • Compliance: Claude 4.0 Sonnet, when used via Amazon Bedrock, inherits compliance with ISO 27001, SOC 2, HIPAA (with a signed BAA), and GDPR. In AWS GovCloud (US), Claude 4.0 Sonnet is also authorized for FedRAMP High and DoD IL4/5 workloads, subject to appropriate customer configuration and usage. 

  • Access Controls: AI-generated outputs are stored securely, with strict role-based access enforced on a per-user, per-incident basis. 

  • Annual Penetration Testing: Radar will conduct annual penetration testing based on an industry-standard program design, with the first assessment scheduled for September 2025. 

Evaluation and Monitoring

Shared across products:

  • Feedback Loops: Users have the final say: they can confirm the accuracy of data-collection summaries and can descope or override AI-generated results. 
  • Quality Audits: Staff compare AI-assisted vs. non-AI outputs to detect errors or misinterpretations. 
  • User Feedback: We collect feedback from users to track value and usability. 
  • Automated Logging and Drift Detection: Logs are analyzed to monitor behavior and detect output drift, using tools like Langfuse. Since the model is not trained on user input, drift will only occur if the model is upgraded. 
  • Misuse Detection: Monitored for anomalies and abuse; AWS Bedrock adds an extra detection layer. 
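
Since the document notes that drift can only arise from a model upgrade, the drift-detection idea above can be sketched as a golden-prompt regression check: re-run a fixed set of prompts after any model change and flag answers that differ from an accepted baseline. The prompts, fingerprinting scheme, and function names are illustrative; in production, tooling such as Langfuse would handle the logging and comparison.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Stable fingerprint of a model answer, for cheap comparison."""
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()

# Golden baseline: fixed prompts mapped to fingerprints of accepted answers.
GOLDEN = {
    "Which regulation governs EU breach notification deadlines?":
        fingerprint("The GDPR (Art. 33) requires notification within 72 hours."),
}

def detect_drift(run_model, golden: dict) -> list:
    """Re-run each golden prompt and report those whose answer changed.

    `run_model` is a stand-in for the real Bedrock call."""
    return [p for p, fp in golden.items() if fingerprint(run_model(p)) != fp]

# Simulated model returning the baseline answer: no drift reported.
stable = lambda p: "The GDPR (Art. 33) requires notification within 72 hours."
print(detect_drift(stable, GOLDEN))  # -> []
```

Exact-match fingerprints are deliberately strict; a real pipeline might instead score semantic similarity so that harmless rephrasings are not flagged as drift.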

Limitations

Both AI Risk and Controls face the same model-level risks:

  • Hallucinations: May produce confident but incorrect responses; mitigated through human oversight and prompt engineering. 
  • Context Management: Long sessions may degrade consistency; session limits and periodic recaps are planned. 
  • Misinterpretation: The model may misclassify or mislabel input; editable suggestions and human review ensure final correctness. 
  • Human Oversight: Both features are built with a “human-in-the-loop” model — users must validate and approve AI-generated outputs before action. 
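
The human-in-the-loop model described above can be sketched as a data structure in which AI output is inert until a person explicitly approves it. The class and field names are illustrative, not RadarFirst's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class AISuggestion:
    """An AI-generated result that stays inert until a human approves it."""
    content: str
    approved: bool = False
    edits: list = field(default_factory=list)

    def revise(self, new_content: str) -> None:
        # Users can edit a suggestion before approving it.
        self.edits.append(self.content)
        self.content = new_content

    def approve(self) -> None:
        # Only an explicit human action releases the content for use.
        self.approved = True

def apply_to_record(suggestion: AISuggestion) -> str:
    """Gate that enforces human approval before any action is taken."""
    if not suggestion.approved:
        raise PermissionError("AI output must be human-approved before action")
    return suggestion.content

s = AISuggestion("Control C-12 maps to breach-notification requirements")
s.revise("Control C-12 maps to Art. 33 GDPR breach notification")
s.approve()
print(apply_to_record(s))
```

Enforcing the check in `apply_to_record`, rather than trusting callers, makes the approval requirement structural: unapproved output cannot reach a record by any code path.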

Future Intentions

Shared roadmap:

  • Model Flexibility: We may migrate to future Claude versions or other models via simple API changes in AWS Bedrock. 
  • Cost Optimization: Potential for lightweight models for low-risk tasks (e.g., data extraction). 
  • Custom Fine-Tuning (Optional): Under exploration, contingent on data privacy, regulatory requirements, and client opt-in. 
  • Ongoing Safety Alignment: Continuous monitoring of NIST, ISO, and vendor guidance to evolve practices responsibly. 

RadarFirst is integrating Claude 4.0 Sonnet into the Radar AI Risk™ and Radar Controls™ products with rigorous attention to transparency, safety, privacy, and ethical use. The model enhances our platform’s value without compromising user trust or data security. This statement outlines how and why we use AI in these products and will evolve as our implementations grow.

We welcome feedback and remain committed to clarity, caution, and continuous improvement in our use of AI technologies.
