
We are thrilled to announce a major breakthrough in AI security with the release of Upwind GenAI Security. AI is transforming industries at an unprecedented pace, but without the right security measures, it becomes an ungoverned risk. Organizations need purpose-built protections that evolve with the complexity of AI workloads.
This is a first-of-its-kind solution that bridges the critical security gaps in AI workloads, where traditional tools fall short. While existing cloud security solutions focus on infrastructure and application security, they lack the depth to address AI-specific risks such as unauthorized model access, data leakage, and AI-powered cyber threats. Upwind GenAI Security changes the game with deep runtime visibility through eBPF, AI-specific threat detection, and proactive risk mitigation, delivering AI security coverage no other solution can match.

With dynamic baselining, Upwind continuously learns normal AI workload behaviors, enabling it to detect anomalies and potential threats with greater accuracy, reducing false positives and ensuring security teams focus on real risks.
This expansion of the Upwind platform empowers organizations with purpose-built security to protect GenAI services and AI workloads from unauthorized AI communications and potential AI resource misuse.
Why GenAI Workload Security Matters
As organizations increasingly adopt GenAI services, new security challenges have emerged that traditional security tools fail to address. AI workloads introduce unique risks that require specialized security controls, including:
- Data Leakage & Exposure: AI models often process sensitive data, and improper configurations or interactions with external AI services can result in unintentional data exposure.
- AI Model Manipulation & Tampering: Threat actors can attempt to manipulate or poison AI models, leading to biased outputs, security vulnerabilities, or unauthorized model modifications. For example, our research team showed over a year ago how they could take control of LLMs in a GenAI chatbot and execute a command & control attack, effectively using the compromised AI to relay commands, extract sensitive data, and bypass traditional security measures undetected.
- Uncontrolled AI API Usage: Organizations need visibility into which workloads interact with AI services to prevent unauthorized data flows and potential misuse of AI-powered applications.
- Cloud-Native AI Security Gaps: Existing cloud security solutions often overlook AI-specific threats, such as insecure model endpoints, excessive IAM permissions for AI workloads, and improper access controls.
Upwind GenAI Security addresses these challenges by providing deep visibility, threat detection, and compliance monitoring tailored specifically for GenAI services.
How Upwind Protects GenAI Workloads
Upwind’s GenAI Security solution solves these challenges by delivering deep visibility through the Upwind eBPF sensor and layering on additional AI-specific security controls. Upwind provides comprehensive GenAI workload protection through four key capabilities:
- GenAI-Specific Communication Path Visibility: Map and visualize outbound AI service communications across cloud environments, monitoring which workloads interact with external AI models such as OpenAI, AWS Bedrock, Azure OpenAI, and GCP Vertex AI. This ensures visibility into AI data flows and prevents unauthorized access.
- GenAI Cloud Security Posture Management (CSPM): Detect misconfigurations in AI-related services that could lead to data leaks, insecure deployments, or unauthorized AI model access. This includes identifying publicly accessible AI endpoints, insufficient IAM role restrictions, and lack of version control for AI models.
- GenAI-Specific Threat Detection: Upwind identifies real-time threats to GenAI workloads and detects abnormal activities such as unauthorized AI model modifications, suspicious outbound AI API usage, and unexpected workload process executions.
- Sensitive Data Discovery in AI Service Interactions: Inspect outbound API request payloads to detect potential data leaks when interacting with external AI services. Through regex-based scanning and AI-powered payload analysis, Upwind ensures that sensitive data—such as PII, API keys, or proprietary information—is not inadvertently exposed.
With this unique combination of GenAI-specific capabilities and controls, Upwind protects AI workloads from misconfigurations, unauthorized access, emerging threats, and potential data exposure risks. Below, we dive into each of these new capabilities, highlighting relevant use cases along the way.
1. Visualizing GenAI-Specific Communication Paths
Understanding how your workloads interact with external AI services is crucial for maintaining a secure environment. Through the use of our high-performance eBPF sensor, Upwind maps and visualizes real-time communication paths between your cloud resources and GenAI services. Upwind identifies and categorizes traffic to GenAI services in real time, providing deep visibility into AI-related traffic flows. This enables organizations to easily:
- Identify Unauthorized AI Usage: Detect abnormal or unauthorized interactions with AI services that could pose security or compliance risks.
- Assess Dependencies and Data Flows: Understand how data moves between workloads and AI services to ensure sensitive information isn’t inadvertently exposed.

Upwind provides this real-time visualization of communication paths through DNS-based mapping, a lightweight, scalable approach to tracking AI service interactions that avoids intrusive network inspection. This gives organizations real-time insight into their AI data flows without the overhead of full packet inspection, ensuring both visibility and operational efficiency. We identify the DNS ranges of AI services – such as OpenAI, AWS Bedrock, Azure OpenAI, and GCP Vertex AI – and analyze outbound requests, allowing organizations to quickly identify which workloads are interacting with AI services across AWS, Azure, and GCP, ensuring real-time visibility into AI-related communications while reducing security blind spots.
2. GenAI-Specific Cloud Security Posture Management (CSPM) Findings
Misconfigurations in AI services can lead to unauthorized access, data leaks, or insecure deployments. Upwind’s GenAI Security addresses these challenges by identifying and mitigating misconfigurations specific to GenAI services across AWS, Azure, and GCP. Some common misconfigurations detected include:
- Publicly Accessible AI Model Endpoints: For instance, an Amazon SageMaker or GCP Vertex AI model endpoint that is publicly accessible increases the risk of unauthorized inference requests and data exposure. Upwind detects these exposures and recommends restricting access to private networks or authenticated users.
- Lack of Version Control in AI Models: For example, an AI model in SageMaker without version control in the model registry could make it challenging to audit changes or revert to stable releases, leading to compliance risks and unauthorized modifications. Upwind identifies these instances and advises implementing model versioning and maintaining an audit trail.
- Unrestricted IAM Roles for AI Services: For example, deploying AWS Bedrock services without IAM role restrictions can allow unauthorized AI resource usage and data exposure. Upwind highlights these configurations and suggests applying least privilege access policies to restrict access to foundation models.

By detecting common GenAI-related misconfigurations, Upwind enhances organizations’ cloud security posture management, ensuring that AI services are securely configured and compliant with industry standards.
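The misconfiguration checks above can be sketched as simple rules over normalized inventory records. The record fields below (`vpc_config`, `registered_versions`, `publicly_accessible`) are hypothetical names for illustration; a real scanner would populate equivalent attributes from the cloud providers' APIs.

```python
def check_sagemaker_model(model: dict) -> list[str]:
    """Return misconfiguration findings for one (assumed) model record."""
    findings = []
    # No VPC attachment means the model's containers run outside a private network.
    if not model.get("vpc_config"):
        findings.append("model not attached to a private VPC")
    # Absence of registry versions makes changes hard to audit or roll back.
    if not model.get("registered_versions"):
        findings.append("no version control in the model registry")
    return findings


def check_ai_endpoint(endpoint: dict) -> list[str]:
    """Flag endpoints exposed beyond private networks or authenticated users."""
    findings = []
    if endpoint.get("publicly_accessible"):
        findings.append("AI model endpoint is publicly accessible")
    return findings
```

Expressing each posture rule as a pure function over inventory data keeps the checks easy to test and extend as new GenAI services appear.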
3. GenAI-Specific Threat Policies
To protect against advanced threats targeting AI services, Upwind’s GenAI Security includes tailored threat detection policies. These detections build on Upwind’s extensive existing policy mechanisms, incorporating AI-specific threat indicators to enhance risk assessment and improve detection accuracy. These include detections such as:
- eBPF-Based Detections: For example, monitoring workloads running AI-related processes (e.g., Spark, Llama) that unexpectedly spawn reverse shells, which could indicate remote code execution or AI model tampering via shell access.
- Network Detections: Identifying workloads communicating with external AI services, which could suggest data exfiltration, unauthorized model queries, or API abuse. Upwind’s dynamic baselining capabilities monitor for abnormal communication patterns and alert organizations to suspicious or malicious activity.
- Cloud Logs-Based Detections: Utilizing cloud logs to monitor and identify security events related to AI services, such as modifications to AWS SageMaker models that make them publicly accessible, potentially leading to data leakage or unauthorized usage of proprietary AI algorithms.

Upwind’s GenAI-tailored threat policies empower security teams to proactively detect and respond to threats targeting AI workloads in real time. For example, Upwind can detect when an AI model is being manipulated through adversarial inputs or when an unauthorized workload is attempting to exfiltrate training data to an external location. These real-world attack scenarios highlight the critical need for AI-specific threat detection capabilities that go beyond traditional security measures, ensuring proactive security and advanced threat detection and response for GenAI workloads.
4. GenAI-Specific API Sensitive Data Discovery
As AI services become more integrated into business operations, monitoring outbound communications is essential to ensure that sensitive data isn’t unintentionally being sent to external AI providers. To solve this, Upwind analyzes API request payloads when an outbound connection to an AI service is detected. This capability is implemented in two ways:
- Regex-Based Detection: With this initial release, Upwind scans outbound API payloads using predefined regex patterns to detect sensitive data types such as emails, credit card numbers, API keys, and personally identifiable information (PII). This approach empowers organizations to identify potential sensitive data exposures in API traffic and proactively remediate them.
- AI-Powered Payload Analysis: In the coming months, Upwind will employ AI-based models to analyze API payloads, automatically identifying complex data leakage patterns, including contextual analysis of API requests. This will allow us to enhance our detection of GenAI-based sensitive data exposure even further by identifying obfuscated or encoded sensitive data.
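The regex-based scan described above can be sketched in a few lines. The patterns here are deliberately simplified examples of the technique, not production-grade detectors.

```python
import re

# Simplified example patterns for common sensitive data types (illustrative).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}


def scan_payload(payload: str) -> list[str]:
    """Return the sensitive data types detected in an outbound API payload."""
    return [
        name
        for name, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(payload)
    ]
```

Running each outbound payload through a small pattern table like this is cheap enough to do inline, which is why regex scanning is a natural first tier before heavier AI-powered contextual analysis.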

Upwind’s GenAI-specific API sensitive data discovery and tracking ensures that organizations’ sensitive data remains protected when interacting with external AI services, mitigating the risk of unintentional data leaks.
The Next Era of AI Security Begins with Upwind
Upwind GenAI Security is the industry’s most comprehensive AI workload protection suite, delivering real-time visibility, threat detection, and misconfiguration prevention for AI services.
This isn’t just a hyped feature drop – this is a real-world solution designed to secure the AI workloads powering modern enterprises. For example, Upwind recently helped an ecommerce firm identify and mitigate unauthorized AI API usage that was leaking sensitive customer data, showcasing how our real-time visibility and AI-specific threat detection translate directly into stronger security outcomes.
With Upwind, organizations can:
- Monitor AI service interactions in real time
- Detect and prevent AI-based security threats
- Enforce least privilege access for AI workloads
- Ensure compliance and protect sensitive data
Your AI workloads deserve real security. Upwind delivers it.
Don’t wait – secure your AI workloads today. Schedule a demo now to see Upwind GenAI Security in action and take the next step in AI-driven protection.