A new report from cybersecurity firm Tenable reveals that a majority of AI services running in cloud environments are riddled with unresolved vulnerabilities, exposing businesses to potentially catastrophic security threats.
The Cloud AI Risk Report 2025, released March 20, found that 70% of cloud-based AI workloads contain at least one unremediated critical vulnerability. These flaws could allow attackers to manipulate AI models, tamper with sensitive data or leak proprietary information.
“Cloud and AI are undeniable game changers for businesses. However, both introduce complex cyber risks when combined,” the report said.
Among the most alarming findings: 30% of scanned AI workloads contained CVE-2023-38545, a critical heap-based buffer overflow in curl's SOCKS5 proxy handshake. Disclosed and fixed (in curl 8.4.0) in 2023, the flaw remains unpatched in numerous deployments; an attacker-controlled server can trigger it by redirecting curl to an overly long hostname, opening a path into sensitive environments.
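The affected range is concrete enough to check for. As a minimal sketch (the version bounds come from the public CVE record, not from Tenable's report, and the helper name is hypothetical), a few lines of Python can flag curl versions inside the vulnerable window:

```python
# Hypothetical helper, not from the report: CVE-2023-38545 affects
# curl 7.69.0 through 8.3.0; version 8.4.0 carries the fix.

def parse_version(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple, e.g. "8.2.1" -> (8, 2, 1)."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(version: str) -> bool:
    """True if the given curl version falls in the affected range."""
    v = parse_version(version)
    return parse_version("7.69.0") <= v < parse_version("8.4.0")

print(is_vulnerable("8.2.1"))  # True: inside the affected range
print(is_vulnerable("8.4.0"))  # False: first fixed release
```

In practice the input would come from `curl --version` on each workload; the point of the sketch is that the check is a simple version comparison, not a code audit.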
According to Tenable, such risks are often worsened by what it terms “Jenga-style” cloud misconfigurations — when foundational cloud services inherit risky defaults that cascade through dependent layers. In Google Cloud’s Vertex AI Workbench, for example, 77% of organizations had at least one notebook instance configured with an overprivileged Compute Engine service account.
“Cloud security measures must evolve to meet the new challenges of AI and find the delicate balance between protecting against complex attacks on AI data and enabling organizations to achieve responsible AI innovation,” said Liat Hayun, vice president of research and product management for cloud security at Tenable.
Tenable’s analysis covered data from major cloud providers including Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. The research examined telemetry collected from active cloud workloads and configurations across enterprise clients globally between December 2022 and November 2024.
The report highlighted other high-risk defaults:
- 91% of AWS SageMaker users had at least one notebook with root access enabled, allowing full control of the system if compromised.
- 14% of organizations using Amazon Bedrock had not blocked public access on at least one AI training bucket.
- 5% had overly permissive access controls on training data storage, increasing the risk of data poisoning and manipulation.
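Each of these defaults has a straightforward remediation. As an illustrative sketch (bucket and notebook names are placeholders, not drawn from the report), the AWS CLI can block public access on a training bucket and disable root access on a SageMaker notebook:

```shell
# Hypothetical remediation sketch; resource names are placeholders.
# Block every form of public access on a Bedrock training bucket.
aws s3api put-public-access-block \
  --bucket example-bedrock-training-data \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Disable root access on a SageMaker notebook instance
# (the instance must be stopped before it can be updated).
aws sagemaker update-notebook-instance \
  --notebook-instance-name example-notebook \
  --root-access Disabled
```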
These conditions leave sensitive AI models — often trained on personal data, proprietary algorithms or business-critical insights — exposed to theft or corruption. A successful attack could undermine model outputs, damage customer trust, or even halt mission-critical operations.
Tenable’s findings echo broader industry concerns about the convergence of cloud computing and AI, both rapidly adopted technologies. According to McKinsey’s 2024 global survey, 72% of organizations had integrated AI in at least one business function, up from 55% the previous year.
Microsoft Azure led in cloud AI adoption, with 60% of organizations using Azure Cognitive Services. In contrast, 25% of AWS users had enabled SageMaker, and 20% of GCP users deployed Vertex AI Workbench.
The report stresses the importance of secure-by-design principles and urges companies to adopt a Cloud-Native Application Protection Platform (CNAPP) approach. It also recommends organizations prioritize remediation based on impact, adopt least privilege access controls, and continuously monitor for misconfigurations.
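As one illustration of least privilege applied to training data (the bucket name and prefix are hypothetical, not from the report), an AWS IAM policy can restrict a role to read-only access on a single training prefix:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListTrainingPrefixOnly",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-training-data",
      "Condition": { "StringLike": { "s3:prefix": "training/*" } }
    },
    {
      "Sid": "ReadTrainingObjects",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-training-data/training/*"
    }
  ]
}
```

Splitting the statements keeps `s3:ListBucket` scoped to the bucket ARN and `s3:GetObject` to object ARNs, which is how S3 expects the two action types to be targeted; a role holding only this policy can read training data but not modify or poison it.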
“Securing your cloud AI workloads requires a robust, AI-conscious protection strategy that defines sensitivity and contextualizes risk across your cloud infrastructure,” the report said.
Tenable, which serves over 44,000 customers worldwide, has positioned itself at the forefront of exposure management by identifying gaps in security visibility and providing actionable insights to reduce risk across IT and cloud environments.
For more information, the full Cloud AI Risk Report 2025 is available at tenable.com.