According to estimates from Goldman Sachs, generative AI (GenAI) will constitute 10-15 percent of cloud spending by 2030, or a forecasted $200-300 billion (USD). The public cloud serves as the perfect vessel for delivering AI-enabled applications quickly, cost-effectively, and at scale. For organizations looking to profit from AI’s potential, the path effectively travels through the cloud.
For cloud security teams on the ground, however, the impact of AI can seem complicated. Understanding the challenges it presents, and the key capabilities it enables, can help them work smarter and more effectively. This article explores the three ways cloud security teams should think about AI to enhance protections, improve efficiency, and address resource constraints.
1. Apply cloud security best practices to AI services
A decade ago, cloud computing fundamentally transformed how businesses and industries operate, but it also introduced significant security challenges. Cloud provider services often defaulted to insecure settings in an effort to prioritize the ease and speed of cloud development. It wasn’t an oversight on their part, but a conscious decision to make their platforms easier to use and eliminate the effort of configuring services.
Organizations encountered security risks as a result. Many exposed storage buckets to the public Internet, even though very few intended to make their data accessible to the entire world. Misconfigurations surfaced regularly across numerous settings, expanding attack surfaces and raising the potential for severe security incidents.
Fast-forward to today, and we see the same problem returning with AI: vendors are introducing services that favor ease of development and deployment over security.
Recent research offers evidence. The 2024 State of AI Security Report analyzed AI security risks in cloud services and found numerous examples of misconfigurations affecting most organizations. Among them, nearly every organization in the study (at least 98 percent) had yet to enable encryption with self-managed keys, increasing the likelihood that attackers could exploit exposed data. This finding applied to the three largest cloud providers and their AI services.
To address these risks, cloud security teams must:
- Configure settings with security in mind. Teams must address the default settings that lead to security misconfigurations. This ensures their organizations don’t encounter the same problems the cloud industry first experienced a decade ago.
- Gain and sustain full visibility. Defenders need to build and maintain a full inventory of the AI models, packages, data, and associated risks present in their cloud estate.
- Adapt to early-stage challenges. Practitioners must discover the safe boundaries of AI innovation so they can both protect and enable it.
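The visibility step above can be sketched as a simple inventory check. This is a minimal illustration, not a real provider integration: the resource fields and names below are hypothetical, and in practice the inventory would come from your cloud provider's asset and configuration APIs.

```python
# Hypothetical sketch: flag cloud AI resources that still rely on
# provider-default encryption rather than a customer-managed key (CMK).
# The inventory format is an assumption for illustration only.

def find_default_encrypted(resources):
    """Return names of resources with no customer-managed key configured."""
    return [
        r["name"]
        for r in resources
        if not r.get("customer_managed_key_id")  # missing/None => provider default
    ]

inventory = [
    {"name": "notebook-prod", "customer_managed_key_id": "key-1234"},
    {"name": "training-bucket", "customer_managed_key_id": None},
    {"name": "model-registry"},  # key field absent entirely
]

print(find_default_encrypted(inventory))  # → ['training-bucket', 'model-registry']
```

Running a check like this on a schedule, and alerting on any newly created AI resource that lacks a customer-managed key, turns the one-time audit into the sustained visibility the bullet describes.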
Cloud defenders must also prepare for future shifts in cloud and AI security. Multi-cloud adoption will accelerate as some cloud providers develop more advanced AI capabilities than their peers, so teams should adopt tools and approaches that work consistently across providers.
2. Recognize that attackers are weaponizing generative AI (GenAI)
We already see attackers weaponizing generative AI (GenAI) to automate workflows and build advanced attack chains. This commoditization continues to spread across the threat landscape.
Recognizing this continues to fuel the trend in…