Shadow AI Is the Fastest-Growing Blind Spot in Your Tech Stack
Shadow IT is so 2020. These days, it seems like every software vendor is bringing an AI-powered product to market, and as every IT pro knows, signing up for a sleek AI tool always comes with strings attached. Unseen, unapproved, and unmonitored, AI features pose a significant threat to company data and customer privacy, plus a laundry list of other terrible things too stressful to put in a blog post.
That’s why we brought together a handful of our favorite IT leaders for a town hall on what’s to be done about this whole Shadow AI business. Productiv’s own VP of Information Security, Josh Mullis, was joined by Snap’s Suktika Mukhopadhyay, a seasoned cybersecurity veteran, to outline the history, the risks, and a framework for combating Shadow AI.
Naturally, we recorded the whole thing and have published it here for your perusal. Why? Because we care about you and your company’s security, you sweet little IT ragamuffin, you.
The full video and transcript are below, in case you don’t subscribe to HBO Max. But, if you’re short on time, we pulled out some highlights too.
Josh begins with a quick walk down memory lane, calling to mind the simpler times when all you had to worry about was whether your Head of Marketing paid for a blog subscription pop-up plugin with a company card. Times have changed, and while you may now have full visibility into your SaaS stack, tech companies are routinely rolling out new features, updating T&Cs, and letting their AI capabilities loose inside your perimeter. Enter: Shadow AI.
Suktika illustrates the complexity of the problem with the example of open source technologies. While vendors whose AI capabilities are printed right there on the tin may give the savvy IT pro a chance to conduct a thorough review and risk assessment, things get squishier with open source models and the more advanced AI operations whose model reasoning isn’t there for you to inspect. DeepSeek is an instructive example, because for Suktika, the evaluation of a new AI capability all comes down to risk.
Suktika outlines a four-step framework for evaluating the risk Shadow AI presents:
- Identify your risk tolerance
- Stack rank SaaS products by usage and impact
- Assess visibility into AI capabilities
- Identify authorization mechanisms
Great! That’s only 4 things! Shadow AI handled, right? Unfortunately, SaaS companies aren’t always (or ever) forthcoming about the AI capabilities that come along with their product, meaning it’s déjà vu all over again when it comes to SaaS visibility. Suktika prescribes getting specific with vendor legal teams and weighing potential benefits against perceived risk.
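Curious what the “stack rank” step could look like in practice? Here’s a minimal, purely illustrative sketch in Python. The app names, scores, and weighting below are all made up (they’re not Productiv data or APIs); the point is simply that a rough usage-times-sensitivity score is enough to surface the ten or twenty apps that deserve a closer look first.

```python
# Illustrative only: made-up apps and scoring, not Productiv data or APIs.
# Stack-ranks SaaS apps by usage and business impact so you know which
# ten or twenty deserve a closer look for embedded AI features.

from dataclasses import dataclass

@dataclass
class SaasApp:
    name: str
    monthly_active_users: int   # usage signal
    data_sensitivity: int       # 1 (low) to 5 (handles regulated or customer data)
    ai_features_known: bool     # do we know whether it ships AI features?

def risk_priority(app: SaasApp) -> float:
    """Higher score = review sooner. Unknown AI exposure bumps priority."""
    unknown_ai_penalty = 1.5 if not app.ai_features_known else 1.0
    return app.monthly_active_users * app.data_sensitivity * unknown_ai_penalty

inventory = [
    SaasApp("CRM Platform", 1200, 5, True),
    SaasApp("Meeting Notes Tool", 300, 4, False),
    SaasApp("Design Whiteboard", 150, 2, False),
]

for app in sorted(inventory, key=risk_priority, reverse=True):
    print(f"{app.name}: priority score {risk_priority(app):,.0f}")
```

Swap in whatever usage and sensitivity signals you actually have; the scoring itself matters far less than ending up with an ordered list to work through.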
There’s plenty more where that came from, so if you’re thirsty for details, check out the full conversation below, in both stunning 4K resolution and good old-fashioned .txt.
Full Transcript:
Josh Mullis (00:01):
Genuinely, thank you all for being here. I see a lot of great faces—some new, some familiar. I’m really excited about the topic we’re diving into tonight.
For those of you who don’t know me, I’m Josh Mullis, VP of Information Security at Productiv. I want to walk through a quick agenda so you know what to expect.
You may have walked in and thought, “Wow, there are a lot of cameras. This isn’t what I expected.” Apologies—we’ve got to feed the marketing monster!
But the goal tonight isn’t for Sukhi and me to sit up here and talk at you for an hour. We’ll speak for five to ten minutes to set the stage with a framework for how we’re thinking about Shadow AI—because that’s what we’re here to discuss. Then we’ll open it up. We want this to be a collaborative dialogue. We’ve got a room full of incredible experience—CIOs, CTOs, CISOs—so let’s use that.
We want to hear your experiences, your challenges, and the solutions you’re thinking about.
Before we get into it: who are we? What’s the context for this event? Again, this isn’t a sales pitch—many of you know I promised that. But here’s why it matters: Productiv is a SaaS management platform. We help you gain visibility into your SaaS portfolio—understanding features, usage, and license allocation—so you can make informed decisions about onboarding, renewals, and retirements.
Governance around SaaS becomes incredibly relevant in the Shadow AI space, because a lot of AI shows up within SaaS tools.
But enough about me and Productiv. Much more interestingly, we have Sukhi from Snap—most famous for Snapchat, but also doing a ton in augmented reality. She has a deep background in cybersecurity. Sukhi, want to introduce yourself?
Suktika Mukhopadhyay (01:55):
Yeah, sure. I’ve been in the space for about 15 years, focusing on cybersecurity strategy and risk management. Most recently, I’ve been leading Snap’s Privacy Engineering team.
At Snap, we take the privacy of our users—and all sensitive data—very seriously. But we’re also leaning into generative AI: integrating it into our products and models, and even into our internal protective mechanisms, like detecting spam and abuse.
A quick note: a lot of what I’ll share today reflects my personal views—so take them as such. But I’m excited to explore this topic with you. It’s a new and evolving space, and there’s still a lot the industry needs to build in terms of protective mechanisms.
Josh (02:57):
Awesome—really appreciate you being here tonight.
To level-set: What exactly is Shadow AI? Many of you probably know or have encountered it. But it helps to look at how we got here—an evolution from Shadow IT.
(03:13):
The term “Shadow IT” was coined by Gartner around 2008–2009. Back then, it meant servers hidden under someone’s desk or laptops running in closets—stuff that shouldn’t be there. It was slow and organic, and while we needed detection tools, it was still manageable.
Fast forward to the late 2010s—SaaS became a commodity. Then the pandemic hit, and adoption skyrocketed. It was “enable the business at all costs.”
Anyone with a corporate card could expense a tool. Freemium options exploded.
That’s when companies started turning to platforms like Productiv to regain visibility and governance. We’re lucky to have a few of our customers in the room who’ve gone through that journey.
Just as we were starting to get a handle on it—boom—AI arrives. Large Language Models (LLMs) start showing up everywhere.
And here’s the crux: even when you think you have visibility into your SaaS portfolio, AI features are now being embedded in tools—and you may not know they’re there.
That’s the challenge of Shadow AI. So tonight, we want to talk through that, share some frameworks, and—most importantly—hear from all of you.
Sukhi, to kick us off: given the wide range of generative AI applications, how do you think about categorizing them?
Suktika (04:58):
There are lots of architectures out there, but we loosely group things into three buckets:
- Proprietary LLMs, like OpenAI’s GPT models and Anthropic’s Claude, that we’re integrating into products and internal processes.
- Open-source models—like DeepSeek, which recently made waves. These democratize AI and encourage competition (which is good), but also come with risks—less control, black-box behavior, higher risk of insecure code, hallucinations, bias, and toxicity.
- SaaS tools embedding AI features—like Slack or Zoom summarizing meetings or analyzing conversations. This is probably the most relevant category for tonight’s discussion.
Josh (06:04):
Totally agree. For context, I saw a report from late 2023 that said 74% of SaaS providers had either implemented or were testing AI. That was over a year ago. Imagine where we are now.
It’s reached a point where people joke, “Does my toaster really need Wi-Fi?”
Suktika (06:33):
You don’t want a robot barista making you coffee, Josh?
Josh (06:36):
I mean… maybe. But seriously, AI is becoming table stakes for SaaS vendors—regardless of actual business value.
So like you said, let’s set aside that first category, working directly with the proprietary LLM providers. That’s typically done through formal contracts and monitored closely. But it’s those other two categories—open source and embedded SaaS AI—that raise trickier challenges. What are some you’re seeing?
Suktika (07:11):
Right. When you work with OpenAI or Anthropic, it’s deliberate—you go through legal reviews, negotiate data clauses, put infrastructure in place.
But with open source? There’s no one to contract with. Sure, you can review the code—but model behavior often remains a black box.
For example, DeepSeek outputs more insecure and biased code compared to GPT-4 or Claude 3—three to four times more, according to recent reports.
So while it’s cost-effective, there are real risks—geopolitical (like data being sent to China), security, and brand impact. We’ve mitigated that by running models locally with guardrails.
Josh (09:02):
Yeah, and that sounds a lot like Shadow IT all over again. Someone signs up with their corporate email, uses Google SSO—and suddenly your perimeter has expanded.
Unless you containerize usage like Snap is doing, you need to focus on visibility as a first step.
Suktika (09:39):
Exactly. We’re trying to get ahead of it—because we know our engineers want to use these tools. We want to enable them.
That means reviewing security from day one and giving engineers clear guidance on where and how to use these models.
With SaaS vendors adding generative AI, there are no real opt-ins. Features just appear. You might not even know they’ve been turned on. Then you’re stuck figuring out if they can be disabled—and often, they can’t.
It’s why visibility and prioritization are so key.
Josh (11:17):
And the risk isn’t just about “is my data being used to train a model?”
We also need to think about decision-making autonomy. OWASP even created a GenAI Top 10 list—because it’s so different from traditional app security.
Think about recruiting platforms. What if an AI filters resumes based on class or demographic? Or chatbots giving out unauthorized refunds? These are real risks—business, ethical, and legal.
Suktika (12:36):
Absolutely. And while safety isn’t tonight’s main topic, it’s something we invest in heavily.
We do adversarial testing, bias detection, prompt injection tests. Especially around sensitive events like elections, it’s critical we’re not inadvertently spreading misinformation.
Deploying AI responsibly requires entirely new safety teams and tooling.
Josh (13:29):
Totally. So the big question: how do we fix this?
When we were prepping, we landed on a four-step framework. Want to walk us through it?
Suktika (13:48):
Happy to. And to be clear—these aren’t radical. They’re the fundamentals.
- Understand the risk. But also—do you care about the risk? Not every risk is mission-critical. Prioritize based on your company’s specific context and tolerance.
- Know your data. Where does your most critical data live? Is it user data, IP, or internal ops data? Support this with data classification.
- Prioritize. You can’t protect everything. Focus on the top 10–20 SaaS apps that matter most.
- Get visibility. That’s where tools like Productiv help—understanding your SaaS landscape, configuration, auth mechanisms, data flows, and user access.
Once you have the fundamentals—configurations, auth, logging, protection—you’re in a better position to manage new risks from GenAI features.
Josh (16:45):
Couldn’t agree more. From a lifecycle view:
- You can’t secure what you can’t see—so start with visibility.
- Understand the business use case for each app and whether it’s changed.
- Consolidate your footprint. If you’ve got 500 apps, it’s impossible to monitor them all. Narrow it down where you can.
- Lock in the basics: offboarding, authentication, audit logs, vendor reviews.
This should all tie into your vendor risk management program. Are you having proactive conversations with legal? With vendors about their AI roadmaps?
Again, you won’t get to all of them. But if you know your top 10 or 20, you can start to threat model and get ahead of the risk.
About Productiv:
Productiv is the IT operating system to manage your entire SaaS ecosystem. It centralizes visibility into your tech stack, so CIOs and IT leaders can confidently set strategy, optimize renewals, and empower employees.