Expert Insights Podcast

#54 - Empowering Productivity By Securing GenAI (Moinul Khan, Aurascape)

AI, GenAI, and AI copilots make your employees more powerful and more productive, says Moinul Khan, CEO and Co-Founder at Aurascape. But, as with all new technologies, they can leave organizations vulnerable to security threats.

“You want to make sure that you can keep the bad guys out, and that your sensitive data and intellectual property are not leaking out through insider threats or accidental data loss.”

Enter: Aurascape, an AI security provider and one of this year’s RSAC Innovation Sandbox finalists.

On this week’s episode of the Expert Insights Podcast, we caught up with Khan to discuss the benefits and risks of using AI tools in the workplace, and how Aurascape is empowering employees to innovate fearlessly without their IT teams and CISOs worrying about the loss of sensitive data or intellectual property via AI. Read on for the full Q&A.

Note: this transcript has been edited for clarity.

Moinul, thanks for joining us today, it’s great to have you on the show! You have over 25 years of experience in cybersecurity – could you tell us about your security background, and how that led you to founding Aurascape?

I’ve been in the security space for over 25 years, and worked in different domains and different setups: small companies, mid-size companies, large companies – I also tried a startup before!

Before Aurascape, I was at Zscaler, one of the best cloud security vendors in the industry. I was there for six years as their Senior Vice President and GM, and I ran their data security business. Before Zscaler, I was at Palo Alto Networks for about five years. Before Palo Alto, I was at Netskope, very early on when the company was in stealth mode, and I had a great time helping the company create a new market called CASB. Before Netskope, I was at Juniper Networks, where my focus was remote access. So I’ve been fortunate enough to navigate through different domains within the security space.

You’ve drawn on this experience in different domains in creating Aurascape, as well as on the history of other cybersecurity companies, such as the emergence of Check Point during the early stages of the internet and Palo Alto Networks during the application explosion. How have these companies influenced what you’re doing in the AI space?

Over the last 25 years, I’ve witnessed several major transformations in the security industry. I’ve seen how NextGen firewalls replaced traditional firewalls. I’ve seen how cloud-based distributed proxies replaced legacy proxies. I’ve seen transformation in endpoint security, I’ve seen MDM, and when iPhone and Android became very popular, everybody was looking for a BYOD solution. And each time, in hindsight, I found myself thinking, “I could have built that!”

So this time, when AI took off, I knew it was a space where there was going to be more transformation, and that it was going to stick around for at least the next two decades.

AI is reshaping everything. And I felt that not building a cybersecurity solution for AI at this early stage would be a huge missed opportunity.

At the same time, I'm a strong believer in the concept of Founder-Market Fit, and I felt that I’d accumulated enough experience in the industry to lead and make a meaningful impact in the AI security space.

Before we dive into some of the challenges you’re solving, I want to talk about why those challenges exist. Organizations – or at least their employees – are increasingly embracing GenAI in the workplace. At Aurascape, you’re not trying to block AI tools, but to make them safer to use. So, what are some of the benefits of allowing your employees to use AI, GenAI, and AI agents?

We’re a cybersecurity company, but our focus is really enablement. We want to enable organizations to empower their employees to use AI tools. AI tools are really great; they make your employees more powerful and more productive, and they should be using them.

When you’re thinking about allowing AI tools, there’s a significant amount of concern from IT security teams and CISOs, and sometimes even CIOs, because, at the end of the day, you want to make sure that you can keep the bad guys out, and that your sensitive data and intellectual property are not leaking out through insider threats or accidental data loss.

When we say we’re doing AI security, at the end of the day, we are monitoring what your users are doing when they're using AI applications, and we’re also monitoring your application-to-application data flow, because all modern apps today are connected with some type of AI tool, LLM, or small language model. So, we’re monitoring all these interactions in real-time to make sure that the threats are not coming in, and that your data is not going out.

By doing so, we are giving a tremendous amount of confidence to IT security teams, CISOs, and CIOs, saying, “Empower your users, let them innovate fearlessly! You don't have to worry about security. You don’t have to worry about data loss and intellectual property loss.”

Could you talk us through some examples of different security risks that you’ve seen in the real world?

Sure. A simple example is, if I come to work this morning and I go to ChatGPT and say, “Why did Angelina Jolie and Brad Pitt get a divorce?”, I'm going to get an answer from ChatGPT, and most likely that answer is going to be accurate. My IT team shouldn’t worry about why I asked that question, as long as I'm not violating any laws and things like that. But if, as a user, I go to ChatGPT and say, “I need to write a very convincing email to one of the companies that we are about to acquire,” and then I put all this sensitive content in, I'm essentially leaking sensitive data. And if that information goes out, we are going to put our business at risk.

If I'm a software developer, it's okay for me to go to a code assistant tool and say, “I know C and C#, but my manager expects me to learn Python - what are some of the best practices for Python coding?” Or, “What are some of the similarities between C, C++ and Python?” This is absolutely okay. The tool will train me and make me more productive. But if I go to the same code assistant tool and cut and paste my company’s source code and I say, “Fix my source code!”, from the organization’s perspective, I’m leaking sensitive source code and putting the company at risk.

At the same time, if I say, “Let's just block all AI tools!” Guess what? It's not going to work! Users will always find a way to use AI tools and AI applications that make them powerful. So, the question is, how do you allow them in such a way that your data and your intellectual property are always secure?

How does Aurascape help organizations to address these risks, without disrupting employees’ productivity?

We are inline, monitoring and enforcing policy in real time. Essentially, we are sort of like a proxy: we sit between the user and all these AI tools, so everything goes through us, and we inspect the content on the wire. When we’re looking at the content, the purpose is to make it 100% contextual so that IT teams can make policy decisions.

Going back to one of the examples that I gave you just a few minutes ago, if a software developer is going to a code assistant tool and saying, “I want to learn Python,” we will look at that interaction and say there’s no risk to the company. But if the same user is inputting company source code, then we alert admin users. If they want to take an action automatically, they can create a policy where that particular transaction within that application will be blocked.

Outbound, the primary focus is data loss and intellectual property loss. Inbound, the primary focus is threat prevention.
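
To make that contrast concrete, here is a minimal sketch of the kind of inline policy decision Khan describes, in Python. The proxy hook, the PolicyVerdict type, and the regex heuristic for spotting source code are all illustrative assumptions; a production system would rely on trained, protocol-aware classifiers rather than simple patterns.

```python
import re
from dataclasses import dataclass

@dataclass
class PolicyVerdict:
    action: str  # "allow", "alert", or "block"
    reason: str

# Crude stand-in for a real content classifier: flags text that resembles code.
CODE_PATTERNS = [r"\bdef \w+\(", r"\bclass \w+[:(]", r"#include\s*<", r"\bpublic\s+static\b"]

def looks_like_source_code(prompt: str) -> bool:
    return any(re.search(p, prompt) for p in CODE_PATTERNS)

def evaluate_prompt(prompt: str, destination: str) -> PolicyVerdict:
    """Decide inline whether a prompt may be forwarded to an AI tool."""
    if looks_like_source_code(prompt):
        # Mirrors the example above: pasting company source code into a
        # code assistant is treated as potential intellectual property loss.
        return PolicyVerdict("block", f"possible source code sent to {destination}")
    # A learning question ("how do I write good Python?") carries no company data.
    return PolicyVerdict("allow", "no sensitive content detected")

print(evaluate_prompt("What are some best practices for Python coding?", "code-assistant"))
print(evaluate_prompt("def calc_invoice(t):\n    ... fix my source code", "code-assistant"))
```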

Now, as we were developing the service, one of the things we strove for was the end-user experience. If you take a sledgehammer approach, users don't like it; it hurts their experience. Instead, we put a lot of focus on coaching and notifying users when we see these violations.

So, if someone is going to their personal Copilot account, we’d see that inline and we’d say, “Would you consider using your corporate Copilot account? That way, our company’s sensitive data will be secure.” When the user sees that message, they feel part of the decision-making process, so they’re more receptive to complying with your company’s policies.
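
A toy version of that coaching flow might look like the following; the account-detection logic, the domain, and the message text are hypothetical stand-ins, not Aurascape's actual mechanism.

```python
from typing import Optional

CORPORATE_DOMAIN = "example.com"  # assumed corporate identity domain

def coaching_message(app: str, login_email: str) -> Optional[str]:
    """Return a coaching nudge if the user is on a personal account, else None."""
    if not login_email.endswith("@" + CORPORATE_DOMAIN):
        # Coach rather than block: the user stays in the loop and keeps working.
        return (f"Would you consider using your corporate {app} account? "
                "That way, our company's sensitive data will be secure.")
    return None  # corporate account in use: no interruption

print(coaching_message("Copilot", "jane@gmail.com"))    # nudge shown
print(coaching_message("Copilot", "jane@example.com"))  # None: allowed silently
```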

We’ve discussed what Aurascape is currently doing to address some of the challenges in this market, and now I’d like to turn our attention to the future. You recently secured $50 million in funding, and you’re one of this year’s RSAC Innovation Sandbox finalists - what plans do you have to continue to innovate in the AI security space?

First of all, we want to become a big cybersecurity company. As AI becomes mainstream, we feel that we are going to be more relevant than everything else that people are using today. From an innovation perspective, this is what a startup is all about! Of course, everybody needs business – we want our customers to believe in it, we want to see ARR growth and all of that stuff – but as a CEO, my primary focus is on innovation. This isn’t about making dead fish and selling them as sushi. So, we’ll continue to innovate within the AI space.

I think our biggest opportunity is how we build our platform, how we build our foundational stack. When it comes to building features and services – threat prevention, data security, DLP – many people can do it; it's not difficult. But the foundational stack is our differentiator.

We’re hearing a lot about the concept of an autonomous SOC analyst or SOC team, and many organizations are focused on developing this kind of technology. Do you see Aurascape expanding into this space in the future?

Yeah, eventually. Our current solution already includes some of that. I talked about our foundational stack – we built three robots into it. Those are basically AI agents: you have to leverage AI to fight against AI.

One of these robots, called NeuroOps, automates the whole incident management workflow for your admin users and automates the interactions between end users and admin users. For example, if I went to ChatGPT and input some sensitive company information, that robot would detect it and start interacting with me, saying, “I have just seen that you're doing that. Do you have a good business justification?” The user then interacts with the robot, explaining why they absolutely have to do it. That request then goes to the admin and, without the admin doing anything manually, the AI agent automatically creates a limited-time exception for the user.
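
Reduced to plain functions, that detect-justify-except loop could be sketched like this; every name and the 24-hour window are illustrative assumptions, not Aurascape's actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Callable, Optional

@dataclass
class PolicyException:
    user: str
    app: str
    justification: str
    expires_at: datetime

def handle_incident(user: str, app: str,
                    ask_user: Callable[[str], str]) -> Optional[PolicyException]:
    """Detect -> ask the end user for justification -> auto-grant a limited exception."""
    justification = ask_user(
        f"I have just seen sensitive data going to {app}. "
        "Do you have a good business justification?")
    if not justification:
        return None  # no justification given: the block stays in place
    # The admin is notified but does nothing manually; the agent grants a
    # time-limited exception on their behalf (24 hours is assumed here).
    return PolicyException(user, app, justification,
                           expires_at=datetime.now(timezone.utc) + timedelta(hours=24))

grant = handle_incident("moinul", "ChatGPT",
                        ask_user=lambda q: "M&A email review, cleared with legal")
print(grant)
```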

This whole notion of automating the workflow between end user and admin user makes our product much more deployable. The SOC team no longer has time to deal with 19,000 alerts coming from 70 different security tools.

This is just one component of what you said about the autonomous SOC; eventually, as we expand and as we provide more security services, we will get there.

What are your final words of advice to security teams looking to safeguard the use of AI apps and copilots within their organization?

When we are talking to enterprise customers, every enterprise has an internet-facing firewall, a cloud-based proxy or some other type of proxy for their web traffic, or they're using SSE (security service edge). These are great products.

Now, when it comes to AI, my fear is that many security professionals out there today have a false sense of security. They think that they can just use a URL filtering engine to block an AI application. But this assumption is wrong. It’s no longer someone going to a website or using a SaaS-based service; to secure AI, you have to leverage AI.

So, my advice is: make sure you have complete visibility over all the AI tools that your users are using. That visibility cannot just be the names of the applications they are using; you need deep decoders so you can understand what kind of prompts are going out, and what type of responses are coming back.

Once you have that full context, your policy is going to be much more meaningful.
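
The gap between URL-level visibility and decoder-level visibility can be illustrated with a small sketch. The JSON chat payload below is a generic assumed format, since each real AI app would need its own protocol-aware decoder.

```python
import json

def url_filter_view(host: str) -> str:
    # All a URL filter can say: the user reached an AI app.
    return f"user connected to {host}"

def deep_decoder_view(host: str, body: bytes) -> str:
    # A decoder parses the wire payload and recovers the actual prompt,
    # which is what a contextual policy needs to reason about.
    payload = json.loads(body)
    prompt = payload["messages"][-1]["content"]
    return f"user sent to {host}: {prompt!r}"

request_body = json.dumps(
    {"messages": [{"role": "user", "content": "Fix my source code: def calc(): ..."}]}
).encode()

print(url_filter_view("chat.example-ai.com"))
print(deep_decoder_view("chat.example-ai.com", request_body))
```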

Expert Insights provides leading research, reviews, and interviews to help organizations make the right IT purchasing decisions with confidence.

For more interviews with industry experts, visit our podcast page here.