HITT Series Videos

HITT- Securing client data in the age of AI- Nov 12, 2024

November 14, 2024

This HITT emphasizes the importance of securing client data amidst the rapid rise of AI applications, with over half of companies now utilizing these tools. Speakers Jason Kaufman and Brett Helm discuss the transformative impact of generative AI, which can create content and assist in various tasks, but also poses significant risks to data security. They highlight the dangers of shadow AI and the necessity for robust AI governance policies to prevent data leaks and unauthorized access. Helm introduces a network-based AI firewall solution designed to address these emerging security challenges. The session concludes with a Q&A, addressing concerns about AI tools in operational technology environments and the critical need for data classification and protection.

Introduction to AI Data Security

As we begin the Tuesday call with today's high intensity tech training, it is all around the visibility and the vigilance needed to keep your clients' data from being entered into public LLMs and other AI tools. The recent Telarus tech trends report noted that more than half of companies are using AI applications.

How do these clients secure their data? Well, today, we’re joined by Telarus cybersecurity solution architect, Jason Kaufman, and his guest, Brett Helm, CEO of Glasswing AI, for this very eye opening presentation. Jason, Brett, welcome to you both.

Thank you. Thanks, Doug.

Jason, all yours. Go right ahead.

Understanding Generative AI

Yeah. So thank you, everybody. You know, I kinda wanna paint the picture on Generative AI, kinda a high level overview on what it is, you know, why it became so popular so quickly, and why that caused an issue, and then kinda lead it in to, you know, Glasswing to show you kinda the value on how you can bring this up to your customers to secure their networks from AI tools that are being used today. So first and foremost, generative AI, what exactly is it?

If you've seen Jason Lowe's presentation, or Sam Nelson's, any of the advanced solutions VPs, they kinda tell you where generative AI plays in and how it's trained: a big deep learning, machine learning model, trained on trillions of data points in order to be effective and understand human language, or develop things such as images, content, music. Anything that could be generated from somebody's manual work in order to create something from scratch, generative AI is effectively used for that. And we'll get into a little bit some of the different use cases that have come up.

The Rapid Rise of Generative AI

But for the most part, I just wanna go over real quick, high level, what it is. And then on the next slide, we'll detail how popular it is and how it claimed its fame so quickly. So if you look at a lot of the tools people have used, from social media on down, you know, Uber, Twitter, Spotify, TikTok, some of the biggest crazes to hit the marketplace in recent years, you can see how long each took to reach a hundred million monthly users. And looking at Spotify, something we use for music all the time, or Facebook.

It took fifty-five months and fifty-four months respectively. So over four years to hit a hundred million users, but it only took ChatGPT two months. So you can see how quick the time to market was, and how quickly it got to so many users before it became something that was kinda uncontrollable.

Intelligence Comparison of AI Models

So next slide, please. Now the reason we wanna talk about this, why it was so popular and rose so quickly, is how smart it is. You can see here from an IQ test. We all know the base range of an IQ; the scale is centered around a hundred, and average human intelligence is around ninety to ninety-five. You can see how smart this is from a core intelligence standpoint.

You have, like, the ChatGPTs, the Geminis, all between eighty and ninety and rising above that. And the latest models that came out after this slide was created, like GPT-4o and all that stuff from OpenAI, we're seeing get past the hundred mark. So you're seeing something here right there. Yeah.

o1 right there, a hundred and twenty on the IQ scale. This is really intelligent stuff, and we're still at the very beginning stages here. We're still at the lean and narrow AI models. But when we start getting into general AI, which is replacing full human components and departments and all that, we're gonna see this leveraged immensely in the enterprise space.

Next slide, please.

So some of the things that you’re gonna see, you know, the generative AI is being used for, a lot of it is just coming up with ideas.

Hey, give me some ideas on what to do with this, this, and this use case.

A lot of times, you know, it's for discovery questions from a technology adviser: hey, help me figure out what questions to ask for this specific use case with my customers, and it spits out specific answers. We're gonna go over one of the use cases here shortly, something that's actually used, where somebody puts in some data and figures out how they can leverage generative AI to come up with some form of content, whether it's an email or a picture, something that actually has some sensitive info. And that's the problem we're looking to solve: how do we protect employees? Everybody thinks of AI threats as malicious actors trying to breach a network externally. But the main threat is coming from things happening internal to the network, using information that is proprietary to the entity.

Use Cases of Generative AI

So you see many different use cases here. Drafting emails is a big one.

You know, if you got a response from somebody and it has the word delve in it, or you've seen some piece of writing out there that has the word delve in it, that's inherently ChatGPT's favorite word in everything it comes up with. So you see a lot of different writings and content coming up here. Next one.

So starting with the problems here: what are some issues with foundational AI usage? Foundational models are the main ones you can reach on the Internet, type in a URL string, and get to something that has core AI functionality.

So that's the ChatGPTs, the Google Geminis, the Perplexitys, the Anthropics. Those are all foundational models that people have taken and either tuned to customize for themselves, or they can take a lot of the open source techniques those models use and build their own. So what are some of the problems you get there? One, if you look at the screenshot here, is hallucinations, coming up with an uncontrolled response.

So it's a company that actually has investment from Elon Musk, and you ask it about Elon Musk. It says that first sentence right there: I think Elon Musk is a coward and a fraud. So imagine you're talking about a VIP customer, or something business impacting, and you're getting incorrect information or an opinion you don't want released. Generative AI does not control that natively, which is why it's important to get visibility and control of it in the enterprise space.

So there are four main problems we're looking at. First, lack of visibility. If you don't have visibility into what people are entering into these platforms, you're not gonna be able to control the output, or see what they're using and why. So that leads to shadow AI. We've all heard of shadow IT: somebody installing a random wireless access point or bringing a personal device into the network that's not controlled by IT staff.

And when it comes time for compliance or certification, IT doesn't know what it is, so they can't protect it. Now we have shadow AI, where people are using these tools because they don't know any better, just throwing information into them, and that information is being used to train the model from then on. So what does that lead to? Potentially data leakage.

So if I go use one of these foundational models, ask it a question, and include some proprietary information about Telarus, now that model is being trained with that data. If I had financial information, or something internal on a product set we're releasing this year or next year, one of those sweet tools we have coming out, somebody could go in, ask a specific question, and eventually pull that data, because the model doesn't know it's sensitive without something going in beforehand. So in any type of cybersecurity conversation around AI, we always talk about data classification and data loss prevention, because we wanna make sure the data being released from that tool isn't sensitive to the company. And then we have hallucinations.

What you see there is AI coming up with an incorrect answer. So you wanna make sure you empower it with something like a knowledge base that has known-true data behind it, so it comes up with a competent answer. It's not only the fact that people are using this stuff uncontrolled; you also wanna make sure they get correct answers back. Alright. So next slide.
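To make the data loss prevention idea concrete, here is a minimal sketch, assuming a purely illustrative set of patterns (this is not any vendor's product), of scanning a prompt for obviously sensitive strings before it ever leaves the network for a public LLM:

```python
import re

# Illustrative patterns only; a real DLP engine uses classification
# labels, dictionaries, and ML detectors, not just a few regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal only)\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]

def safe_to_send(prompt: str) -> bool:
    """Block the prompt from leaving the network if anything matched."""
    return not scan_prompt(prompt)

# Flags both the SSN and the "CONFIDENTIAL" marker
print(scan_prompt("Q3 revenue - CONFIDENTIAL - SSN 123-45-6789"))
```

The point of the sketch is that the check happens before the prompt reaches the foundational model, which is exactly where visibility has to live.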

Real-World Example of AI Risks

So here's a quick example of how that looks from a visual perspective, and then, Brett, get ready, it's gonna be turned over to you here in a second. Let's say you have Alex, a financial director at a random company, preparing an upcoming review for the CEO. The CEO is definitely able to have whatever information they want about the company, because they see it from the highest hierarchical level. So how does Alex do this the most efficient way?

She pulls together a summary of top insights that she can share and leverages consumer AI to craft a concise and succinct email. So she's pulling information that is most likely proprietary to the company, financial information, as a finance director, stuff you don't want released, that's for the CEO's eyes only. She adds the financial data, keyword right there, adds the financial data and talking points to the AI to get those insights summarized.

And without realizing it, Alex has helped train that public AI model with financial information that should be confidential to the company. So now somebody else can actually pull that data. And as soon as cybersecurity teams and IT teams find out about this type of use case, their quick reaction is to block the AI app. But if you're blocking this stuff without seeing what you could leverage it for, without an AI governance policy and all that, shadow AI makes everything fearful, to where now you're blocking your employees from being more efficient and using these tools for what they do best.

So that is really one of the main problems we come up against, and that's why we wanted to bring Glasswing on, to show you some of the stuff they can do to get in front of this and help you build out that AI governance policy.

Introduction to AI Firewall Solutions

So next, slide, please.

Brett, the floor is yours.

Excellent. Thank you. Well, first of all, that was an outstanding preface, because you showed a lot, not only about the value of AI, but also the downside and the defense of AI. So what we're gonna talk about today is the industry's first network-based AI firewall. And this is not a firewall that uses AI to make a better next generation firewall. This is a firewall specifically designed for AI usage, defending against problems within AI that were thrust upon you by this environment.

And so, when we look at the original firewall and next generation firewall configuration in the DMZ, it used to be a firewall behind a firewall to offer the latest protection. In today's world, we're looking at essentially either a firewall or a next generation firewall in the DMZ to try and protect the inside users from the outside. And that works well for web properties. That works great for SaaS properties.

The problem, and where I wanna slow it down a little bit, is right here, when we have AI connectivity from your organization to an AI vendor. And there are several missing pieces.

Jason talked about shadow AI, so we wanna be able to discover and inventory anything that's connecting to your network. On the security side, there are different types of security for AI than there were for past web and SaaS properties.

For example, you might have a lane. So at Microsoft, you might in fact connect, but you only wanna connect to GitHub or a software development assistant. You don't wanna connect to their image generator or their content generator or things of that nature. And that's a new AI feature functionality.

And then lastly, and Jason touched on this too, firewalls protect the inside. But we're talking about data leakage from your employees to the outside. So we're talking about a two-way firewall. So the solution to this is that we provide an AI firewall.

However, I wanna slow it down a little bit, not so fast, because the first thing every customer I've ever talked to says is, I don't know what AI we are using. So we wanna slow down and talk about just doing the discovery. Let's figure out what's inside. I'm not gonna talk too much about how we do it, but I'll talk about it generally.

So we have to curate a very extensive list. We use bots to go out and validate with data and metadata.

Internally, we have to find users and apps. We have to find anything IoT related that connects cameras to AI, or OT devices. And in some cases, we have to do in-depth AI discovery. There are several people out there advocating that databases be connected to AI.

Believe it or not, as shocking as it may sound, Active Directory can be connected to AI. In fact, you could Google Microsoft and figure out how they connect it, which is very dangerous, allowing your usernames and passwords to be connected to AI. So the discovery of all of this is required. And what we do is we're able to install, and it doesn't have to be inline, and it doesn't have to be in the DMZ either, but we install in a very simple manner.

And to give you an idea, if we looked at a flat network like this, it seems simple: we can put an AI discovery product in the DMZ and catch everything. But the fact is, we have many networks that are subnetted and segregated, and we might have branches, things of that nature.

Just to confirm, this is a noninvasive, additional solution that's coming in.

Like, the customer doesn't need to replace their existing firewall that's already on-site. This is something that comes in in addition, correct? It's not gonna interrupt anything that's currently going?

Correct. And what I'm talking about right now is just discovering. So you're exactly right. This is not replacing something else. This is adding new functionality in a customer's environment, functionality that was forced upon them. And we can install this; it's all self-contained in a single software image or a hardware image.

And we can also connect sensors, additionally, as options, not requirements, to find everything that's either on-site or in the cloud, whether private cloud or public cloud. And it operates natively in the cloud as well, whether from a VPC, or containerized, or in a cluster or something of that nature. So when I talk about the first network-based AI firewall, it's network based or natively cloud based. So let me start with introducing the product.

Discovery and Compliance in AI Security

And so the first thing is, discovery is where we started. We can simply install this, whether in the DMZ or elsewhere, in addition to anything you have, because you're looking for shadow AI, or any AI connected to your infrastructure that might be used out of compliance by some employee, or connected to some cameras you have, or IoT devices or medical devices, outside the compliance of the company. So we wanna find that. And when we do, what you see is all the dots on the left, which are all the AI vendors that we find.

And off to the right is all of the associated network traffic: where it comes from, where it goes to, how many times it connected, how much traffic is being generated in these connections.

This is just to discover, and this is the first point. We’re not looking inside the prompt or inside the applications yet in this particular first product, but we will later.

And so this is just to tell you what to do. Now, we have some customers, especially in the small and medium-size area, that say, I just wanna solve it in a simple way. So what we've done is we've taken our product, flipped it in front of the firewall, and offered some policy-based connectivity, where we can discover everything like we did before, and then you get to block anything you don't want coming in. So it's a very coarse entry to discovering AI and blocking AI, without having to look inside the prompts or inside the connection itself.

Now, I'm gonna go through real quick and finalize on this simple slide, because we're doing a very simple introduction here. The first thing we do is AI discovery: install this in addition to your infrastructure and see everything that's going on. Now the reason you do this is, come January, we're getting all kinds of compliance regulations around AI from every compliance organization in the world, because they got caught standing flat-footed and didn't know this was coming either.

The next thing we do, which is our AI entry-level firewall, is what I call preprocessed firewall traffic: we can discover it and block things you don't wanna have inside your infrastructure.

And then the final product, which we didn't talk too much about, is full AI security: AI lanes, AI guardrails, privacy reporting, looking inside the prompts, doing discovery, and the risk infrastructure.

This too is in addition to traditional firewalls.

Unfortunately, because of AI, we were forced into this model, but this is where we, as customers, have to be to protect ourselves. And the first thing that's gonna be top of the list is compliance regulation that has to be met. So the first thing we gotta do is discover. And we'd be thrilled if you could connect with us; we can install in a matter of hours through our Telarus partnership.

Connecting with Glasswing for Demos and POCs

Hey, Brett. Quick add-on to that, you know, connecting with us. Just wanna make sure everybody is aware that Stickley on Security is our direct access into Glasswing. So if you wanna get connected to Brett and the team to go through this, get demos or POCs or anything like that for an opportunity, Stickley on Security in the Telarus portfolio is the best avenue.

Excellent. Thank you, Jason. And back to you.

Alright. So, Doug, do we have some time for Q&A?

We sure do. Thank you both very much for the, presentation so far.

Q&A Session Begins

Great information. We do have a lot of questions coming in in the chat window from our advisers who are listening. Just to clarify a couple of terms: we've got a couple of questions around using these tools and technology in the OT environment. We talk about IT all the time, information technology, but the OT, the operational technology environment, doesn't get as much play sometimes in our discussions. Is everything that we're discussing here available and useful in that OT environment?

Yes. Thank you for the question. And I mentioned it in my discovery section. Anything connected to an AI company, we will find, and we use what we call ground truth: anything that's on the wire going to an AI company, we see. And in that environment, all OT environments, whether it be a programmable logic controller, a valve, anything in the OT environment is covered as well.

Concerns About Data Security in AI

Very good. So as we listen to this, Jason and Brett, it sounds like we're dealing with sort of a next-gen kind of situation that we have to worry about here. One of our partners brought this up early in the Q&A: okay, I was worried about my data being stolen before and showing up on the dark web. Now I'm worried about my data being stolen and showing up in somebody's AI or LLM model, being used for all sorts of inappropriate purposes. Is that really what we're dealing with?

I'll take part of that, if... Yeah.

No. I’ll follow you.

If Jason doesn't want to. And that's exactly what we're talking about. So anything you put into a prompt, anything you put into an embedding from an application going to an AI company, can in fact be used as a data source to build their LLM.

Now by convention, they use enterprise licenses to say they're not using your data inside their dataset to build their LLM. But in fact, we don't know if that's the case or not. So there is, as Jason was mentioning earlier, a sense of being cautious and protective of the information that leaves your organization. And in fact, we can provide those kinds of guardrails in an AI firewall; that's what it was designed to do.

Yeah. And in addition to that, the first part of any potential hacking or breach is the reconnaissance phase. So the first thing any hacker tries to do is gather information, the OSINT, the open source intelligence, where they can go on the Internet and find anything and everything about some individual or some company. So if you've entered data into a foundational model that is proprietary or confidential to the company, now somebody can go into that chatbot and say, hey, I wanna pull out information. Give me all the information you have on this company.

And it could really pull that data you've entered into it, and now that person has confidential information they can leverage to reach out to an employee and try to establish some form of social engineering attack. Because if you come in with proprietary data that only an internal person should know, and you're acting like you're the CEO or somebody who should have that proprietary info, you've already established some credibility in the perception of that employee. So there are a lot of ways this can be used, not only for competitive intel, but also as open source intelligence for a potential threat actor.

Protecting Confidential Information

So there’s a lot of reasons why you wanna protect that data.

Are there ways, with existing tools, to help identify or prevent employees, team members, people within an organization, from doing exactly that: from entering information that shouldn't be entered, so it's prevented from going out into any sort of AI system, tool, or capability that could then render it harmful to the company?

This is like the old caller ID back and forth: block the caller ID, then block the blocker. How do we stop that?

Yep. So I went over it really quickly in the presentation: data classification and data loss prevention. Those are actual tools to figure out, hey, where's the data?

Let's classify how sensitive it is to the company, and then let's make sure we put guardrails around what can be shared and who it can be shared with. And the data classification policies are applied to the generative AI engines. And, generally, the company would have to give a release to their employees. Like, if you're a Microsoft company, you could give them the Microsoft AI, which has the OpenAI ChatGPT back end.

And you can put guardrails around what's able to be inputted, what the exports can be, what the response can be, what type of data is being pulled out in that response, and who it can go to.

So you wanna make sure, not only, from what Brett was saying, to get visibility on what you know is being used, but also to have something that is approved and controlled by the company, to make it secure and effective for everybody involved.
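As a sketch of what "approved and controlled by the company" can look like as policy, here is a hypothetical mapping of data classification levels to the AI destinations each may reach. The labels and tool names are invented for illustration, not drawn from any specific product:

```python
# Hypothetical mapping from data-classification label to the AI tools
# that data at that level may be sent to. Labels and tools are made up.
ALLOWED_DESTINATIONS = {
    "public":       {"public-llm", "company-copilot"},
    "internal":     {"company-copilot"},
    "confidential": {"company-copilot"},  # only the sanctioned, tenant-scoped tool
    "restricted":   set(),                # never leaves the organization
}

def may_send(classification: str, destination: str) -> bool:
    """Check a classified document against the AI-usage guardrail.

    Unknown classifications default to deny, which is the safe posture.
    """
    return destination in ALLOWED_DESTINATIONS.get(classification, set())

print(may_send("public", "public-llm"))        # a public doc may go to a public LLM
print(may_send("confidential", "public-llm"))  # confidential data may not
```

The design choice worth noting is the default-deny behavior: anything unclassified or mislabeled is blocked, which mirrors the speakers' point that the model can't know what's sensitive unless the policy tells it beforehand.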

Overview of Glasswing’s Services

Brett, tell us a little bit more about Glassware. How did this really come about? And at this point, if advisors are interested in learning more, how are your services billed, and how can they access those and offer them to their clients?

Yeah. So our products are self-contained products, whether software or hardware, sold to customers and remaining on-site or on-prem. So this is not a SaaS service. No data leaves a customer site, especially no proprietary data, and there are no tunnels that need to be put in to connect to a customer site. So our products and services are sold directly to MSPs and to end users.

And we can support them directly, even though we go through Telarus or Stickley on Security.

And it's relatively easy to install. We've done as simple as a one-branch credit union and as sophisticated as a deployment at the DOD, the Department of Defense. It usually takes about an hour to install, with a customer present for the installation process. We start with visibility, and then once the customer sees what they have, the customer gets to decide whether they use our products to go further or they use somebody else's products. They have complete control over how they want to add central control functions, the functions Jason was talking about with regards to securing their data with their DLP processes.

Pricing and Installation Process

And for any pricing information, you can use your Telarus resources or the back office contact for Stickley on Security.

They'll be able to deep dive into exactly what the price point is, depending on the size of the customer. So that's probably where I'd push everybody: Stickley on Security for that.

And, my apologies. I think I said Glassware a moment ago.

I'm not sure why that came out.

Say Glasswing?

It's Glasswing, and you can certainly find more information about this at glasswing.ai if you're looking for more information on the web. But, phenomenal information there.

Jason, one of the questions that came up here toward the end of the chat that we wanna make sure we get to, And I’ve got a screen that’s doing some different things. There we go.

Sorry. It just went back.

We talked about how it's billed.

AI Solutions in Remote Environments

Oh, yeah. Remote environments. So we've talked a lot about how this can be used within a network, but with all of the remote environments we deal with, people bringing their own devices and whatnot, how do these solutions apply when we're dealing with those types of environments?

Actually, I'd like to get Brett's answer first on that one, and then I'll tackle it behind him.

So in many cases, when people would like central or corporate control over remote environments, they often use CASB or similar products to connect their VPNs and their users from home to the office. And in those environments, we work exactly like I showed in my picture, whether it's AI discovery or an AI firewall next to a CASB in the cloud or on-prem. So we work with all those products today. If customers are remote and not on VPN, then we have some other challenges to get that connectivity behind a VPN into an environment where they can be managed.

Yeah. So any type of remote secure connectivity will go back to some form of VPN orchestrator, a cloud firewall, or an on-premise firewall somewhere. Depending on how the network is set up, we just put Glasswing into that environment, whether it's inline with the cloud firewall or sitting in front of it. But we make sure there's a hop from the customer's current firewall to Glasswing, so it gets visibility into that traffic. And that's how they'd be able to see it.

One of the questions that came into the chat window was just one of those pleas you hear from time to time: is this really safe to use? Is AI safe at this point?

Legislation and Risk Management in AI

And how quickly and how rapidly in your opinion should companies be scrambling to enter this into their systems? And how rapidly should individual folks, people who run their own businesses, some of our own advisers, for example, be rushing to incorporate AI technology into their own operations?

Well, there's a... Sorry. Go ahead, Brett.

So there are several pieces of legislation, and this is around safety and risk, whether it be in Europe or in California and the US, regulation around risk. And the first thing is, we're not talking about risk with regards to nuclear weapon disasters or infrastructure disasters; that's what that legislation is designed for.

But what we're talking about here is risk to corporate environments where, as Jason already alluded to, you're losing important information, whether it's intellectual property or proprietary data. And the other risk I think we're coming up on in a rapid fashion is compliance. All of the compliance organizations are gonna have AI-related compliance. So the risk is primarily the immediate need to see what's going on.

So I do think it's urgent, and the urgency is in at least taking an assessment of what you have; then you can decide how much effort and energy you wanna put into protecting it from a central perspective.

I agree. I mean, I think there is a pressing time frame to get ahead of the competition. And if you're not using it, I'd try to figure out a way to use it, because it does make things way more efficient, even if it's just sending a normal email or coming up with some form of marketing content, all those different use cases we went through. If you're not using it, your competition probably is, and there's a potential you could fall behind by not using it.

But I really think the thing you wanna get into is establishing a process to implement it. And that's where the value of an agnostic TA or partner comes in, because you're saying, hey, we highly recommend implementing this for the specific reasons we were just talking about, but here are some things you wanna look at before you implement it.

Because if you just turn it on, you're gonna have data flying all over the place. The machine does not know who should get what data, because it isn't programmed to; it's only as smart as what's being inputted into it. And input means not only the prompt, but also the security rules and policies: it's not gonna know anything unless you establish a policy and guidelines around that. That's where the importance of a cybersecurity organization and an IT staff that can implement this stuff really comes in, so you can protect data upon implementation.

So there are tools for this. We have AI readiness assessments within the Telarus back office, as well as our solution view assessment, and we have technical resources that can help guide this conversation. But a lot of it comes down to visibility of the data: where it's at, how you wanna protect it, and who should be able to see it. And Brett and Glasswing actually have a really good tool for that answer of "we don't really know what we have, we don't know what tools are currently being used," which is a big problem that companies are facing today.

Assessing AI Readiness and Data Protection

But, yeah, the short answer is I would recommend looking into it immediately if you haven't already, but also know what type of data you can put into it without adding additional risk of data loss or leakage.

Great.

As we mentioned earlier, our Telarus tech trends report noted that fifty-three percent of all businesses have already begun incorporating AI into their operations and work.

So, yeah, it's easy to fall behind that curve if you're not currently up on it. Crystal asked a great question, and you mentioned, Jason, the solution view tool that Telarus has made available. Glasswing obviously has great tools that we can use as well. Are there a couple of great questions, Jason, that you'd recommend? Crystal brought this up, for folks who haven't yet begun this journey, to try to start that conversation with them as an adviser.

Starting the AI Conversation with Clients

Yep. One of the first ones we always ask is: are your employees using these tools today? And we describe what the tools are, going through the ones that people know about. If you Google generative AI: ChatGPT, Google Gemini, Perplexity, Anthropic, the big ones. Are your employees using them today? And no matter what they say, the first question after that is: are you sure?

Because, yeah, you wanna challenge them a little bit, because they're pretty confident in that answer. Like, "Yeah, you know, we sent out an email: don't use this stuff."

Okay. Well, are you sure that nobody's actually using it? How would you know? And that's the open question.

Should we bring in Glasswing? And then the part behind that is: okay, you have people using that today, and you've approved this tool for them to use.

Importance of Data Classification and DLP

Have you gone through a full data classification and DLP exercise? Do you know what data is being shared within your AI engine? A lot of times you'll establish that they have some doubts about the confidence they had in what they've implemented or what they want to implement. And that's where the partners come in and say, well, we have the subject matter experts that have done this day in and day out.

And we can bring them in, whether it's just giving you visibility, looking at data classification as a standalone engagement, or taking the whole project from soup to nuts. Telarus has a ton of resources that have done this before. So I think that's a very important point to make.
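To make the data classification and DLP point above a bit more concrete, here is a minimal illustrative sketch, in Python, of what screening an outbound prompt for obviously sensitive patterns might look like before it ever reaches a public AI tool. This is purely a hypothetical example, not Glasswing's product or any specific DLP engine; real data classification uses far richer detection than a few regular expressions.

```python
import re

# Illustrative patterns only; a real DLP engine uses validated detectors,
# contextual classifiers, and customer-defined data classification labels.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security number
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # rough card-number shape
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email address
}

def scan_prompt(text: str) -> list[str]:
    """Return the labels of any sensitive patterns found in an outbound prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# A policy hook could then block or redact the prompt before it leaves the network.
prompt = "Summarize this: the customer's SSN is 123-45-6789"
hits = scan_prompt(prompt)
if hits:
    print(f"Blocked: prompt contains {hits}")
```

The design point this sketch illustrates is the one made in the session: the enforcement decision has to sit between the employee and the AI tool, driven by an explicit policy, because the AI tool itself will never know which data it shouldn't receive.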

Excellent.

Well, thanks again to Jason Kaufman and Brett Helm, CEO at Glasswing. You can find out more about them at glasswing dot ai and, of course, through Stickley on Security as well. We appreciate both of you being here today. Excellent conversation. Thanks so much.