HITT Series Videos

HITT – AI Takeover: The Key Strategies to Safeguard Your AI in 2024 – August 13, 2024

August 14, 2024

Your comments and questions are welcome in the chat window for Q&A following each of today’s live presentations.
Later in today’s call, you’ll meet Telarus supplier NTT Global Data Centers and learn about NTT’s integrated solutions for the secure housing of network infrastructure.
First, however, it’s our HITT training. Today we focus on the captivating topic of AI, artificial intelligence: its rapid rise, its impending takeover as some fear, and the strict attention required to ensure its security.
Today, we’re joined by Telarus VP of Security Jason Stein and Telarus Senior Sales Engineer Jason Kaufman, along with experts Tom Houde, Senior Manager of Channel Solutions Architecture at Entirety, and Aaron Russo, Senior Cybersecurity Specialist at LevelBlue. We’ve got a vital discussion of the key strategies needed in 2024 to ensure that AI is, and remains, secure.
Jason Stein, Jason Kaufman, welcome back to the Tuesday Call HITT training. Thank you both for being here and setting us up for this important discussion today.
Morning, Doug. How are you?
Doing really well. Thanks. Great job last week at Partner Summit. Glad to continue the conversation today.
Looking forward to it. It was such a great turnout. Had some amazing conversations.
AI was one of the biggest topics, along with security, so this is the perfect time slot to follow up on that. Yeah.

Thanks for the invite here as well. When I heard I’d get to partner up with Jason Stein on something, it was an immediate yes. I jumped at it.
You’re the best.
So it’s all yours.
Let’s jump right in.
Yeah. Let’s do it. So we’re excited to talk about artificial intelligence. It’s obviously a huge conversation track.
If you haven’t seen our Tech Trends report, please go look. You’ll see that your partners are looking to invest heavily in artificial intelligence. If you think about it, AI is kind of where cloud was ten or fifteen years ago, when we moved everything into the data center.
We wanted to get it off-prem and make it convenient and accessible for our employees and our customers.
We didn’t necessarily think we needed to secure it to the depth we do today, and that’s really where AI is now. Adoption is widespread: 54% of global consumers are investing in AI and building routines around it. And for every dollar your clients invest in AI, they get a 3x return, which is huge. The global AI market is around $150 billion.
We haven’t seen numbers like these appear overnight since Tesla. It’s been amazing to watch.

And it’s expected to keep growing 36% year over year through the next several years. But there’s not a lot of security around it, and that’s why we wanted to bring on some experts. As AI comes up, you can ask your clients: are you looking to be a leader in AI?
Do you need help getting there? Where are you thinking of going from a security standpoint? And security today is really SOAR, which is the orchestration layer; EDR, endpoint detection and response, which looks at everything that has an IP address and makes sure we can identify and eradicate any bad actor in the network; and then MDR. MDR, as Jeff Hathcock loves to put it, is the human element, eyes on glass, making sure they find those anomalies.
And then SIEM, security information and event management, is the platform that looks at logs: is anybody doing impossible travel, what’s going on in the network from a timestamp perspective? A log is something that’s already happened. So you have a SOC, a security operations center, where people go in, look at those bad actors and those bad actions by employees, and identify them.
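To make the SIEM piece concrete, here is a minimal sketch of an "impossible travel" check over login logs; the log fields and the 900 km/h speed threshold are illustrative assumptions, not any particular SIEM's rule.

```python
# Minimal "impossible travel" check, the kind of rule a SIEM runs over login logs.
# The log format and the speed threshold are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    time: datetime
    lat: float  # geolocated from the source IP
    lon: float

def distance_km(a: Login, b: Login) -> float:
    """Great-circle distance between two login locations (haversine)."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(logins: list[Login], max_kmh: float = 900.0) -> list[tuple[Login, Login]]:
    """Flag consecutive logins by the same user that imply faster-than-flight travel."""
    by_user: dict[str, list[Login]] = {}
    for entry in sorted(logins, key=lambda l: l.time):
        by_user.setdefault(entry.user, []).append(entry)
    alerts = []
    for events in by_user.values():
        for prev, cur in zip(events, events[1:]):
            hours = (cur.time - prev.time).total_seconds() / 3600
            if hours > 0 and distance_km(prev, cur) / hours > max_kmh:
                alerts.append((prev, cur))  # send to the SOC for review
    return alerts
```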

So, Jason Kaufman, let’s talk about what you’re seeing from an engineering standpoint. How often is AI coming up in those conversation tracks, and how worried are people about securing it, judging by the engineering requests you’re getting at Telarus?
Yeah.
It’s actually coming up in every single part of the conversation now. From a social engineering perspective, you’re getting emails that sound much more realistic, because attackers can run them through an AI engine to produce something very intentional and convincing. With spear phishing and whaling, they’re tailoring it directly to you, not just sending a generic message full of spelling mistakes that’s completely off topic.
We’re actually seeing some really well-crafted emails come through. So how do you protect the customer from that type of threat? That’s where social engineering training becomes very important. So the attacks are coming in.
They’re also automated. You can use an LLM, a large language model, and tell it in plain human language to go attack a specific network, and it will use the same automated pen testing tools we use for defense, the ones that find the gaps, to actually attack networks. So you’re seeing people who don’t need any networking or hacking knowledge to do this.
You can just use human language now and breach a network using AI. Protecting against that is what scares a lot of CISOs and directors of infrastructure.
Very interesting. I know you put together some really good data. Let’s go through some of your slides and talk about where cybersecurity comes in. When you think about some of these use cases, how are we helping clients with some of the suppliers in our portfolio?

Yep. And I just came off one of those, which is that second bullet point: penetration testing. Having a cost-effective penetration test. You have automated tools now; I know the human component is a great factor to have in a penetration test, but what if I don’t have the budget for that and still want to take vulnerability scanning to the next level?
How do I automate it so I can run a monthly scan that also uses scripting techniques you’d find in, say, a Kali Linux deployment? Something that can take the escalation to the next level and prove a breach could happen, not just give you a prioritized scan and say, hey, you need to fix this stuff. So we’re seeing some traction there in penetration testing capabilities.
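As a rough sketch of that kind of automation (an illustration only; the targets and script selection are assumptions), a recurring job can drive nmap's scripting engine, the sort of tooling a Kali deployment bundles:

```python
# Minimal sketch of an automated recurring vulnerability scan using nmap's
# scripting engine (NSE). Targets, script category, and schedule are
# illustrative assumptions, not a recommended production setup.
import subprocess
from datetime import datetime

TARGETS = ["192.0.2.10", "192.0.2.20"]  # hypothetical in-scope hosts

def run_scan(target: str) -> str:
    """Run nmap with the 'vuln' NSE script category against one host."""
    report = f"scan_{target}_{datetime.now():%Y%m%d}.txt"
    subprocess.run(
        ["nmap", "-sV", "--script", "vuln", "-oN", report, target],
        check=True,
    )
    return report

if __name__ == "__main__":
    # In practice this would be triggered monthly by cron or a scheduler,
    # with results fed into ticketing for remediation.
    for t in TARGETS:
        print("report written to", run_scan(t))
```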
And now I’ve actually had multiple opportunities for code reviews: how to use tools like Amazon CodeWhisperer to come in and see whether any lines of code could open up a potential breach in an application the customer developed in house and already runs on its network. So how do we get more proactive, rather than reactive, in application development?
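As a concrete example of the kind of finding such a review surfaces, here is a minimal Python sketch: a query built by string formatting is injectable, and the parameterized version is the fix a reviewer would suggest.

```python
# The kind of finding an AI-assisted code review surfaces: a query built by
# string formatting is injectable; the parameterized version below is not.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def lookup_unsafe(name: str):
    # FLAGGED: attacker-controlled input lands inside the SQL string.
    # name = "' OR '1'='1" returns every row in the table.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # Suggested fix: bind parameters so input is never parsed as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(lookup_unsafe("' OR '1'='1"))  # leaks all rows
print(lookup_safe("' OR '1'='1"))    # returns nothing
```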
And then there’s how to leverage that SIEM tool and the SOC so that when a new employee comes in, they get stood up quickly and can immediately respond to and remediate incoming attacks. They get all the correlation data right in front of them, but with a large language model telling them: here are your first five steps to run, or, I’ve already traced all of this to find where the breach occurred. Getting that data up front so they can react immediately is perfect.
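A minimal sketch of that idea, assuming the OpenAI Python client and an invented alert format (neither is any specific SIEM's API): the correlated alert is handed to an LLM, which returns first response steps for the analyst.

```python
# Minimal sketch: hand SIEM correlation data to an LLM and get first-response
# steps for a junior analyst. Uses the OpenAI Python client; the alert format
# and model choice are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

alert = {
    "rule": "impossible_travel",
    "user": "jdoe",
    "events": [
        {"time": "2024-08-13T09:02Z", "src_ip": "198.51.100.7", "geo": "US"},
        {"time": "2024-08-13T09:41Z", "src_ip": "203.0.113.9", "geo": "SG"},
    ],
}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a SOC triage assistant. Given a correlated alert, "
                    "list the first five concrete response steps."},
        {"role": "user", "content": json.dumps(alert)},
    ],
)
print(response.choices[0].message.content)  # analyst's suggested playbook
```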

Who’s gonna win? Think about AI and how fast ChatGPT picked up so many users, and there are so many different engines out there. First off, are the engines themselves going to find vulnerabilities faster than we can fix them? And then, I know you’ve got a list of questions here. Walk us through how you would advise partners or TAs to have this conversation with their clients. Is it a business conversation, or does it have to be technical?
First question: the AI engines, the machine learning and deep learning algorithms behind all these AI-driven attacks we’re talking about, unfortunately move much faster than the human component. You can be as proactive as possible about patching equipment. Take Microsoft’s Patch Tuesday: there’s a lot of discussion about whether to patch immediately on Tuesday or wait a bit and see how those patches roll out.
We’ve all seen what happens when patches are thrown into the wild without effective QA, with the whole CrowdStrike incident. So it’s an open question how quickly you should be patching, when AI is already automatically looking for those vulnerabilities across every IP address it can see. To answer your question directly: yes, it’s very tough to keep up with an AI threat coming into the network. But luckily, that’s where MDR comes in: you have specialists behind you who patch the systems as proactively as possible, and who are also monitoring and watching for the AI tools coming in and trying to breach the network.
So the questions themselves, it’s a big business conversation.
I know we skipped over the slides quickly, but if you look at the numbers, and these surveys come from really reputable companies, there’s about a 40% decrease in average cost per breach.
The average data breach costs about $4.45 million globally for mid-market and enterprise customers, and that drops by about $1.76 million when they use AI tools, because AI responds rapidly and can effectively contain and remediate before a human can step in. After that, it’s more about figuring out how to fully remediate the network. It’s all about the response and how quickly you can execute it.
But if you’re talking to a CISO, a lot of them are actually very hesitant about AI, because what you’re essentially telling them is: no human touches this until the automation runs through, and then you go in and read what happened after the fact. That’s why the important question up front is: what is your level of comfort? And it’s an education:
let me pull in a subject matter expert who can talk the language with you about what’s happening. Then it’s the same talk track as with AI across all the other swim lanes. What easy, repetitive responsibilities are happening? How much research does somebody need to do to figure out a specific vulnerability that shows up as a CVE, or a specific threat that comes in?
What if the system already told you: this is what’s happening, and this is how you respond to it? You can see all the different processes and tasks we’re asking about automating here. We hit that home because that’s the kind of work a CISO needs to recognize their SOC analysts are doing day in and day out. Maybe we can offload a lot of it, so they don’t have to worry about that extra employee, or so they get help monitoring after hours, when threats typically happen. There’s a lot that goes into an AI discovery conversation. And one thing I definitely want to hit home for the partners out there: these are high-level business discussions, but bring in your Telarus SME and sales engineer to help from that agnostic perspective, or lean on some of the specialists we have today from Entirety and LevelBlue.

I love it. Thanks, Jason. So, I had the privilege of being part of the hiring process for one of our guest speakers, and Entirety is near and dear to my heart.
Tom, let’s talk a little bit about what you’re seeing today.
How is AI becoming part of the conversation, and what are you and Entirety doing to help protect clients?
Yeah. What I’m seeing is a lot of it comes out as “I want AI.” And a lot of those customers don’t understand that the foundation of AI is data. When you start having conversations around data and what that data is doing, what we look at first and foremost is where that data lives. A lot of those customers are just saying, hey, I’m gonna go buy a solution.
It’s very much like when security first came to the forefront of everyone’s mind: people go out and buy a product without understanding what they need to do to build a foundation before moving forward. So one of the key things we like to look at is where the data lives.
On top of that, the data is what the AI is going to look at and query against. Your data is the new gold, the new oil in my opinion, because everybody has valuable data. But how do you secure it? Where is it living? Having those high-level conversations about what the AI is going to do for the organization helps us shape the answer: should the data live in this particular place?
What is the data doing? And more importantly, to what you brought up earlier, how is the data being secured? Is it encrypted? Is it proprietary data that falls within a compliance framework?
Right? That’s where everything we can look at comes in, from managed infrastructure and managed security to managed data services, and lastly the managed compliance aspect. That’s where the building blocks of AI start, regardless of which large language model you use. And that’s the conversation I’m having with a lot of customers: these key things are what will empower you to hit the business outcomes you’ve set for the organization.
And when you get to the AI conversation, I feel like it’s being pushed down from the top.
Now a lot of people are similarly looking at it from “what do I do?”, and these are the key steps you want to look at. Right?
And I think the most unique thing we can help them with is understanding that AI is not a one-stop shop. There are multiple things you need to put in place in your organization to ensure a robust and secure AI deployment.
Love it. Thanks, Tom.

Aaron Russo and I have gotten to spend a lot of time together. AT&T Cybersecurity became LevelBlue. Next slide for me, Chandler. So, Aaron, let’s talk a little bit about what LevelBlue is doing from a cybersecurity standpoint. You have some great statistics here. How are you helping partners and their clients with the AI conversation?

Yeah, and we’re echoing the same message here: everyone wants the new toy on the block. Right? The next step that follows is: “I’m a little scared of what it can do, and how do I put guardrails around this technology?” We keep hearing the buzzwords; think of the evolution from the network and policy stack to SD-WAN, transforming into SASE, and we keep deploying newer pieces of technology.
Each new piece of technology is a risk vector. Right? And when we have a risk vector, we need to understand what it can do and what the possible exposure to the business or the company is. So, there are some fun stats in front of you, but if you want to flip to the next slide, here are some of the things we’re doing within LevelBlue and LevelBlue Labs.
Right? We have a unique vantage point into the threat analytics and threat intelligence under the hood of our MDR service. I’m sure people will be able to read this and take stock of it. But if you go to the follow-up slide, at the end of the day what we’re doing for our customers is following the AI Risk Management Framework.
So when we talk about mapping, measuring, and managing, we’re able to offer consultative services to evaluate for clients, to really get in there and see what’s happening. So the chatbot you want to deploy on your website doesn’t leak proprietary data. Right?
So we’re putting those guardrails in place.
We’re able to put checkpoints in to really ensure that if you are going to deploy the service and the technology, you’re safeguarded.
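As one illustration of what such a checkpoint can look like (a minimal sketch, not LevelBlue's actual implementation), an output guardrail can screen a chatbot's reply for sensitive patterns before it ever reaches a website visitor; the patterns below are assumptions:

```python
# Minimal sketch of a chatbot output guardrail: screen the model's reply for
# sensitive patterns before it reaches the user. Illustrative only; the
# patterns below are assumptions, not any vendor's actual rule set.
import re

BLOCK_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(sk|key)-[A-Za-z0-9]{16,}\b"),
    "internal_host": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),
}

def guard_reply(reply: str) -> str:
    """Return the reply if clean, otherwise a safe refusal plus an audit note."""
    hits = [name for name, pat in BLOCK_PATTERNS.items() if pat.search(reply)]
    if hits:
        # In production this would also raise an alert to the SOC.
        print(f"guardrail tripped: {hits}")
        return "I'm sorry, I can't share that information."
    return reply

print(guard_reply("Your account rep can be reached through our support page."))
print(guard_reply("Sure! The admin host is build01.corp.example.com."))
```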

That’s amazing. So, Jason, Tom, Aaron, what does the future look like for AI as we head into 2025?
Are things going to change? Will there be more regulations? People in the chat are asking about cyber insurance. We don’t know what the 2025 requirements will be just yet, but we’ve seen things like risk assessments, identity and access management, and stronger password requirements come in. Things that weren’t there before now carry more emphasis on compliance.
Where is AI going, and how are we going to have some of those conversations? Where will it be regulated? You’re seeing Gartner and Forbes all talk about the emphasis on AI. And even in our trends report, most customers say their biggest upcoming initiative is artificial intelligence.
Yeah. I think it’s going to start around governance: what tools your employees are using, because there’s a lot of accidental data leakage. AI isn’t just a threat coming in from outside; there are insider threats, where somebody uses ChatGPT or something like it and puts in confidential data to make their life easier. All of a sudden, the machine learning algorithm, the deep learning brain behind that ChatGPT-style software, is being trained on that data as well. Now somebody else can put in a prompt and potentially get back some of the confidential data you entered.
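As a sketch of one basic guardrail against that kind of leakage (an illustration, not a product feature), a thin wrapper can scrub obvious confidential patterns from prompts before they ever leave the company:

```python
# Minimal sketch of an outbound prompt scrubber: redact obvious confidential
# patterns before a prompt is sent to an external LLM. Patterns are
# illustrative assumptions; real DLP policies go far beyond regexes.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{16}\b"), "[CARD]"),
    (re.compile(r"\b[\w.+-]+@corp\.example\.com\b"), "[EMAIL]"),
]

def scrub(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before sending."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = "Summarize this: customer 123-45-6789 paid with 4111111111111111."
print(scrub(raw))
# -> "Summarize this: customer [SSN] paid with [CARD]."
```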
So a lot of regulation is going to come through. One thing I’d really like to see from an insurance perspective, and I suspect they’ll push more folks to use AI as part of their remediation tactics: when you look at the risk equation, how threat and impact relate to risk, what you’re really doing with AI and automation tools is minimizing impact through how quickly you can react. When you react very quickly, you limit how far that threat can spread across the network or through the rest of the business. Just by using those tools, you mitigate that up front.
So I think we’re going to see more and more of it, and I think we’re going to see MDR providers leverage AI more for automation, so they can spread the same number of SOC analysts across more customers, and we may actually see prices decrease across the board.

So, Aaron, Tom, same question. But if AI can combat AI, why do we need humans? Why do we need MDR?
Well, go, Tom.
You can go first. I was gonna say, we’re gonna have a battle over this one.
So the human element can never be taken out. Right?
Because if you think about it, AI does have a generative way of building things, but the human element is always going to need to be in place. And with MDR, what a lot of people don’t understand is that MDR has already leveraged AI for a number of years; it’s just that there are checks and balances, and I think checks and balances will be the most crucial thing. From what I’ve seen on the panel and in the chat, the question is: how is this going to affect us going forward? I think the way it affects us is that we get more governance. And for the governance piece, we will always have to have a human aspect.
And with the way the world is going, to Jason’s point about ChatGPT and being able to query off somebody’s sensitive data, we’re always going to be here as the analysis layer, watching what’s going on within that AI and ensuring it’s secure. AI battling AI has been going on for a while. But what we’re going to look at is: what are those trends? What are those algorithms?
What are those discrepancies? The human element is always going to be a big force to reckon with in AI, whereas AI itself is a tool, and a tool is meant to be used in a certain way. Where the rubber meets the road for the human element is going to be education.
Educating your staff on how to properly use a tool is just like a mechanic knowing exactly which tools in the bag fix that specific model of vehicle. So I don’t see the human element being taken out. I actually see the human element being added in even more as a safeguard, because this is a very unique time frame, with ChatGPT taking something that’s been around for years and really putting it into the mainstream. We’re going to face a lot more questions and concerns.

So we’re always going to be there. The security operations center is always going to be there. But to echo Aaron’s point, you have to look at those frameworks and ensure you’re following best practices within your organization.
That’s where we come in as subject matter experts: to help you make sure the underlying foundation is secure, and then to help you make sure nothing bad happens.
To stack on that, you have to think of this not as a full replacement but as complementary to the human element. Right? It’s there to streamline and make things more efficient, maybe to take over tasks that were minor or not desired.
We could throw some AI at those. But at the end of the day, we’re not trying to create the next Skynet. We don’t want something that says we’re replacing everyone else.
Maybe there are individuals out there who will try to run a full business on some kind of AI-powered backing, but you have to think of the evolution; it’s a bit of uncharted waters. As we talk about compliance and governance, it takes time before that starts hitting the politicians’ desks. So as this continues to evolve over the next couple of years, we’ll probably see even more guardrails and more conversations than you can imagine, just because most people in office, wherever they are, are so unaware of the technology’s capabilities.
So it’s going to take time to bubble up. And as things continue to evolve, I don’t think the human element is going to be peeled back much by any chance; again, AI will be tasked with the things that may not have been desirable to do in the past. So I think it’s complementary.
Love it. Doug, there’s some good questions in the chat. Which one do you wanna start with?

Well, I tell you, we do have some good questions.
Again, for everyone watching and listening, we’ve got Jason Stein and Jason Kaufman here from Telarus, along with Tom Houde from Entirety and Aaron Russo from LevelBlue, the former AT&T Cybersecurity group. Fascinating discussion so far. I think many of the questions are hitting on this idea that, as you mentioned at the beginning, AI is now kind of where cloud was many years ago. It was in use.
People just didn’t know about it that well. As we bring our customers and clients into a knowledge of what it is, how it’s being used, and where it’s being used, one of the reactions that keeps coming back is: okay, we see the need for security. We see the need for the guardrails and the regulations. But as you just mentioned, it may take regulators and Congress and others a while to get there.
What is the importance then, and what is the burden on individual customers and clients to get out in front of that now for their own organizations?
Yeah. Even Meg is noting in the chat that the regulations are coming; they’ve already been signed. So there will be a lot of things you’ll have to do, either for cyber insurance or to pass the compliance requirements you need to adhere to, and it’s going to be part of that framework conversation. Aaron, Jason, then Tom: what do you think is happening?
What are you guys looking to do?
Yeah. I think it’s all going to come back to the data and what’s being used to train those machine learning models.
Anything that’s identifiable. We’ve always talked about financial information, healthcare information, personally identifiable information; I think a lot of that is what’s going to be governed by these rules and regulations. We just saw the EU AI Act signed into law, and I think it went into effect the first of this month. It’s basically targeting the hyperscalers, Meta, and whoever else has big, large AI models, and it’s all about the data: what are you allowed to do with that data, and what telemetry can you get from it in order to run more efficiently, or whatever your goals are in using an AI model.
Unfortunately, we’re not talking about bad actors here, because they’re going to go and do whatever they want. But companies with reputations to uphold are going to follow a lot of these new regulations and compliance requirements as they come out.

So from an enterprise perspective, for the customers you may face, it’s about getting ahead of the game: some of this is already being enacted, so what’s the future of it? Are you following data protocols to make sure you comply with GDPR, CCPA, and any of the privacy guidelines that will probably be applied to these AI models in the future?
Make sure you’re following that up front so you don’t end up in a reactionary mode, having to come back and fix whatever happened. Really, it’s about knowing what your model is doing under the hood, which is now a big concern for a lot of these companies: they’re training models faster than they understand how those models actually compute. So get that in place up front, before you have to come back and do it again.
You bring up a great point. We’ve had a lot of conversations with deans of universities, and they now have students who are going to get their degree either in AI or in cybersecurity. Nobody’s telling these young people, the technical resources of tomorrow, that they also have to be experts in compliance. It’s not something they want to do. Aaron, what are you seeing when it comes to security, AI, and compliance?
To be very clear, I know a lot of the conversations we have around cybersecurity are sometimes reactive.
There’s some type of driver, something that’s already happened or occurred. Or, in this case, we’re talking about compliance and regulation.
To stay ahead of this, there are plenty of best practices, from the basic cybersecurity approach to a full framework, that we should always be implementing even if we’re not being told to from the government down. So if you have a client or a business looking to implement AI, your follow-up question should be: what extra steps are you taking to put some type of fence around this so it doesn’t do something you don’t want it to do? And a lot of times, again, back to the stats, there’s a lot of C-suite and a lot of private equity involved.
There are a lot of people saying, “I need AI,” because it’s the buzzword. But then the follow-up, and you see it in the numbers, is: “I’m starting to get a little scared about what this could do, especially if I don’t understand it to its fullest capability.”
So I’m seeing a lot more of: we need to get ahead of that, have that conversation early and often, and be proactive rather than reactive.
Aaron, that’s a great point. And Tom, you hinted at this earlier too when you said: before you can talk about what AI is or isn’t going to do to your data, where does your data live? Where should your data live, and how is it being secured?
In addition to where it lives, Cameron brought up a great point in the chat a minute ago, based on some of the conversations on X last night: we’ve learned in recent months how AI itself requires a huge amount of additional bandwidth and connectivity, all of these things we’ve looked at alongside cybersecurity and other forms of security for years.
So many of these things seem to be incumbent on the individual customer or client to prepare for now, before they venture too far into, as you called it, the buzzwords and the shiny things that go along with AI.

Yeah. Yeah.
Sorry, Jason.
I was gonna say, to add to that, Doug, you’re 100 percent right.
This is the time when I’m actually seeing more and more customers looking at this from the angle of where their data should live. And the reason is exactly those points: it’s a lot of power consumption, a lot of GPU capacity.
But it’s also: how do I outsource the risk?
That’s the big thing, outsourcing the risk, because this is new for a lot of companies, and I think we’ve all talked about that: it’s being pushed down. How do you make sure you’re going about this the right way? And with that risk, there’s a tolerable level of risk you can take on as an organization.
But the one thing you can’t risk is your brand reputation, because if something goes awry, or if you go on Google Bard, or Gemini I should say, and start searching to see what AWS is doing, I guarantee you’re going to find information that has been confidentially leaked to that particular LLM.
Making it a strategic conversation about where, how, and why is going to be the focal point, because people in the chat have been bringing it up: we are going to get guidelines, we are going to get these things. The hard part will be keeping up with the changes, because we’re at a pivotal moment with AI, with security, with how it all should go.
And to Jason’s point, there’s the compliance aspect. Right? A lot of these kids, our future leaders, are thinking, “I want to do AI,” but they’re not looking at the larger scale of what AI is and what AI can touch. So I think everybody’s main point in having these conversations is not to look at AI as a standalone, but to help educate on why you use AI and how.

That is, I think, the most impactful thing you can do as a trusted adviser.
Love it. So I know we’re running out of time. Doug, I wanted to wrap up.
One more I wanted to get to, Jason.
There was an interesting question here from Andy, and we see this in a number of instances.
We want to be able to take advantage of AI. We want to outsource the things that require security and so on. But we’ve got a lot of very enthusiastic users within organizations who like to use ChatGPT and take advantage of the various tools out there. Andy’s asking how complicated, and I would add how desirable, it is to block individual users from using certain AI tools while still making them available at the corporate or business level.
Yeah. So I’ll wrap it up, and then I’ll let Jason chime in on that.
There are companies completely embracing artificial intelligence, and there are companies blocking it: they block all their employees from using it, and it’s not allowed within their organization. Look, cybersecurity and artificial intelligence are massive conversations.
There’s more data coming out, and we’re going to see more security coming out. We have some great suppliers that will continue to evolve with cybersecurity, especially when it comes to artificial intelligence. We’re putting a lot more resources in place for you within Telarus University: you can go into your back office and get into the solution views, where cybersecurity is getting a refresh from Nate and his team, and we also have the AI cybersecurity solution view. Jason, where do you see it when it comes to letting users embrace AI? Are companies that block their employees from using AI going to be left in the dust?

So can you block it? Yes. There are tools now, like a layer 7 firewall, that recognize traffic at the application level: this is identified as Microsoft Office traffic, for example.
It’s no longer a certain packet size running on a certain port, and you don’t have to dig through layer 3 or layer 4 firewall logs anymore. The firewall recognizes the application itself and tells you what it is, and then you can define what you want to do with it and who should have access to it. The same thing is happening with AI tools: ChatGPT, Anthropic, Perplexity, Google Gemini.
We could go all day on all the different tools out there; they’re coming out quicker than we can even catalog them. But these firewalls have identification features that say: yes, I know that one.

That’s ChatGPT, or that’s OpenAI through a Microsoft tenant. Then you can specify which users have access to which tools. It’s the same as applying a zero trust network access (ZTNA) approach across the different AI tools. So can it be done?
Yes. Are there tools out there to make this an easy button? Yes. We’re seeing that come across the board. But I think the most important thing you can do up front, before you figure out which tools to put in your network and what to limit, comes back to that education piece.
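To make the blocking mechanics concrete, here is a toy sketch of that per-user, per-application policy idea: classify traffic by destination hostname and apply a group-based allow list. The domains and group policies are invented for illustration and don't reflect any specific firewall's configuration.

```python
# Toy sketch of per-user AI-tool policy, the idea behind app-aware firewalls
# and ZTNA: classify traffic by destination hostname (e.g., TLS SNI), then
# allow or block by user group. Domains and groups are illustrative assumptions.
AI_APPS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Anthropic Claude",
    "www.perplexity.ai": "Perplexity",
    "gemini.google.com": "Google Gemini",
}

# Which groups may use which AI apps (illustrative policy).
POLICY = {
    "marketing": {"ChatGPT"},
    "engineering": {"ChatGPT", "Anthropic Claude"},
    "finance": set(),  # no external AI tools allowed
}

def decide(user_group: str, sni_hostname: str) -> str:
    app = AI_APPS.get(sni_hostname)
    if app is None:
        return "allow"  # not a recognized AI app; other firewall rules apply
    allowed = POLICY.get(user_group, set())
    return "allow" if app in allowed else f"block ({app} not permitted for {user_group})"

print(decide("engineering", "claude.ai"))    # allow
print(decide("finance", "chat.openai.com"))  # block
```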

Allow users to use the tools, but make sure they recognize the potential threats that come from using them and the impact they can have as internal employees. What data should go in there? What data shouldn’t? Which tools do you recommend at an organizational level?
I think that’s more important than bringing in tools to monitor that they follow it. So accountability and training come first.

This is great. We’re going to provide answers to so many more of your questions, and I know questions will keep popping up. Look for some new things within Telarus University. And if you haven’t caught some of our past recordings, Jason Lowe does such a good job of teaching, training, and coaching partners on how to have a business conversation around artificial intelligence, especially alongside cybersecurity. So thanks to our panelists. Doug, I’ll pass it over to you.

Thanks, everybody. Terrific presentation. Thanks, Jason Stein, Jason Kaufman, Aaron Russo, Tom Houde.