Modern Cyber with Jeremy Snyder - Episode 12

Ryan Smith of QFunction

In this episode of Modern Cyber, Jeremy sits down with Ryan Smith, founder of QFunction, to explore how combining AI with human expertise can streamline anomaly detection in cybersecurity.


Podcast Transcript

Jeremy at Firetail (00:01.902)
Hello, welcome to another episode of the Modern Cyber Podcast. As always, I am your host, Jeremy Snyder. I am really thrilled to be coming back to you with another episode today. Just a reminder, take the time to like, share, sharing is caring, all that good stuff. So, you know, all the things you're gonna hear: subscribe, rate, review, all those things. If you can, if you're interested, if you're so inclined, please do so.

I've got a really interesting guest to bring on to today's episode. Somebody who's working in this space that I know is top of mind for almost everybody. And I think we're legally required to mention AI in every episode right now. But I've got somebody today who specializes in AI in cybersecurity. So I know we're gonna have a really...

enjoyable conversation. I'm joined today by Ryan Smith, the founder of QFunction. Now, Ryan has more than eight years of expertise in the cybersecurity field, with experience working with organizations like NASA's Jet Propulsion Laboratory and Pfizer. QFunction really aims to help businesses improve their cybersecurity posture and resilience by providing tailored threat hunting and user behavior analytics solutions powered by AI. Ryan has worked in both threat hunting and red team efforts at

NASA JPL, as mentioned earlier, and he's worked with Fullstack Academy, where aspiring cybersecurity professionals can learn ethical hacking and penetration testing techniques. Excuse me. We're going to talk about AI in that field as well. Ryan has a BS in computer science, the GCIH Certified Incident Handler certification, a certificate in the SANS Applied Data Science and Machine Learning for Cybersecurity Professionals course, and a certificate in the Deep Learning Specialization by Coursera.

Ryan, that is quite a background. Thanks for taking the time to join us today.

Ryan Smith (01:45.087)
Absolutely, Jeremy. Thanks for having me. And for those who are watching on video, please excuse the camera. My camera is a little bright. I look a little ghostly, so just bear with me on that.

Jeremy at Firetail (01:54.188)
Yeah, a bit washed out, but you know, the focus is not on Ryan's facial appearance today. The focus is on the expertise that you're bringing to the conversation, which really couldn't be more relevant to where we are at this point in time. A couple of years ago, I was having a conversation with somebody and we were talking about kind of the general state of cybersecurity. And we both made the observation that, you know, when I started my career, a lot of security things were hardware.

You know, we had hardware firewalls, we had hardware email filtering devices, et cetera. But in the modern age, where most things are built in the cloud, hardware solutions are kind of a thing of the past. We're dealing with virtualized objects and we're dealing with data. And where I think everybody finds themselves today is drowning in data, including in the cybersecurity realm. And I think this may be the time and place for AI in cybersecurity. So Ryan, I mean, just to start out,

Is that your experience when you go in and work with customers? Are they kind of drowning in data?

Ryan Smith (03:00.222)
Yes, that's the idea. So what we're seeing right now in cybersecurity, just across the board, is that cybersecurity teams are expected to do more with less. They are expected to do their daily activities, whether it be incident response, whether it be SIEM administration, whatever they have to do, on top of being proactive in terms of looking at their collected data. Now,

Most teams simply don't have the capacity to actually do that. Everyone is already wearing multiple hats in cybersecurity organizations. So we're at the point where we have no choice as cybersecurity professionals but to embrace AI. That's where things are going. There are attacks that are still succeeding, regardless of how big your organization is. You can have the great firewalls, the EDR products, whatever, yet people are still being breached. And there's a problem with that, because there's so much investment within cyber where you do all these things, yet it's still not working.

So I think that the next frontier for that is AI. And I get that it's a buzzword, it's everywhere, AI washing is the term for it. But there will have to be a point where we as cybersecurity professionals have to embrace that. I'm a proponent of AI, not just in cyber, but for our culture as a whole. So I believe that there is definite benefit in using it within cyber.

Jeremy at Firetail (04:07.209)
Yeah, yeah. Yeah, yeah.

Jeremy at Firetail (04:25.959)
Well, you mentioned something there that I want to dig into a lot more, which is really, you know, this concept of AI washing. And I agree with you, it's a buzzword, it's overhyped, et cetera. And, you know, every company that I talk to, they're trying to figure out what is their quote unquote AI strategy right now. And I think, you know, cyber teams are no exception to that as well. So I'm curious.

you know, there's always, in any of these hype cycles, there's going to be the "it's going to solve everything for us" moment, right? And I think that's kind of where we are right now, maybe. But then you kind of find the concrete use cases where you really do see a lot of value. And one of those areas where I see a lot of potential value from AI in general is on things like parsing huge volumes of logs. And I think, you know, aside from all the data that we're creating through

Ryan Smith (04:56.797)
Yes.

Ryan Smith (05:11.902)
Yes.

Jeremy at Firetail (05:15.942)
everything that we're doing with digital apps and mobile apps and everything like that, we're also just creating an insane number of log files, and a lot of cyber teams are collecting those. So talk a little bit about how you think about applying AI to log files, or let's say, when you start an engagement with a customer. Because I think a lot of your focus is on those kind of log aggregation environments, right?

Ryan Smith (05:23.581)
Yes.

Ryan Smith (05:37.758)
Yes.

Correct, yes. So the idea is that, for those who may not be familiar, what happens is that we have all these disparate data sources. We have our vulnerability management data. We have our endpoint data. We have our firewall data. Now, these are all made by different vendors. So what we do as cybersecurity professionals is that we put them in what's known as a SIEM, a Security Information and Event Management system. And you can think of that as just a giant database of security data. And what we do is that we collect that data into that centralized spot so that we can find correlations between certain

Jeremy at Firetail (05:50.341)
Yeah.

Ryan Smith (06:08.527)
parts of that disparate data. Now, what's happening is that we as cybersecurity professionals, like I said, already don't have time to look at all the data that we're collecting. And what ends up happening is that we don't really look at that data until we're doing some type of incident response or something that requires us to go back into that data to actually look at it. Now, the problem with that is, when it comes to modern cybersecurity, best practice is to proactively look at that collected data.

So again, we have no time to do that. So how can we look to automate that? And that's where AI comes in because AI, depending on how you actually use it, can be very good at summarizing large amounts of data and finding trends in that data, or mainly in terms of cybersecurity, finding the anomalies in that data. Because we as cybersecurity professionals are really concerned with that. Most data sources have a normal and we're concerned when things deviate from that normal. Because

Jeremy at Firetail (06:58.756)
Yeah.

Ryan Smith (07:08.527)
hopefully threats aren't normal in your organization. Like, yeah, hopefully, you know, it's kind of a once-out-of-the-blue thing. So that's what we're concerned with, right? But again, too much data, therefore we're looking at AI solutions to actually help us bring those anomalies to light.
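
To make the deviation-from-normal idea concrete, here is a minimal sketch in Python. It assumes scikit-learn, and the feature columns (event counts, failed logins, bytes out) are hypothetical stand-ins for log-derived features, not QFunction's actual pipeline.

```python
# Minimal sketch: learn "normal" from log-derived features, surface outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per host-hour: [event_count, failed_logins, bytes_out]
normal = rng.normal(loc=[200.0, 2.0, 5e6], scale=[30.0, 1.0, 1e6], size=(1000, 3))
weird = np.array([[210.0, 40.0, 9e8]])   # burst of failed logins + huge egress
X = np.vstack([normal, weird])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)      # lower score = more anomalous

for idx in np.argsort(scores)[:3]:       # top candidates for human review
    print(f"row {idx}: score={scores[idx]:.3f}, features={X[idx]}")
```

An isolation forest is just one unsupervised option; the point is that whatever deviates most from the learned baseline gets surfaced for a human to verify.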

Jeremy at Firetail (07:26.052)
And just in a practical sense, when you sit down with a customer, talk us through what the process is. Because, you know, most customers, if they're large enough and they're sophisticated enough to have a SIEM, they've probably already got some baseline detections in place, right? So they've already got some kind of core detections, like looking for anomalous network activity, or looking for anomalous logins and things like that. Is it the case that they're reaching the point where their anomalies aren't good enough and they're still getting breached?

Ryan Smith (07:40.828)
Yes.

Ryan Smith (07:44.668)
Right. Yes.

Jeremy at Firetail (07:54.756)
Or is it the point that they're looking for new attacks that they've never thought about in the past? Or is it some combination? And how do you start to think through that process with a customer?

Ryan Smith (07:59.804)
Yes.

Ryan Smith (08:04.797)
Absolutely. That's a great question. So first, yes, to answer your question, we are seeing new attacks all of the time, right? It's the reason why we're still getting breached. And when it comes down to it, when it comes to cybersecurity, it's a lot easier to hack a person than it is an actual system, right? And that's why social engineering is always going to be the biggest attack vector. So when it comes to a lot of modern threats, people think that you're creating some really custom malware, with someone behind a keyboard in a black hoodie.

No, it's much easier to just email someone and say, hey, can you give me the key to your organization? If you frame it correctly, then they'll end up giving it to you, right? It doesn't matter how secure your organization is. You can have the latest technology, you can do everything right, but it only takes one person to actually give away the keys or sensitive information to your organization. So that's the type of threat that we're actually seeing. And what's interesting about those is that...

Those attacks aren't using anything that stands out. Those are using just basic, hey, you know, we emailed this person and they gave us the key, therefore we're able to impersonate this user and assume the privileges that they have. If I can assume the privileges that you have, now everything that I'm doing, quote unquote, is legitimate. And any type of security tool may not pick that up, because they're looking at, I don't know, PowerShell, they're looking at, you know, various

Jeremy at Firetail (09:23.425)
Right.

Jeremy at Firetail (09:29.473)
Yeah, yeah.

Ryan Smith (09:31.036)
viruses or things that have traditional signatures to them, right? So if detection software is trying to look for that, it's not going to see that quote-unquote normal behavior that most users do. So I think that's where AI is going to actually shine there. So what ends up happening for a QFunction engagement is, first we dictate, okay, what are you trying to look for? Are you trying to get...

Jeremy at Firetail (09:34.56)
Yeah, yeah.

Jeremy at Firetail (09:47.872)
Yeah.

Ryan Smith (09:56.285)
a better look at your network logs? Are you trying to get a better look at your user systems? And then we go from there, right? From there, I ask, what is important to your organization? Is it this specific system, or is it these specific users? Because my opinion is that we as cybersecurity professionals need to focus on what is important to your organization, right? There's something that's known as the defender's dilemma, where we as cybersecurity professionals have to guard everything, and attackers only have to find one hole. There's a complete imbalance there, right? So how do you do that? That's not fair. We're always going to be at the disadvantage there. So what we need to do as cybersecurity professionals is identify, hey, what are the crown jewels of our organization? What is important to us? And then...

Jeremy at Firetail (10:26.335)
Yeah. Yeah. One thing, yeah.

Ryan Smith (10:45.181)
fortify the defenses around those important things. Now, that's easier said than done, because like I said, we already don't have the time. So what QFunction does is attempt to help out with that process, to try to put more, I guess, modern detections around those things that may not be caught by traditional security solutions.

Jeremy at Firetail (10:53.184)
Yep, yep, yep.

Jeremy at Firetail (11:07.647)
And so as you work through that process, right? So you sit down with the customer and the customer says, hey, maybe it's our source code, maybe it's our customer database, maybe it's whatever, our lab equipment, whatever it is that they say, different types of organizations, et cetera. And you then say, okay, well, I've got such and such telemetry data or such and such log data coming around those sources. At what point is the first opportunity to inject AI into that?

into that activity where you're trying to say like, okay, these are my crown jewels. Let's look at these logs. Like where is it? And you know, on a practical basis, where does it get started?

Ryan Smith (11:42.908)
That's a great question. So it's right after the log collection, right? It's literally right after that point. Because what happens is, there are different AI algorithms, there are different AI models, that can help you find weird activity in those logs, right? So let's say that you identify some sensitive asset or some crown jewel in your organization. You want to get the logs from that system. Because effectively, those logs will dictate the normal of that system.

Jeremy at Firetail (12:11.262)
Yep, yep.

Ryan Smith (12:11.613)
So then from there, that will dictate it. So as long as you have a good amount of logs for that, then you can start creating the AI around that. And there are different ways that you can actually do that. There are such things as autoencoders, which are a type of neural network architecture within AI. You have GANs, generative adversarial networks, which are another way that you can do anomaly detection. There are types of clustering algorithms that allow you to do the same thing. It just depends on what your problem set is.

Now, when it comes to AI, depending on how much data you have, one algorithm may be better than the other. So when it comes to cyber, because we're dealing with large amounts of data, the deep learning methods of AI fare much better. And that's where you come across your more deep learning models, like I said, autoencoders, generative adversarial networks, in order to find those anomalies in that data. So to answer your question, it's right after you do that log collection, like, yeah, that you can

Jeremy at Firetail (13:09.339)
Right after, got it, got it. Yeah, yeah. I'm kind of curious, when I think about SIEMs, I tend to think about three types of data that feed into them, kind of classically. And, you know, in a modern world, a lot of companies have hooked up much more than that by now, but classically, I would have thought of my SIEM as getting my endpoint data, so what's happening on all my endpoint devices; my user activities, right? So, you know, Jeremy logs in, Jeremy opens his email, et cetera, et cetera.

Ryan Smith (13:11.454)
start injecting that AI process.
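
As a hedged sketch of the autoencoder approach mentioned above: train a small network to reconstruct normal feature vectors, then flag inputs it reconstructs poorly. This assumes PyTorch and synthetic stand-in data; real log features would need extraction and scaling first.

```python
# Sketch: autoencoder anomaly detection via reconstruction error (PyTorch).
import torch
import torch.nn as nn

torch.manual_seed(0)
normal = torch.randn(2000, 8) * 0.5        # synthetic "normal" feature vectors
anomaly = torch.full((1, 8), 4.0)          # an obviously out-of-pattern row

model = nn.Sequential(                     # encoder -> bottleneck -> decoder
    nn.Linear(8, 4), nn.ReLU(),
    nn.Linear(4, 2), nn.ReLU(),
    nn.Linear(2, 4), nn.ReLU(),
    nn.Linear(4, 8),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(300):                       # learn to reconstruct normal data
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

with torch.no_grad():
    err_normal = ((model(normal) - normal) ** 2).mean(dim=1)
    err_anomaly = ((model(anomaly) - anomaly) ** 2).mean(dim=1)

# Anything reconstructed much worse than the normal baseline gets flagged.
threshold = (err_normal.mean() + 3 * err_normal.std()).item()
print(f"threshold={threshold:.4f}  anomaly_error={err_anomaly.item():.4f}")
print("flagged:", bool(err_anomaly.item() > threshold))
```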

Ryan Smith (13:19.228)
Yeah, right.

Ryan Smith (13:29.564)
Yes. Right. Yes.


Jeremy at Firetail (13:37.562)
And then I think about my network traffic, right? I'm curious, like you've gone through this exercise with a number of customers by now, where does AI perform best across those three data sources?

Ryan Smith (13:47.932)
Yeah.

So it would be network data, because what's nice about network data is that it's very structured, right? So when you look at firewall logs, you know, you get the same type of logs every single time. You have the source, the... yeah, exactly, right. The number of bytes sent, the number of bytes received, all that stuff, right. So when it comes to structured data, that's where it performs the best, because you know what to expect. And when it comes to finding anomalies in that, you're looking at numbers, right? So if you're seeing some IP that is sending more bytes

Jeremy at Firetail (13:52.283)
Really? Okay. Yeah, yeah.

Jeremy at Firetail (13:59.642)
Right, right. Source IP, destination IP, yeah, port, protocol. Yeah, yeah. Yeah.

Ryan Smith (14:19.726)
than normal, or receiving more bytes than normal, that says, hey, you should probably take a closer look at this, right? Now, when it comes to endpoints, that's a little harder, but still possible, because now you're trying to do effectively log anomaly analysis on these data sets. And then from there, you have to do a little more finagling of the actual algorithms, but the code still works the same, obviously. But to answer your question, it works very well with network data. But again, it can work very well with traditional log data, whether it be

Jeremy at Firetail (14:21.113)
Yeah.

Jeremy at Firetail (14:41.978)
It's

Ryan Smith (14:49.582)
you know, syslog data from your Linux systems or Windows event logs from your Windows systems, just depending on whatever your endpoint is, and you can apply it to that, right? So it just depends on how you do it. But long-winded short answer, it's the network logs. Yeah.
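
To ground the "more bytes than normal" check, here is a toy per-source-IP baseline using pandas. The column names and flows are invented for illustration, not a real firewall schema.

```python
# Sketch: flag flows far outside each source IP's own byte-count baseline.
import pandas as pd

flows = pd.DataFrame({
    "src_ip":    ["10.0.0.5"] * 6 + ["10.0.0.9"] * 6,
    "bytes_out": [1200, 1100, 1300, 1250, 1150, 1190,   # steady host
                  900, 950, 870, 910, 880, 9_500_000],  # sudden huge transfer
})

# Per-host mean and standard deviation, then a z-score per flow.
stats = flows.groupby("src_ip")["bytes_out"].agg(["mean", "std"])
flows = flows.join(stats, on="src_ip")
flows["zscore"] = (flows["bytes_out"] - flows["mean"]) / flows["std"]

# Anything well outside that host's baseline goes to an analyst to verify.
print(flows[flows["zscore"] > 2][["src_ip", "bytes_out", "zscore"]])
```

A real engagement would compute baselines over much longer windows and handle hosts with too little history; the z-score here is only the simplest possible "deviates from its own normal" test.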

Jeremy at Firetail (14:59.033)
Okay.

Jeremy at Firetail (15:04.857)
Got it, got it. But along those lines, I mean, one of the things that I worry about, or I kind of wonder about, in a cloud-first world: we deal with API security, we're looking at API traffic all the time, and we see API logs, and they have an element of network in them quite a lot. But then we're also looking at the function level above, which is to say what was requested, what was returned, et cetera. But one of the things that we notice from an anomaly detection perspective is that,

Ryan Smith (15:15.259)
Yes.

Ryan Smith (15:21.595)
Yes.

Ryan Smith (15:29.275)
Correct.

Jeremy at Firetail (15:34.232)
you know, the first time you see something new, that's by definition an anomaly. So, you know, I'm whatever company and the first time I get a customer from Taiwan, this is my first customer from Taiwan, and they try to purchase something and boom, it's an anomaly, right? Because it's the very first time. And by the way, we're living in a world of cloud where every cloud provider has a range of IP addresses that are pretty fluid, right? And also, you know, I might connect from cloud provider A today and have

Ryan Smith (15:37.883)
Yes.

Ryan Smith (15:46.235)
Yes.

Ryan Smith (15:50.075)
Right. Yes.

Ryan Smith (15:56.475)
Yes. Right.

Jeremy at Firetail (16:02.583)
IP address 1.2.3.4 and tomorrow 5.6.7.8, right? And so all of this kind of stuff changes so frequently. I've generally had the opinion that kind of general-purpose anomaly detection is super noisy and prone to a very high rate of false positives. So A, is this your observation? And B, if this is also your observation, how and where is AI going to improve that situation?

Ryan Smith (16:04.348)
Yes. Exactly. True.

Ryan Smith (16:15.579)
Yeah. Yes, it is.

Yes.

Yes.

Ryan Smith (16:29.213)
That's a great question. So yes, like you said, anomaly detection can be very noisy, right? And what QFunction does is that it attempts to put the human element in that, because when it comes down to it, we're in the industry of looking at that data, finding the anomalies, and then verifying them, right? So there is a human element involved in that. Because, like you said, there's going to be a lot of false positives there. That just comes with the nature of anomaly detection. By nature, it's just hard. Because, like you said, cloud provider IPs

are fluid, and things change within the actual data. So when it comes to any threat hunting engagement, what we do is take those anomalies and then actually verify them. So if you can go into the logs and say, hey, you know, these are more bytes than the normal, or these are weird IP addresses, it would be nice to be able to say, hey, we looked at these manually, to be able to say, this checks out or this doesn't check out, right? So that's the idea behind it. Getting that professional cybersecurity

standpoint on looking at these actual logs, right? So not only do we do anomaly detection, we verify the anomalies that are actually found, because again, we're looking for those things that normally don't happen. And then we do that by using various threat intelligence sources, looking at, you know, IPs that may already be blacklisted, or we can do more research, you know, just depending on what's actually out there, to find out, hey, is this actually bad or not, right? So it's more that human element that we

inject, because what QFunction does is that we tend to focus on businesses that are small and medium-sized, because like you said, those large companies, yeah, they're dealing with a lot of stuff, right? But when it comes to those small and medium-sized companies, often you'll see teams of like one cybersecurity person, two cybersecurity people, and they're overwhelmed. Because, technically speaking, in a lot of those businesses, you have IT people who are the cybersecurity people who are the help desk people. So, you know, it kind of

Jeremy at Firetail (17:59.446)
Okay.

Jeremy at Firetail (18:09.814)
Yeah, yeah.

Jeremy at Firetail (18:26.933)
Yeah, yeah.

Ryan Smith (18:27.598)
they're overloaded as it is, so we try to make that job a little easier for them. So hopefully that answers that question.
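
A simple sketch of that verification step: cross-check flagged IPs against a threat-intelligence blocklist before escalating to a human. The file path and IPs below are hypothetical; a real engagement would pull from live threat-intel feeds.

```python
# Sketch: verify anomalous IPs against a known-bad list before escalating.
flagged_ips = ["203.0.113.7", "198.51.100.22", "192.0.2.14"]  # example IPs

# Hypothetical local blocklist, one known-bad IP per line; in practice this
# would come from a threat-intelligence feed or service.
with open("blocklist.txt") as f:
    known_bad = {line.strip() for line in f if line.strip()}

for ip in flagged_ips:
    if ip in known_bad:
        print(f"{ip}: known bad, escalate")
    else:
        print(f"{ip}: not on the list, queue for manual review")
```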

Jeremy at Firetail (18:32.885)
Got it. Yeah, I mean, it should probably also provide some reassurance to people that human jobs are not all going away. We're not all being replaced by AI and robots, right? So, yeah. Nah, yeah.

Ryan Smith (18:42.683)
Exactly, and we are so far away from that too. Like, yeah, there are limits to AI as it is right now. It will get better. Like, yeah, it will get better. And I think there was a second part to your question, if I remember correctly; I've been talking. Okay. Yeah.

Jeremy at Firetail (18:55.412)
no. Well, I mean, you know, the second part was kind of like, you know, how is AI going to improve anomaly detection so that it's less noisy or so that it's at least more targeted and lower rate of false positives?

Ryan Smith (19:08.284)
That's a great question. So when it comes to that, you're going to have to start merging it with well-known threat intelligence data. So when it comes to, for example, Google, I think that they recently released their Gemini model that merges with threat intelligence. And what that does is summarize a lot of data and be able to tell you what's there. So I think that's the next step towards getting more

Jeremy at Firetail (19:23.348)
Yep. Yep.

Ryan Smith (19:31.227)
higher fidelity findings as opposed to just general anomaly detection. That's where I see it going anyway. And I think as time goes on, these tools become a little more mature in finding those things.

Jeremy at Firetail (19:33.715)
Yeah. Yeah. Yeah.

Jeremy at Firetail (19:42.899)
Yeah, it's interesting. I mean, on the point of threat intelligence, in many ways, threat intelligence is very similar to one of the things you mentioned early on, or at least has some parallels to figuring out your crown jewels, right? Like if threat intelligence tells you that, okay, you know, all the bad guys are after, you know, cloud-based Windows systems right now, then that's where you need to focus your defenses. But then at the same time, you can also say, well, actually, none of my crown jewels match that. So, you know, you can actually...

Ryan Smith (19:53.947)
Yes.

Ryan Smith (20:01.659)
Yes.

Ryan Smith (20:09.339)
Yes.

Jeremy at Firetail (20:11.314)
look a little bit elsewhere or allocate resources to places where it continues to matter to you. So there's a couple of things. You mentioned something in there, again, a generative adversarial network, is that right? Or GenAI adversarial network? So what is that and how can people use it and where would it actually be applicable?

Ryan Smith (20:20.987)
Mm-hmm.

Yes. Exactly, yeah.

Ryan Smith (20:32.668)
That's a great question. So generative adversarial networks are a type of deep learning method in order to do various things, right? So in my case, I use it for anomaly detection. So the way that it works is that you have what's known as a generator and a discriminator. So there's two parts to it. So the generator learns your data and is able to produce really good fake data.

based off the data that it was actually trained on, right? So at the end of the process, you have a generator that is really good at producing data that looks like data from the actual data set. Now you have the discriminator. And what the discriminator does is, it is trained to distinguish fake data from the generator and real data from the actual data set. So what's really cool is that as the AI trains, the generator gets really good at producing fake data, and the discriminator gets really good at identifying that fake data. So at the actual end of the process, you have a discriminator that's really good at identifying fake data, and it can be used as the anomaly detector.

Well, yeah, so it's a very novel concept of actually doing it. So it's a supervised approach to a traditionally unsupervised problem. Yeah.
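
Here is a hedged sketch of that generator/discriminator loop, with the trained discriminator reused as an anomaly scorer. It assumes PyTorch and toy 2-D data; production GAN-based detectors (AnoGAN-style approaches, for example) are considerably more involved.

```python
# Sketch: train a tiny GAN, then reuse the discriminator as an anomaly scorer.
import torch
import torch.nn as nn

torch.manual_seed(0)
real_data = torch.randn(512, 2) * 0.3 + 1.0   # "normal" activity near (1, 1)

G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for _ in range(500):
    # Discriminator step: real data labeled 1, generator output labeled 0.
    fake = G(torch.randn(512, 4)).detach()
    d_loss = bce(D(real_data), torch.ones(512, 1)) + bce(D(fake), torch.zeros(512, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call its output real.
    fake = G(torch.randn(512, 4))
    g_loss = bce(D(fake), torch.ones(512, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    print("D(normal point):", D(torch.tensor([[1.0, 1.0]])).item())   # higher
    print("D(odd point):   ", D(torch.tensor([[-3.0, 5.0]])).item())  # lower -> anomaly
```

One known caveat: at full convergence a discriminator's output tends toward 0.5 everywhere, so practical GAN-based scoring usually combines discriminator features with reconstruction error rather than relying on the raw score alone.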

Jeremy at Firetail (21:44.72)
Okay. Okay.

Jeremy at Firetail (21:54.094)
Yeah, this point about supervision, by the way, is something that I think is kind of not well understood, but actually quite important. You know, you hear about this kind of, I think, semi-unfounded risk around current LLMs, large language models in particular, around things like hallucination, where they continue to just kind of go off the rails if you let them spin and spin and spin and keep generating text. And at the same time, like,

Ryan Smith (22:00.955)
Yes.

Ryan Smith (22:11.002)
Yes.

Jeremy at Firetail (22:22.927)
that assumes no supervision. That assumes that nobody is kind of correcting it ongoing, saying, no, no, it should be more like this, more like this, more like this. And as anybody who plays around with ChatGPT or any one of these models will tell you, a little bit of a nudge in any direction can provide a lot of correction, right? I just, you know, was playing around with generating a presentation on a particular topic not too long ago, and,

Ryan Smith (22:25.433)
Yes.

Ryan Smith (22:31.193)
correct.

Ryan Smith (22:41.209)
Yes.

Jeremy at Firetail (22:49.966)
you know, I asked for the content and it gave me the content. And then I said, well, you know, that content is not in a format that I want; I wanted it in more of a bullet-points-for-slides format. Boom, gave it to me. And then I said, well, no, no, remove all of the kind of words at the beginning, because it started with words like observe this and understand that and so on. It's like, well, remove all the descriptive verbs at the beginning, and it did that, right. And so, anyway, these little incremental nudges, and so...

Ryan Smith (22:59.705)
Yes.

Ryan Smith (23:08.281)
Yes.

Jeremy at Firetail (23:16.014)
I guess the point you're kind of making is that as you go through these exercises, you kind of train again, and as you do, you're helping your systems get better, right?

Ryan Smith (23:22.841)
Yes.

Yes, exactly. That's the idea. And let's go a little bit into that unsupervised direction. So what we have right now, we already have AI solutions in the market. And one of the big ones is user behavior analytics, right? And that's the idea of learning what's normal for users and systems, and then telling you what is not normal for those users and systems. Now, the problem with a lot of user behavior analytics solutions is that they are unsupervised, as in you don't know what they're learning, right? You know, you can't really tell them what is your normal. So what ends up happening is that you deploy UBA solutions in very large organizations, and because they're so large, they don't end up learning anything, or rather, they end up learning things that are not useful. So I think that's one of the biggest issues going forward for a lot of these AI solutions: how do you tell it what is normal for your organization, and how do you make it scale properly

Jeremy at Firetail (23:57.292)
Right, right.

Right.

Jeremy at Firetail (24:24.204)
Hmm.

Ryan Smith (24:25.9)
for those larger organizations, right? And from my experience, yeah, I've worked with various UBA products, and what ends up happening is you effectively have threats that are composed of the anomalies that it finds. You'll find anomalies and then base threats off that.

Jeremy at Firetail (24:28.972)
Yeah.

Jeremy at Firetail (24:40.394)
Yeah, yeah.

Ryan Smith (24:44.697)
There was a time where one of the products that we used had a million anomalies, and it composed 500,000 threats based off those anomalies. I don't care how big your team is, you were never triaging all of those. It's just too big. So what I believe is that we have to do it in that more targeted approach, which is what...

Jeremy at Firetail (24:57.898)
Yeah, for sure. Yeah.

Ryan Smith (25:03.385)
which is what QFunction offers. But I think that as time goes on, these algorithms will get better. But the whole point being that unsupervised learning still does have a ways to go in terms of being more effective within organizations.
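
As a toy illustration of the UBA idea, learning one user's normal and flagging deviations from that user's own baseline; the field names and events are invented for the example.

```python
# Sketch: per-user baseline (typical login hour) with a z-score style check.
from statistics import mean, stdev

history = {"alice": [9, 9, 10, 8, 9, 10, 9]}   # alice's past login hours

def is_anomalous(user: str, login_hour: int, z_cutoff: float = 3.0) -> bool:
    """Flag a login hour far outside this user's own baseline."""
    hours = history[user]
    mu, sigma = mean(hours), stdev(hours)
    z = abs(login_hour - mu) / sigma
    print(f"user={user} hour={login_hour} z={z:.1f}")
    return z > z_cutoff

is_anomalous("alice", 9)    # typical morning login -> not flagged
is_anomalous("alice", 3)    # 3 AM login -> flagged for review
```

The scaling problem described above shows up exactly here: with thousands of users and dozens of signals per user, deciding which baselines are meaningful is what separates useful UBA from a flood of anomalies.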

Jeremy at Firetail (25:18.09)
Well, this whole kind of interaction between the human and the AI and the model and the training, all of it really brings to mind one of the questions that I think we'll probably close today's interview on. What are the ethics around all of this? And what are some of the biggest ethical concerns that you see as somebody working with this stuff day to day?

Ryan Smith (25:36.729)
Yeah, that's a great question. So what's nice about cyber, what's nice about how QFunction does it, is that we work directly on your data. You know, we're not learning things outside of what you actually provide us. I think that the bigger concern about ethics is actually outside of the realm of cyber, right? Two things being truth and profit. And I'll get into both of those. Truth: we as a society have lost trust in each other. Like, yeah, we just have. We don't trust our government.

Jeremy at Firetail (25:54.442)
Okay.

Ryan Smith (26:06.683)
We don't trust our banks. We don't even trust each other by default. We think that everyone's lying. So when it comes to AI being able to do deepfakes and just create false information, it's going to be very hard going forward to identify what is true and what is not. And when it comes to critical issues, whether it be elections, whether it be, you know, just daily things, you know, someone making a fake video of you saying things that you didn't actually say, that's going to be very hard to identify going forward,

Jeremy at Firetail (26:10.249)
Yeah, yeah.

Ryan Smith (26:36.603)
because my understanding right now is that there's no good way to tag artificially generated content. It's one thing to be able to watermark a photo. It's a completely different ask to watermark a video, an AI-generated video. And going forward, I think that that has to be done in order to better secure people. And then, when it comes down to it, now you have to worry about AI bias based off the data that it was actually trained on. Can you explain the AI, how it makes its predictions? And that's

Jeremy at Firetail (26:48.777)
Yeah, yeah.

Ryan Smith (27:06.523)
going to be kind of hard going forward too. But I think that we in the community are aware of those issues now, so it's better that we know them now, so that we can kind of address them going forward. And from my understanding, I don't remember a technology where literal nations are coming together in order to figure out how to best regulate this. There was, I think it was a treaty or something along those lines, an agreement between the USA and the UK to collaborate on how to best, I guess, moderate AI, right? So that's going to be very important going forward.

Jeremy at Firetail (27:25.161)
Yeah.

Jeremy at Firetail (27:34.601)
Yeah.

Ryan Smith (27:36.443)
like yeah, it tells you how big this actually is. So trust is going to be, I think, the biggest issue going forward. Second is profit. Now, obviously, you know, like businesses want to make profit, but I believe that...

If that's all we're using AI for, we're doing it wrong. I do believe that there is definite use in using it outside of just, you know, money making, right? Like, yeah, I think that we do have a technology that can really help our society move forward. And I'm not saying that we're going to turn into some utopian society or anything, but I do believe that it can make people's lives a little easier at the very least, right? But if all we're doing is just focusing it on profits, like, yeah, and just making more money and more money, it's like, I think that we're

doing it wrong. And especially now, where we have all these incoming young people who are most likely going to college, studying AI, studying computer science. And, you know, I'm just hoping that when they get to the end of that journey, they actually have jobs outside of just big tech, right? Like, yeah, I would hate to see that everyone has to work for either Google, Microsoft, or Facebook, right, in order to use this. I would love to see smaller entities come up to be able to employ these

Jeremy at Firetail (28:40.069)
Yeah, yeah.

Ryan Smith (28:51.675)
young students, right? So that's what I'm hoping QFunction can be able to hit at some point, you know, whether it be QFunction or any other cyber or AI company that can come up and, you know, really hire these people. Because I would hate to see all these people put in all these years and time, only to reach a job market that can't hire them because there's just not enough jobs, which is what we're seeing right now. The economy will get better. It always does. But I mean, you know, going forward, I think that's going to be very

Jeremy at Firetail (29:13.859)
Mmm.

Ryan Smith (29:21.595)
important. So that's just me ranting. Like yeah, those are my issues about AI going forward.

Jeremy at Firetail (29:24.611)
Yeah, yeah.

Well, look, I think those are two really great points on the ethical side, and certainly something to be aware of. I'm just curious, if you take like a standard engagement, no specific one, how much does it come into play, though, in your day-to-day work in cybersecurity? Because the kinds of data that we're talking about, network logs, UBA, I tend to see these as very...

Ryan Smith (29:39.159)
Yeah. Yes.

Ryan Smith (29:48.823)
Yes. Right.

Jeremy at Firetail (29:53.633)
I don't know what the word is, but they're super straightforward, kind of bland data logs where I, for instance, wouldn't see a lot of opportunity to even influence it for profit or bias purposes.

Ryan Smith (29:58.071)
Yes. Right. Yes.

Ryan Smith (30:07.223)
Correct, agreed, I completely agree with that. And that's what's nice about it. We're not really dealing with human data, outside of just saying, hey, you're not supposed to be collecting data that you're not supposed to be collecting. We depend on the actual organization to do that. Hopefully that organization is not collecting things that they're not supposed to. If not, we can actually call that out. But like you said, if you're dealing with network data, system logs, there's going to be little to no bias in that. We're not really concerned about that. So hopefully that answers the question. Yeah.

Jeremy at Firetail (30:13.249)
Yeah.

Jeremy at Firetail (30:17.825)
Yeah, yeah.

Jeremy at Firetail (30:31.809)
Yeah, yeah. Yeah. Yeah, look, I hope that's the case. Although I will say it's pretty interesting. I don't know if you've noticed this today: we happen to be recording on a day when one vendor, who shall remain unnamed, got called out for something in their terms of service, where it was discovered that they were actually ingesting direct message conversations on their platform for training their AI model, with a really, really roundabout opt-out process. Like,

not a web form, not anything, you know, you had to go get a very specific piece of information, email it to a super random email address with a particular subject line. I can't imagine, by the way, I wouldn't want to be the human monitoring that email inbox today. Has this crossed your radar today, Ryan?

Ryan Smith (31:03.575)
Yes.

Ryan Smith (31:20.951)
Yes, I have seen that. And again, that's gonna be one of the problems going forward, right? What do you collect, or how do you best handle user data, especially when it comes to AI, right? Because now we're seeing new avenues of attack for these AI models, right? To be able to extract the training data that it was trained off of, right? Can you poison AI data? Can you extract data from those models that you shouldn't be able to extract, right? So that's gonna be an ongoing field now. And there are now new companies that are

red teaming LLM models, which is very unique, right? You know, being able to say, hey, can you make the LLM model reveal things that it shouldn't? So yes, that's going to be absolutely a problem going forward. Like, yeah, we'll see how that works out.

Jeremy at Firetail (31:53.311)
Mm-hmm.

Jeremy at Firetail (32:05.693)
Interesting, interesting. Well, certainly no shortage of things to look out for. Certainly no shortage of innovation. Certainly no shortage of data in the world and no shortage of opportunities to try to make things better with AI. Ryan, thank you so much for taking the time to join us today on Modern Cyber. It's been a really enjoyable conversation. Super, super relevant topic. And for anybody who's out there, where's the best place if they want to reach out to you, learn more about what you're doing, where's the best place for them to get in touch?

Ryan Smith (32:11.287)
Yes.

Ryan Smith (32:15.095)
Yes.

Ryan Smith (32:33.143)
Absolutely, go to QFunction.ai. Everything that you need is on that site. I have a blog; you can see how AI can work. And what QFunction aims to do is to not hide the process. I think that a lot of vendors kind of adopt that black-hoodie persona and all that. I think that it should be a little more open. So all the blog posts are there. I even provide the code that's used. So if there are any machine learning people out there listening to this, feel free to look at it. Critique me, you know, because at the end of the day, I'm just trying to get better. QFunction is trying to get better.

Jeremy at Firetail (32:44.54)
Mmm.

Jeremy at Firetail (32:48.444)
Yeah, yeah, yeah.

Yeah.

Jeremy at Firetail (32:58.908)
Yeah, yeah.

Ryan Smith (33:03.097)
So please go to QFunction.ai; you'll find everything that you need.

Jeremy at Firetail (33:07.356)
Awesome, and that's such a great philosophy to have around it. We have the same here at FireTail. We're very much developing in the light, developing in the open, being as open and transparent about everything we're doing as possible. So kudos to you for having that same kind of approach. Ryan Smith, thank you so much for joining us on Modern Cyber.

Ryan Smith (33:13.719)
Yes.

Ryan Smith (33:22.039)
Thanks so much, Jeremy. Great to have me. Thank you. Bye.

Jeremy at Firetail (33:24.668)
All right, bye bye.
