Modern Cyber with Jeremy Snyder - Episode

Dan Grzelak of Plerion

In this episode of Modern Cyber, Jeremy is 'down under' in sunny Australia for an in-person chat with Daniel Grzelak. Dan is the Chief Innovation Officer at Plerion, an agentless cloud platform that allows clients to identify, prioritize, and remediate the risks that matter most.


Podcast Transcript

Hello, welcome to another episode of the Modern Cyber podcast. For a change, we're actually recording in person for once, which I am excited for, and we're here today with our guest, Daniel Grzelak. Daniel, thank you so much for making the time to join us.

Appreciate it, excited to do it.

Awesome. I've got a few things I want to get through today, but just before we kick things off, why don't you start with a little background on yourself, what your current role is with Plerion, and then maybe some of the past few positions you've been in, and how you got into cybersecurity?

Okay. Well, I'm extremely weird. My current position is officially Chief Innovation Officer at Plerion, which is a cloud security company, and there I do mostly technical security research, evangelism based on that research, and also trying to get some innovative stuff, some differentiation, into our product. Before that I was CISO at Linktree, and before that, head of security at Atlassian.

Okay. I mean, those are two organizations that I think of as being very cloud forward and operating at scale, so it must have been a lot of learning along those journeys and those stops along the way.

Yeah, absolutely. Atlassian was the company that really introduced me to the cloud — it runs on AWS, and that's where I got started with learning everything I could about it.

Gotcha, gotcha. It's really interesting, some of the stuff that you guys have been doing recently — I think on AWS specifically. The last couple of posts that I've seen from Plerion, around discoveries and findings that you've had in security research, have been AWS specific, right? For those who haven't seen the articles, there was one on extracting account IDs, and then another one on extracting metadata of another type — from instances, or from tags, or, remind me — from policies?

From policies, right.

Okay. So first of all, how did you find these things, and second of all, what led you to even go look for them?

So, I'm not the kind of person that will find vulnerabilities in things — like deep technical vulnerabilities. I'm just not smart enough for that. But I love figuring out how I can manipulate things that work the way they're intended, and get them to do something that's unintended and interesting. So, for example, with the policy one, where you could basically read another company's tags. Tags are a way you organize your stuff — you put the department, et cetera, on a resource. And the way that you can actually do that is by abusing an IAM policy in AWS. Because typically, when you're accessing something, you can write a policy that checks something about you — like your username, or your department, or something like that. But there are also a couple of ways that you can check something about the thing that you're accessing, and in this case you could check: does it have the appropriate tag? And there's a special condition —

Right, the IAM conditional statement.

Yeah, exactly. There's a special condition where you can start matching things character by character. For instance, does the tag value start with an 's'? And then does it start with 'st'? And so on and so on. And so you can enumerate the value of the tag.

That's really interesting.
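The character-by-character trick Dan describes can be sketched as a loop against a yes/no "oracle". In AWS, the oracle is an API call gated by a policy condition such as `"StringLike": {"aws:ResourceTag/Name": "st*"}`; in the sketch below the oracle is simulated locally, and the alphabet and tag value are invented for illustration.

```python
# Sketch of character-by-character tag enumeration. In real life the
# "oracle" is an AWS API call whose success or failure is governed by an
# IAM condition like: "StringLike": {"aws:ResourceTag/Name": "st*"}.
# Here the oracle is simulated locally; the alphabet and the secret tag
# value are illustrative stand-ins, not real AWS data.

import string

ALPHABET = string.ascii_lowercase + string.digits + "-_"

def make_oracle(secret_tag_value):
    """Return a function answering: does the tag value start with `prefix`?"""
    def tag_oracle(prefix):
        return secret_tag_value.startswith(prefix)
    return tag_oracle

def enumerate_tag(tag_oracle, max_len=64):
    """Recover a tag value one character at a time via prefix queries."""
    recovered = ""
    for _ in range(max_len):
        for ch in ALPHABET:
            if tag_oracle(recovered + ch):
                recovered += ch
                break
        else:
            break  # no character extends the prefix: value fully recovered
    return recovered

oracle = make_oracle("staging-db-password")
print(enumerate_tag(oracle))  # -> staging-db-password
```

Each guess costs one API call, so a value over a small alphabet leaks in at most `len(value) * len(ALPHABET)` requests.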
I know from my own experience — and look, I started at AWS in 2010. In those days there were no VPCs; most companies had one AWS account, by the way, with everybody using the root user. There was no IAM — there wasn't a possibility for me to give a 'DG' user versus a 'Jeremy' user versus whatever. But tags were the main way that we, let's say, differentiated workloads from each other. Everybody was living in this one big control plane — and network plane as well — and all we really had to go on, to differentiate my instances in this massive fleet of EC2 from yours, was tags. And the reason I bring that up is that I think that habit of using tags to mark workloads, and various things about the workloads, continues to this day, but it has also evolved in a way where I've seen companies storing secrets in tags, and using, let's say, a boot script to read in the tags via the instance metadata calls. And that might give me an API key that I then go use to access a bucket or something. So there's actually potentially valuable information that you could extract or enumerate.
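The boot-script pattern just described — an EC2 instance reading its own tags through the instance metadata service — can be sketched like this. The tag names and values below are invented for illustration, and the fetch function is abstracted so the logic runs offline; on a real instance this only works when "instance metadata tags" is enabled.

```python
# Sketch of a boot script reading its own tags through the EC2 instance
# metadata service (IMDSv2). This only works when "instance metadata tags"
# is enabled on the instance; the paths mirror the documented layout under
#   http://169.254.169.254/latest/meta-data/tags/instance
# `fetch` abstracts the HTTP GET so the logic can be demonstrated offline.

def read_instance_tags(fetch):
    """List the instance's tag keys, then fetch each tag's value."""
    keys = fetch("/meta-data/tags/instance").splitlines()
    return {key: fetch("/meta-data/tags/instance/" + key) for key in keys}

# Offline demonstration with a fake metadata service; a real boot script
# would issue token-authenticated IMDSv2 GETs instead. The ApiKey tag is
# exactly the anti-pattern discussed: anyone who can read the tags can
# read the credential.
fake_imds = {
    "/meta-data/tags/instance": "Name\nApiKey",
    "/meta-data/tags/instance/Name": "web-1",
    "/meta-data/tags/instance/ApiKey": "definitely-not-a-secret",
}
print(read_instance_tags(fake_imds.get))
# -> {'Name': 'web-1', 'ApiKey': 'definitely-not-a-secret'}
```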
Yeah. So AWS explicitly says, in a big red box in their documentation: do not put confidential information in tags. They're just meant to be organizational elements, like we talked about — department, billing code, whatever. But you're right, we've seen customers put secrets in there. The one mitigating factor, probably — well, two, really. One is that you have to know the name of the tag. So if the tag is called something really obscure, it's very hard to find, unless it's in code somewhere and you find it. But 'aws secret key' is not a super obscure tag name.

Sure, or 'password' or something.

Exactly. The other thing is that the resource has to be accessible by the attacker in some way. So the most common and easiest example is, if you make an S3 bucket accessible — let's say you publish your website on it — then now you can enumerate the tags of that bucket, et cetera.

Yeah. But there are also situations where — and I think this maybe ties to the second piece of your research — the marketplace relationships, establishing something of a trust relationship between account A and account B in order for me to sell you something via the AWS Marketplace. Would that make resources available for query, or not so much?

So, one of the other ways that you can expose a resource is allowing a role in your account to be assumed. If you can assume a role into someone else's account, then you can enumerate the tags on it.

Yeah. And I feel like there are maybe more instances of that than people realize, considering the number of SaaS solutions that are designed to integrate with your AWS environment, and probably do create these kinds of trust relationships, with AssumeRole capabilities and things like that.

Exactly. And then it all depends on how you build your applications — if you expose a queue... there are all sorts of weird and wonderful assets that you intentionally expose to AWS.

Yeah. One of the interesting things — I mean, you made the point that there's this big red box that says don't put secrets in tags. We've also known not to store passwords in text files on operating systems for a long, long time, but that happens X thousands of times per day. I'm pretty sure not everybody either sees or heeds that warning about not putting secrets in tags.

Absolutely, yeah. This is just one of those security behaviors that training hasn't really broken through on. So in general, I think what's happened with the AWS ecosystem is that it's proved out that training and documentation don't work at scale. If you think about the S3 problem: for a long time, people were exposing S3 in a variety of ways — making buckets public, making them available to all authenticated users, bucket policies with no IP restriction, whatever. And it was all documented — don't do it this way — but the interface sort of led you down that path; it was the easiest way to do it. And eventually AWS took action at scale and really changed their user interface to make it all off by default, all blocked by default, and you had to really try very hard to make a bucket public.
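The change Dan refers to is S3's Block Public Access feature, now on by default. As a rough sketch, the four flags below mirror the documented `PublicAccessBlockConfiguration`; with boto3 and credentials available, the same dict would be passed to `put_public_access_block`. The helper function here is illustrative, not an AWS API.

```python
# Sketch of the S3 "block public access" guardrail discussed above. The
# four flags mirror S3's PublicAccessBlockConfiguration, which is now
# enabled by default on new buckets. With boto3 configured, this dict would
# be passed to s3.put_public_access_block(...); no AWS call is made here,
# and block_public_access_config is an illustrative helper, not an AWS API.

def block_public_access_config(enabled=True):
    """Build the four-flag configuration S3 uses to block public access."""
    return {
        "BlockPublicAcls": enabled,        # reject new public ACLs
        "IgnorePublicAcls": enabled,       # ignore any existing public ACLs
        "BlockPublicPolicy": enabled,      # reject public bucket policies
        "RestrictPublicBuckets": enabled,  # restrict public cross-account access
    }

print(block_public_access_config())
```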
Exactly, yeah — even though they had the documentation, and even —

Oh, and you've had five control mechanisms for eight years now that can make a bucket non-public.

Exactly. And so I think this has been proven out again — the S3 example is just one, but it's been proven out over and over in AWS: if it's easy, people will just do it.

Yeah, yeah. And the other place that I find this kind of anti-security behavior manifests itself again and again is in debugging and troubleshooting, primarily in non-production environments. I'm a developer, I'm working on something that's not working. I could painstakingly go through the IAM policy, permission by permission, to try to figure out what is blocking it — or I could grant myself star, get it to work again, and move on with life.

Absolutely. And now AWS has tools to help you do exactly that — IAM Access Analyzer will tell you which policy you need — but it's hard.

Yeah, IAM is really, really hard. I mean, I was working at AWS when IAM was introduced, and I remember thinking at the time, oh, this is great. But then about a couple of months later, I remember thinking: I don't know if we properly understood the complexity that we were about to unleash on the world — I say 'we' like I had anything to do with it — that the IAM tool was about to unleash on the world. And there's a Slack community that I'm in where there was a great quote that I screenshotted a number of years ago, and it was something like: 'Every time I work with IAM, I want to rip my hair out. I would pay somebody literally anything they asked to make this problem go away.'

Yeah. The beautiful thing is you can pretty much do anything you want. The terrible thing is... yeah.

So you guys have published these two articles. People can find those on the blog — blog dot... or is it... I can't remember.

Okay, good — on the Plerion blog, for those of you who are watching or listening, so do check that out. I'm curious — Plerion is maybe a couple of years old as an organization right now?

A little bit, a little bit, yeah.

Okay, a young company, a young company, entering a cloud security space that — you know, I started working in cloud security in 2016, and I would say at the time it was an early stage to work on cloud security. Customers generally didn't understand cloud security; security teams that we went and talked to didn't understand cloud security. We would get a lot of questions like, where's my firewall? We would get a lot of questions like, what agent should I be installing on all of my instances? Things like that. And I would say that the understanding of cloud security is very different from what it was eight years ago, but by the same token, there are any number of companies in the cloud security space, and some of them very large — you've got the likes of Palo Alto with the Prisma suite, and Check Point, and on and on down the list. So I'm really curious, from your perspective: what makes Plerion different? What's different about the approach that you're taking toward cloud security?

Sure. So first, it starts with our mission, which is to simplify cloud security. One of the things we've already talked about is just how insanely complex IAM is, but everything in the cloud is complex. AWS has hundreds of services — no one person can possibly understand that — and each service may have ten, fifteen, twenty configuration items, some of which, by the way, counteract each other.

And, you know, some have a default deny but then an allow. So yes, there's the complexity on a resource-by-resource or service-by-service basis as well.

And then you've got the complexity of the stuff that you're building in the cloud. You've got the complexity of all the vulnerability management, and where's your data, and what your permissions are, et cetera. It's just really complicated.

It's a lot, yeah.

And what we found is that you log into some of the older tools and you just get a list of all the problems that you have, right? And now you're expected to figure out: what do you do? I don't know. And so our mission is to simplify that process — to help you get to the top things that you need to do today, the top things that you need to do after today, what's going to reduce your risk the most right now, basically — and help customers along that journey to mature their cloud security presence.

And when you think about figuring out the things that are going to have the greatest risk-reduction impact — I'm sure there's some secret sauce there, which I'm going to ask you about — I'm curious, do you look at it from the standpoint of what has the greatest blast radius, or what has the easiest fixability, or is it some combination thereof? How do you think about that prioritization question?

Right. So we use this concept of attack paths, and assets at risk. An asset at risk might be your production database —

A customer database, an S3 bucket with something, my production RDS instance, whatever.

Yeah, exactly. And then we build potential attack paths. So, starting from the edge — to, say, a vulnerability sitting inside a Lambda function that is accessible through API Gateway — can that, eventually, through trust relationships or through other things, go and touch that asset at risk, that data at risk? And if it can, then we want to break that chain. We present that whole attack path to the user and give them an option to fix some part of it, or break that chain in some way.
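The attack-path idea Dan describes — walk from internet-exposed entry points through trust relationships to see whether a crown-jewel asset is reachable — can be sketched as a simple graph search. The resource names and edges below are invented for illustration; a real engine would derive them from actual cloud configuration.

```python
# Toy attack-path search: resources are graph nodes, and a directed edge
# means "an attacker who controls A can reach B" (network access, an
# assumable role, etc.). All names and edges are invented for illustration.

from collections import deque

edges = {
    "internet":         ["api-gateway"],
    "api-gateway":      ["lambda-fn"],         # public API fronts the function
    "lambda-fn":        ["lambda-exec-role"],  # vulnerable code runs as this role
    "lambda-exec-role": ["customer-db"],       # role can read the database
}

def attack_paths(edges, start, target):
    """Breadth-first search for every simple path from start to target."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            paths.append(path)
            continue
        for nxt in edges.get(path[-1], []):
            if nxt not in path:  # avoid revisiting nodes (no cycles)
                queue.append(path + [nxt])
    return paths

for p in attack_paths(edges, "internet", "customer-db"):
    print(" -> ".join(p))
# -> internet -> api-gateway -> lambda-fn -> lambda-exec-role -> customer-db
```

Removing any edge on the path — patching the Lambda, trimming the role, closing the gateway — "breaks the chain" in exactly the sense used above.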
So that's really interesting, because breaking the chain is not a response that I've seen from a lot of solutions. What I've seen from a lot of solutions is: okay, here's the attack path, here are all the implicated elements — maybe that vulnerability, maybe the IAM role that links the instance to the RDS, or maybe the ACL between VPC one and VPC two, whatever that thing is — but then it's really up to the user to figure out what the remediation is, and what remediation steps they want to take. Are you saying that you guys intelligently suggest the most effective remediation to the attack path, or how do you think about that?

So, yes and no. We're still working on that part, but the thing we do is prioritize the things within that attack path. So you can see the vulnerabilities that are in that attack path, and you get the worst one at the top, for example, et cetera. We're still working on that part, but absolutely, the idea is to make it super simple for the user so they don't have to make decisions. If they want to, they can see everything — and maybe for their organization it doesn't make sense to patch that vulnerability, for a reason; maybe they want to change the trust relationship on the Lambda instead.

Do something else, trim down the IAM role, whatever. Yeah, got it, got it. I think, to me, that sounds very much like — well, not very much, but there are parallels to — the vulnerability management problem that we as a cybersecurity industry have been trying to eliminate for twenty-plus years. I find it super depressing, by the way, that twenty years ago, when I started in IT — to kind of date myself; actually more like twenty-five — the average time that a vulnerability lived on a system was more than six months, and it's still the case. And when I see organizations turn on vulnerability management for the first time, they're always overwhelmed by the number of vulnerabilities that exist in their environment, and they have no idea how to figure out which ones they need to tackle. Is there a parallel there in what you're doing in cloud security?

Yeah, exactly. So part of cloud security is vulnerability management of the stuff in the cloud, and so absolutely — we prioritize the assets at risk, and then we prioritize the vulnerabilities on those assets at risk.

Okay, yeah. There's a lot there — there's a lot of data that has to be correlated to calculate that attack path, at least from my own understanding of AWS. So I think that's a really powerful solution, and probably something that a lot of organizations aren't going to immediately understand the first time they see it. So when you think about walking somebody through that attack path, do you go step by step, explaining what's the link between step one, step two, step three?

Yeah, absolutely. And I mean, they can see it visually.

Visually, sure. But they may not understand how thing one connects to thing two — why a non-obvious IAM role, or a secondary set of permissions that is attached to an IAM user, might accidentally create a secondary exposure that they were not aware of.

Yeah, exactly. So we describe all of the relationships. For example, if something might be accessible through a NACL, we'll say that there's network access available through this relationship. Another thing might be that a role can be assumed, and so we'll describe that relationship. But the idea over time — again, going back to the simplicity — is that we want to make all of this simpler and simpler. We abstract all of that complication and give the user the simplest explanation that they can get, and then, if they want to, they can press a button and see the policy and dig into it.

Yeah. And how do you think about automated remediation as part of this whole world? Is it something that you guys do, or something that you offer but customers have to turn on, or do you think customers just aren't quite there yet?

It's something we discuss a lot — whether we should offer it or not. But personally, I feel like it's a bit of an antipattern. In principle, what we want to do is shift all of that stuff left, right? Whether it's build the right policy at the start, or make sure that the container doesn't get into the registry, or make sure that the vulnerable code isn't vulnerable in the first place — or never gets committed to prod.

Yeah, exactly.

And if it does, you want to find it and establish a baseline so that it never happens again. Or you might build a service control policy to prevent that kind of thing ever happening again in the future. So it's sort of like — if it does happen, we don't want it to be a crutch where you just auto-remediate everything over and over and over. We want it to be: fix your environment systemically so it never happens again.
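A service control policy of the kind Dan mentions acts as an organization-wide "never again" guardrail. As a rough sketch, the statement below denies the S3 actions that would weaken public-access blocks; the action names are real S3 actions, but the policy itself is a hypothetical example, not one from the episode.

```python
# Sketch of a service control policy (SCP) as an org-wide guardrail: deny
# everyone in member accounts the ability to weaken S3 public-access
# blocks, so the misconfiguration can never recur. The action names are
# real S3 actions; the policy as a whole is a hypothetical illustration.

import json

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyWeakeningPublicAccessBlock",
        "Effect": "Deny",
        "Action": [
            "s3:PutBucketPublicAccessBlock",   # bucket-level setting
            "s3:PutAccountPublicAccessBlock",  # account-level setting
        ],
        "Resource": "*",
    }],
}

# An SCP is attached to an OU or account via AWS Organizations; here we
# just render the JSON that would be attached.
print(json.dumps(scp, indent=2))
```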
But we do discuss it. What I've found is that customers, in a discussion, will say they want that thing, but when it comes to actually putting it in place, they very rarely actually use the auto-remediation — for, you know, making sure production doesn't go down; they're worried about the risks, et cetera.

Did you ever work in an environment that used a web application firewall?

Yes.

Did you have the experience that only twenty percent of the rules that you in theory wanted to implement actually got turned into blocking mode in production?

Yeah, absolutely.

What you just described captures that essence to me. And I've heard this from organizations around the world, by the way — this is not geographic-, industry-, or company-size-specific. It's that nobody wants to be the person responsible for the one false-positive block in production that prevented an e-commerce checkout, a successful partner transaction, or what have you. And yet I feel like, in some way, organizations don't learn to stop making those mistakes unless they actually put constraints on them. I'm curious — you worked at two large, fast-moving organizations. How did you think about that in those contexts? Did you have to try to put up these walls to force correct behavior?

No. I was lucky — part luck, part by choice. The executive teams in both of those, both Linktree and Atlassian, really believed in security, and that fed through the entire organization. So if we found something that was wrong, we would go back and fix it. I think where something like a WAF is really, really good is that first-level triage response. Like, something's bad and it'll take a day or two to fix it — let's just put something in place to block it temporarily before we do the proper fix. That use case worked really well for us.

That's interesting, because that's not one of the primary blocking use cases that I've observed in the customers that I've dealt with. The customers I've dealt with that are able to really implement a blocking rule in production — it's because it's such a universal truth that it's really, really easy. It's like: block all traffic from North Korea. Easy, right? That's universal, one hundred percent of the time — there is no valid business case, ever, where we would want to allow something. But I find, beyond that, it gets very fuzzy very fast.

Yeah, absolutely. And the thing you find with that specific technology, web application firewalls, is that bug bounties show over and over and over again that clever researchers will go find a way around your rule, because the rule is so static.

Yeah. Look — not to flip the conversation, but we work on API security at FireTail, and one of the questions that we run into very often is: why can't I just solve my API security problem with the web application firewall? And to exactly your point, I think it's been proven I don't know how many times — anybody interested, look on our blog for 'why WAFs aren't enough,' or something like that is the title of the article — and we link five instances from the last six, maybe seven months by now, that prove demonstrably that there are easy workarounds. It could be as simple as, you know, VPN, cycle IP address, whatever, but more often than not it's: oh, actually, it's a business logic flaw; it's not a demonstrably wrong type of call or something that the WAF can pick up. So yeah, I'm consistently surprised that they're not more universally implemented, but also that people go to them and think of them as a solution. I just see this kind of circular logic flaw where you can't implement it in production with the blocking that you want, but then you think it's a solution — one that is demonstrably proven not to solve the problem. Why are we having this conversation? So anyway, a little bit
of a tangent there from my side. I want to come back to something from your past experience, which is around incident response and incident communication. There's something that we've observed in the API security space, which is that APIs are really complex, in a way, because they kind of sit on a network, so they have some network exposure; they run on top of infrastructure, so there's infrastructure correlation; they front business logic and application calls; and they usually also front data sets. So you have all these moving parts around them, and when there is a flaw or a breach around an API, organizations really struggle to understand what happened, and what the scale and scope of the breach is. We do things from the product side to try to help simplify that, but I'm curious — as somebody who's worked at some of these organizations operating at scale, as a practitioner — how did you think about the whole incident response process, and, importantly, communicating that to key stakeholders, whether that's the rest of the executive team, or customers, boards, partners? What's important in that whole process, and how do you approach the investigation, communications, reporting?

Right — and I know there's a lot to unpack there. I think one of the most important things is that security incident response differs dramatically from reliability incident response. The most important factor in a reliability incident — like, your website's gone down — is to get it back up as quickly as possible. Security is kind of the inverse of that: you sacrifice speed to try and deeply understand what's going on, so you don't make any bad decisions. So, for example, you might find that an actor got into your environment through something, and your first instinct might be to go immediately block that thing. But maybe the actor has been in there for months; maybe they have persistence mechanisms in a different direction; and what you've done is told them that they now have to exfiltrate as quickly as possible. So what you're really trying to do is scope the incident before you start making decisions about what to do. The other thing you're trying to do is communicate as fast as you can to your customers what's happened and what they can do to mitigate their risk. So, for example, if their passwords have left the building, those passwords are going to be used by that actor, or some other actor, in some other environment. And we see this over and over again: one SaaS company gets compromised, and the username and password data immediately gets used to attack every other SaaS company with those same users.

And hackers have automation, so they're going to try this everywhere.

Absolutely. So what you want to do is tell the customers as fast as possible, while also making sure you have the information to be able to explain it to them — and remediate that incident, contain that incident, all of those fun things.

There are a couple of things in what you said that I want to dig deeper on, because
some of it runs counter to what I think a lot of us have experienced as consumers of organizations that have been breached. First of all, there's got to be business pressure to restore services — I know that's a real thing that everybody faces on a day-to-day basis — and you're saying that, in a way, you don't want that reaction; you want the time to go do the incident research.

You know, it's a matter of trade-offs, or a matter of where on the spectrum you are. You absolutely want to go fast — you don't want the actor rummaging around your infrastructure — but where in a reliability incident it's the number one thing you concentrate on, here it's not necessarily the number one thing. You want to know exactly what's happened. And you see, over and over, organizations that rush to put out incident communications: they'll be vague, they'll make some statements about what happened, and then a week later they'll come back and have to retract those statements and explain why they were wrong, and it becomes completely embarrassing, and you have this loop over and over.

I can think of some examples as well, yeah.

Yeah, exactly. So you want to avoid that, and you want to avoid the actor taking actions that are going to be detrimental to the whole incident response, by the fact that they know that you know they're in there.

But at the same time, we're also entering an era where there are regulatory requirements around incident reporting, right? In the US, I think 96 hours — four days — is kind of what's becoming the standard now. And I'm curious — not to call out any of your past employers — but is that a reasonable time frame? It seems to me that there's still a chance that you're still doing forensics; you may not understand how deeply you might have been compromised, and for how long, at that point in time.

Yeah, you're absolutely right. I think the intent is right. The pressure on organizations to move fast is right — you want to tell customers, because the other end of the spectrum is you take forever, don't do anything about it, and then don't notify customers about everything that's happened. So all of that is right. But on the flip side, sometimes you don't know after three or four days what happened. I've been involved in incident response where it took weeks to unravel everything. So there's definitely a balance, and sometimes you've just got to keep investigating.

So in that case, when you're trying not to lose customer trust, and also not lose executive and board support, as you're going through weeks of investigation — what's the communication strategy? Because I can see very good reasons why I would remain as unclear and vague and general in everything that I put out, to avoid the embarrassment that might come later from getting it wrong.

So, at the employers that I've been at, my philosophy in particular has always been: try to be as transparent as possible, and don't use weasel words. 'There is no evidence that...' is the one that's always used. Meaningless. No evidence that credit cards were compromised? You didn't have any logs. So it all depends on the exact situation, but if the investigation is ongoing, you should say the investigation is ongoing. If we've taken some actions in the investigation and there are some conclusive results, we should share that — I think that's perfectly reasonable. And especially if there are things that we know customers have to do to mitigate their risk, we should tell them as soon as possible, whether it's privately, through sort of corporate communications, or publicly, whatever.

Yeah, exactly. I mean,
there's a lot of tension there right because you know the the loss of customer trust
the reputational damage like these are real risks to organization and you know
recently there was a um a uh micro blog I guess we're not supposed to use the
brand name for whatever the stupid bird company used to be called um you know a an upand cominging Wan toe usurper who
had an API that was just like horribly constructed and we've deconstructed a little bit on our blog if if anybody
wants to check it out but it had everything down to um password reset
codes returned through an API call that was pretty easily obtained even in an
unauthorized manner like I could obtain your user record including password reset codes which by the way we're not
encrypted only encoded and quite easily decoded with open- Source software so I
could you know kind of figure out the email address Associated to your account
potentially update it assert myself as admin use your reset codes reset your password probably take over your account
assume your identity on that platform and you know go make statements in your name and to your point the
communications around it were very much no evidence of this this is not a real risk there's no cases of this a actually
having happened in the wild but I'm pretty sure for that company it's it's you know got to be close to game over so
Like, how do you think about it when you've had kind of a worst-case scenario? Not using weasel words, not backing out: what's an effective communication strategy in that case? So the way I've done it in the past, probably the best way to put it is, you've got to have principles that you've agreed on up front. So your comms team, your legal team, your technical security team have agreed on principles, right down to 'these are the words we're not going to use,' like 'your security is important to us.' Avoid those kinds of things. And if you have those principles, if you have those approaches laid out, you can just pull them off the shelf when a bad thing happens. If what you do instead is try to craft all of the communication at the time of the incident, when it's completely chaotic, when you've got hundreds of people working on the incident, you're almost guaranteed to do something silly in the moment. But if you've got those guidelines, principles, templates, et cetera, you're more likely to make a good decision. These principles, I'm curious: in your
experience, are these things that you agreed on in advance with the executive suite and the board? Yes and no. It depends on the company and how the company functions. Sometimes it's just within the security and legal teams, sometimes just the security and comms teams. I've actually had it work where my team has had the principles, or the outlines of things, and when something bad has happened and everyone's running around trying to figure out what to say and what to do, we'll provide those things and say, hey, we've already thought about this up front, use this as a starting point. Awesome. Well, Daniel, I know we're
kind of running out of time for today's conversation. I had a list of things that I wanted to ask you here, and I'm trying to pick one that we might have two or three minutes to focus on, and there's one that jumps out to me from this list. This is, I think, a philosophical stance of yours, which is 'build the security you expect.' So when you think about that, in some of your past experiences, what does that mean to you, and what are you trying to say with that? Yeah, so, um, I'm really big on principles, right?
Just writing down all of the instructions for everything, I've found in my experience, doesn't work. Sometimes people follow the instructions, but often they don't. So often what you really want is something really trivial, things that people can understand and use. At Linktree, for example, we had engineering principles, and one of those principles was 'build the security you expect.' The idea is: okay, you're an engineer or a designer or a product manager, whatever, and you're trying to figure out what this thing is going to look like, what's the architecture, what's the design, et cetera. We have a small security team, so you're probably not going to be able to get their expertise, but you want to make good decisions. One of the ways you can make good decisions is to think, okay, if Google made this thing that we're building, how would I expect it to work? Would I expect the data to be encrypted in the database? Would I expect it to require me to have a long password? Would I expect them to have logs if there was an incident? All of those kinds of things. So there's really one trivial thing you need to remember, and the person can hopefully make better decisions with that principle. And so do you find that if you
put these principles out there, and you communicate, educate, and get people to commit to them, overall this should be a self-reinforcing behavior that actually improves the security quality of the organization, both in the products it's building and in the way you're thinking about design and operations and incident response and so on? Yes, and actually what you want is for it to become a communication tool. So often people find it hard to challenge others in a corporate culture, or maybe hard to challenge their manager or a person in authority, right? But if they have a common language that allows them to do that, for example a principle like this, then instead of saying 'we should encrypt passwords' or 'we should encrypt this data because it's a good thing to do,' you can say, 'well, our principle is build the security we expect, and I'd expect these kinds of things to be encrypted if Google built this.' That common language, if you repeat it enough, actually becomes really powerful, because the security person or the security team can step away and it takes on a life of its own. Yeah, makes a ton of sense. And by the way, I'd expect that password to be not only encrypted but salted, with
the encryption there one-way. So, awesome. Well, Daniel Grzelak, thank you so much for taking the time to join us on the Modern Cyber podcast. I've really enjoyed today's conversation. If people want to find out more about you or the work that you're doing, the research, where's the best place? All right, you heard it here first. Thanks again. That's it for today's episode. Talk to you next time.
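On the closing quip about passwords being salted and one-way: that property comes from password hashing rather than encryption. A minimal sketch of salted, one-way password verification, using Python's standard-library PBKDF2 as a stand-in for a dedicated password hash such as bcrypt or Argon2; the function names here are illustrative, not from the episode:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt=None) -> tuple:
    """Derive a salted, one-way digest of the password."""
    salt = salt or os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

The digest cannot be reversed to recover the password; verification only recomputes and compares, which is the one-way property being gestured at, and the per-user salt defeats precomputed lookup tables.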
