Omer Singer’s Post

Machine learning enabled CrowdStrike to disrupt legacy antivirus vendors, so why didn't the same approach work for Lacework and its Polygraph technology in the cloud? And what is the $8 billion lesson on AI for cybersecurity? As security operations increasingly rely on 🤖, creating an informed framework for evaluating AI-based solutions will be important. In this post, I analyze the Lacework crash and identify three lessons learned:
1. Training data matters (unusual != malicious)
2. The black-box/flexibility tradeoff
3. Independent validation
A detailed analysis you won't find elsewhere in the link below 👇 (a quick illustration of lesson 1 follows the link)

Lacework’s AI Didn’t Work
omeronsecurity.com
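
To make lesson 1 concrete, here is a minimal, hypothetical sketch (not taken from the article; all event names and feature values are invented). An unsupervised anomaly detector trained only on "normal" activity flags a rare but legitimate backup job as highly anomalous, while scoring a low-and-slow exfiltration that mimics normal traffic as unremarkable: unusual != malicious.

```python
# Minimal sketch: "unusual" is not the same as "malicious".
# An unsupervised detector only ranks rarity; without representative,
# labeled training data it cannot tell a rare-but-benign event from a
# common-looking attack. All events and values below are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy features per event: [bytes_out_gb, distinct_hosts_contacted, hour_of_day]
normal_activity = np.array([[0.20, 3, 10], [0.30, 4, 11], [0.25, 3, 14],
                            [0.20, 5,  9], [0.30, 4, 15], [0.28, 3, 13]])

# Rare but benign: quarterly backup job moving lots of data at 2am.
benign_backup = np.array([[50.0, 1, 2]])

# Malicious but low-and-slow: exfiltration shaped to look like normal traffic.
stealthy_exfil = np.array([[0.30, 4, 11]])

model = IsolationForest(contamination=0.1, random_state=0).fit(normal_activity)

# Lower score_samples() == more anomalous.
print("backup score:", model.score_samples(benign_backup)[0])   # very anomalous -> alert
print("exfil score: ", model.score_samples(stealthy_exfil)[0])  # looks normal -> missed
```

The toy detector has never seen an example of malicious behavior, so rarity is the only signal it has, which is exactly the training-data problem the post calls out.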

Microsoft's AI is about as effective. Auto-remediating risky login alerts simply because a new MFA device is set up... what could go wrong? Security people must be more involved in the modeling and validation if AI is to be a viable baked-in solution for any security technology moving forward. Additionally, when AI is being marketed as a capability in these solutions, as you pointed out in your blog, security leadership needs to be asking important questions about the training data and whether or not they can validate it. That means security analysts, engineers, and leadership all need to start cross-training into data science so that they can see through any AI snake oilery, diagnose and correct built-in AI automations, and assist with building out future models used in security operations. Great article, Omer.

Chris Tillett

Product Management/Research and Development

3mo

Are you quite sure that ML was the failure here? Or was it more GTM, or an egomaniac at the wheel? I've seen both destroy otherwise good companies built by well-meaning and hard-working people. ML for TDIR is pretty simple as long as you scale it right. I don't know their circumstances. I get it that you want to push correlation rules (excuse me, "detection engineering"), but I've seen plenty of Python and SQL fail customers too. Plenty. Happy to have a discussion around this, as there is just way too much FUD, AI hype, and nonsense confusing everyone right now. Let's cut through the crap together for the betterment of defenders.
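
For readers unfamiliar with the jargon in this thread, the "correlation rule" style of detection being contrasted with ML here can be as simple as the following hypothetical Python sketch (field names, thresholds, and events are invented): flag a burst of failed logins followed by a success from the same source IP, a classic brute-force pattern.

```python
# Hypothetical correlation rule: FAILURE_THRESHOLD+ failed logins followed
# by a success from the same source IP within a short window.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
FAILURE_THRESHOLD = 5

def brute_force_hits(events):
    """events: iterable of dicts with 'ts' (datetime), 'src_ip', and 'outcome'."""
    failures = defaultdict(list)  # src_ip -> timestamps of recent failures
    hits = []
    for e in sorted(events, key=lambda ev: ev["ts"]):
        # Keep only failures still inside the sliding window.
        recent = [t for t in failures[e["src_ip"]] if e["ts"] - t <= WINDOW]
        failures[e["src_ip"]] = recent
        if e["outcome"] == "failure":
            recent.append(e["ts"])
        elif e["outcome"] == "success" and len(recent) >= FAILURE_THRESHOLD:
            hits.append((e["src_ip"], e["ts"]))
    return hits

# Five failures then a success from the same (made-up) IP trips the rule.
base = datetime(2024, 6, 1, 3, 0)
events = [{"ts": base + timedelta(minutes=i), "src_ip": "203.0.113.7", "outcome": "failure"}
          for i in range(5)]
events.append({"ts": base + timedelta(minutes=6), "src_ip": "203.0.113.7", "outcome": "success"})
print(brute_force_hits(events))
```

Rules like this are transparent and easy to tune, which is the flexibility side of the black-box tradeoff the post describes; they also miss anything nobody thought to write a rule for, which is the gap ML is supposed to fill.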

Dan Hubbard

HELPING BUILDERS BUILD

3mo

Big headline, little analysis

When marketing outpaces capability and actual value, the house of cards can come crashing down quickly. It's insanely difficult to build models to detect evil when the teams don't understand what it looks like. Lacework was a posture tool that tried to pivot into threat detection. It won't be the first or the last to struggle with this issue when trying to bolt on a capability that is a mismatch with its core offering. This was a team and technology mismatch as soon as they tried to move into threat detection. Great article Omer Singer

Let's not forget about Cylance Inc. They were on the CUSP of disrupting legacy AV vendors, more so than CrowdStrike.

Yogesh Badwe

Chief Security Officer at Druva

3mo

Bang on target Omer Singer! As a buyer, these were the exact three key aspects that made a difference to me a few years back: black-box "unusual" detection, inability to customize for detection engineering approaches, and no independent validation.

Story Tweedie-Yates

VP Product & Marketing IT Security

3mo

Really great article/research/content!

Comparing Lacework with CRWD is a bit of an apples-to-oranges comparison. Of course training data matters, but CRWD's/Cylance's models use static data extracted from the PE, and the model is static/non-recurrent. Lacework claims behavioral detection, which is quite a bit harder and requires more elaborate models. They were also making the "no rules" claim; the GIGO principle applies: you can't just throw data at the model and hope that it will converge and auto-magically start detecting bad from good behavior.
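
To make the apples-to-oranges point concrete, here is a minimal, hypothetical sketch of the static side (feature names and values are invented; this is not either vendor's actual pipeline): a fixed-length feature vector per PE file, scored by a non-recurrent classifier. The closing comments note why the behavioral detection Lacework claimed does not reduce to the same shape.

```python
# Hypothetical static malware classification: one fixed-length feature
# vector per PE file, scored by a non-recurrent model. All values invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Pre-extracted PE features: [num_sections, avg_section_entropy, num_imports, signed]
X = np.array([
    [4, 6.1, 120, 1],   # benign
    [5, 5.8, 300, 1],   # benign
    [9, 7.8,  12, 0],   # packed malware
    [8, 7.5,   9, 0],   # packed malware
])
y = np.array([0, 0, 1, 1])

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[7, 7.6, 10, 0]]))  # fixed vector in, verdict out

# Behavioral detection is a different problem: the raw unit is a
# variable-length, ordered sequence of runtime events, where ordering and
# context carry the signal, e.g.
#   ["login", "spawn_shell", "curl_external", "chmod_binary", "exec_binary"]
# Collapsing that into one fixed vector discards exactly the information a
# "no rules" behavioral model is supposed to learn, which is why the GIGO
# point above bites harder there.
```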

John Chirhart

Chief Executive Officer, GTG.Online

3mo

I feel the future lies in using Quantum Superposition and Generative AI to enhance data analysis, simulating both compromised and secure states to detect anomalies, marking a significant shift towards supercomputing in cybersecurity. Quantum Superposition allows us to assume “Hacked” and “UnHacked” States. Generative AI allows us to fill in a lot of blanks. Can we use Generative AI to create “synthetic logs” to help find IoCs and make large Enterprise Security decisions in near real time? I think so. 🤔
