Greg Day
Over the years I have seen hundreds of companies trial and deploy cybersecurity capabilities, and whilst the capabilities evolve, the selection criteria and metrics of success have stayed largely static. Meeting with executives at events such as the World Economic Forum, I am often asked, "What is the one metric I should use to measure success?"
This is not a simple question, and I can guarantee many of you would have differing answers. For me, there is one that is all too often overlooked and ties into another metric that is also too often glossed over: Detection Efficacy.
All too often, I see testing that is capability-based rather than usability-based. I remember, not so many years ago, an organization doing some testing that came back with the statement that more or less every solution found the same threats. I have to say, I was somewhat shocked by that answer. So I started to dig into the statement, and I found that they had a team of ten people testing the capabilities, whereas once deployed, running the solution would be 50% of one person's job.
Furthermore, while the end results were similar, the journey to get there was very different. Some solutions had triggered thousands of alerts, others millions. That means you would need far more human power to process one solution's output than another's. So the time to resolution, captured in the metrics most commonly used by SOC teams, MTTD and MTTR (mean time to detect and mean time to respond), would be very different.
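For readers who want those metrics made concrete, here is a minimal sketch of how MTTD and MTTR are typically computed; the incident records and values below are purely illustrative:

```python
from datetime import datetime

# Purely illustrative incident records: when the attack started, when it
# was detected, and when it was resolved.
incidents = [
    {"start": datetime(2023, 1, 5, 9, 0),
     "detected": datetime(2023, 1, 5, 11, 30),
     "resolved": datetime(2023, 1, 6, 9, 0)},
    {"start": datetime(2023, 2, 1, 14, 0),
     "detected": datetime(2023, 2, 1, 14, 45),
     "resolved": datetime(2023, 2, 1, 18, 0)},
]

def mean_hours(deltas):
    """Average a list of timedeltas, expressed in hours."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 3600

mttd = mean_hours([i["detected"] - i["start"] for i in incidents])
mttr = mean_hours([i["resolved"] - i["detected"] for i in incidents])
print(f"MTTD: {mttd:.1f}h, MTTR: {mttr:.1f}h")
```

The same incident data fed to two solutions with very different alert volumes will produce very different numbers here, which is exactly the usability gap the capability-based test missed.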
I was sat with a group of CISOs not so long ago, and we all agreed that there simply isn't the human capacity to scale to the needs of today's SOC: the volume of threats, security tools and things we need to protect keeps growing, while the time the business allows for MTTD and MTTR gets shorter.
This is the time-paradox problem we face. The only way we solve this challenge is to reduce the number of alerts that require human inspection. We must start to automate some of the simpler levels of detection and response.
If we know we have to make this shift, you can rightfully ask why we are not making better headway. The answer is detection efficacy: talk to most analysts and they will tell you they prefer human inspection and cross-checking to ensure the right determination has been made.
This goes back to simple psychology: we are five times more worried about being seen to have made the wrong decision than we are about getting it right. If you don't believe me, go read The Chimp Paradox by Steve Peters.
Today's threats are complex; there is no denying it. Go back 30 years, and you could find something unique in each threat's code (typically a standalone binary) and be confident in knowing what you had found. Today, threats are made up of many components that occur in different places, both on and off your end device.
So, knowing whether you have a specific threat has become an ever more complex jigsaw puzzle. You need to put enough pieces of the puzzle (different detection alerts) together to be confident that what you are seeing is really there.
I challenge you, the next time you are looking at an alert: could you put a figure on its detection efficacy? How confident are you in what you are seeing? If that figure isn't high enough, you will always need a human to validate it.
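If you want a starting point for putting a figure on it, here is a minimal sketch (my framing, not a standard industry formula): treat an alert type's efficacy as its historical precision, and only consider automating action on alerts above some confidence bar. The numbers and threshold are illustrative.

```python
def detection_efficacy(true_positives: int, false_positives: int) -> float:
    """Precision-style efficacy: of all alerts of this type ever fired,
    what fraction turned out to be real threats."""
    total = true_positives + false_positives
    return true_positives / total if total else 0.0

# Illustrative history for one alert type: 40 confirmed threats, 160 noise.
efficacy = detection_efficacy(40, 160)
print(f"Efficacy: {efficacy:.0%}")  # 20%, too low to act on unvalidated

# A hypothetical policy: only automate response above a confidence bar.
AUTOMATION_THRESHOLD = 0.90
print("Needs human validation:", efficacy < AUTOMATION_THRESHOLD)
```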
This becomes a self-perpetuating problem: the more alerts that require human inspection, the slower you are to validate the problem, and typically the more incremental tools you add to spot it, only to end up duplicating the detections.
Having two solutions each give you an answer that you are 40% confident in does not add up to suddenly being 80% confident, as they may be duplicates, or they could be two disparate things. The only way to confirm a correlation between them is through more human effort.
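To make the arithmetic concrete: even in the best case, where the two detections are fully independent, two 40%-confidence alerts combine to about 64%, not 80%; and if they are duplicates of the same underlying signal, confidence does not rise at all. A minimal sketch:

```python
def combine_independent(confidences):
    """Probability that at least one detection is a true positive,
    assuming the detections are statistically independent, which is
    a big assumption that, today, usually takes a human to verify."""
    p_all_wrong = 1.0
    for c in confidences:
        p_all_wrong *= 1.0 - c
    return 1.0 - p_all_wrong

print(f"{combine_independent([0.4, 0.4]):.0%}")  # 64%, not 80%
# If the two alerts duplicate the same signal, you are still at 40%.
```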
We are not able to scale to future demand, and the cyber time-paradox problem will only ever get worse (sorry, that's just simple math), so we will have to leverage automation to help us scale. If every stage requires human intervention, that can never happen.
As such, we must start to focus on the efficacy of detection. This is partly evidence of how often the determination of a threat is technically right versus wrong, but I would argue it is just as much psychological: if we don't trust the tools, we will always want humans to validate.
This is the core of the detection process, and if we get it right, the outcome will surely improve: fewer false positives and less other noise, so teams can scale and focus on the complex problems. With less noise, MTTD and MTTR inherently improve, and where we have the confidence to truly take the human out of the equation, we can make huge strides forward.
I'm sure some of you will still argue that for CEOs and other executives there is a better metric, something along risk lines. But businesses inherently take risks; that's typically where the greatest margins are made. The key is that they understand those risks. In cyber, if we can't determine whether the problem is real, honestly we can't determine whether the risk is real, effectively neutering our ability to make any decision.
At Cybereason, we have found that rather than looking at retrospective threat artifacts and Indicators of Compromise from known attacks, it is more efficient and effective to detect the whole malicious operation (MalOp), and in doing so we have reduced the number of alerts analysts have to triage and investigate by a factor of ten or more.
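The MalOp itself is Cybereason's technology, but the alert-reduction effect of correlating alerts into operations can be sketched generically; the field names and grouping key below are illustrative, not the platform's actual schema:

```python
from collections import defaultdict

# Hypothetical alert stream; "root_cause" stands in for whatever shared
# artifact (originating process chain, C2 domain, etc.) ties the
# activity together.
alerts = [
    {"id": 1, "host": "ws-01",  "root_cause": "winword.exe>powershell.exe"},
    {"id": 2, "host": "ws-01",  "root_cause": "winword.exe>powershell.exe"},
    {"id": 3, "host": "srv-02", "root_cause": "winword.exe>powershell.exe"},
    {"id": 4, "host": "ws-07",  "root_cause": "usb-autorun"},
]

# Group related alerts into one operation per shared root cause.
operations = defaultdict(list)
for alert in alerts:
    operations[alert["root_cause"]].append(alert)

# Four raw alerts collapse into two operations for an analyst to triage.
print(f"{len(alerts)} alerts -> {len(operations)} operations")
```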
Furthermore, the efficacy of those detections moves from the 20-30% confidence level to the 70-80% level, simply by checking the efficacy at each stage and then building the pieces into the complex puzzle of a MalOp. This is how automation enables detection efficacy and allows a security team to scale.
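The post does not publish the underlying math, but the direction of the claim is plausible under the same independence assumption used in the earlier sketch: several vetted puzzle pieces compound in a way a single alert cannot.

```python
# Four correlated pieces of evidence, each individually only ~30%
# conclusive, compound (if independent) to roughly the claimed band:
p = 1 - (1 - 0.3) ** 4
print(f"{p:.0%}")  # 76%
```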
Cybereason is dedicated to teaming with Defenders to end attacks on the endpoint, across enterprise, to everywhere the battle is taking place. Learn more about the AI-driven Cybereason Defense Platform here or schedule a demo today to learn how your organization can benefit from an operation-centric approach to security.
Greg Day is a Vice President and Global Field CISO for Cybereason in EMEA. Prior to joining Cybereason, Greg held CSO and CTO positions with Palo Alto Networks, FireEye and Symantec. A respected thought leader and long-time advocate for stronger, more proactive cybersecurity, Greg has helped many law enforcement agencies improve detection of cybercriminal behavior. In addition, he previously taught malware forensics to agencies around the world and has worked in advisory capacities for the Council of Europe on cybercrime and the UK National Crime Agency. He currently serves on the Europol cyber security industry advisory board.