Here is an interesting dilemma on cyber risk quantification. You need both internal and external data to get to a high-fidelity model.
- Internal data to understand assets at risk and get visibility into controls in place.
- External data to inform the model with attacks and incidents (frequency, severities, TTPs, emerging techniques).
The information required is all there, but it sits in silos. And here is the challenge: it is a business problem, not a technical one and not a modeling one:
- Businesses don't want to share their internal cybersecurity insights with the outside parties that need them to inform their models, such as insurers, cyber insurtechs, and risk quantification companies.
- Businesses don't have access to the incident and claims databases that insurers own. A few outside and independent entities have good information, but it is not free.
Thanks for reading. Totally agree. Perfect data would be nice, but we can still build high-value models with what’s available now. The key is to combine internal and external sources, fill gaps with well-vetted assumptions, and use ranges to reflect uncertainty. That’s enough to drive better decisions today while we keep pushing for broader data sharing.
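As a rough illustration of what "combine sources, fill gaps with assumptions, and use ranges" can look like in practice, here is a minimal Monte Carlo sketch in Python. It assumes a simple FAIR-style setup: annual event frequency bounded by an organization's internal incident history, and per-event loss severity informed by an external industry report, with every input expressed as a range rather than a point value. All parameter values and distribution choices below are illustrative assumptions, not figures from this newsletter.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # simulated years

# Internal data (assumed): annual event frequency somewhere between 0.2 and 1.5,
# based on the organization's own incident history and control visibility.
freq = rng.uniform(0.2, 1.5, N)

# External data (assumed): per-event loss severity from an industry breach-cost
# report, modeled as lognormal with roughly 90% of losses between $50k and $2M.
low, high = 50_000, 2_000_000
mu = (np.log(low) + np.log(high)) / 2
sigma = (np.log(high) - np.log(low)) / (2 * 1.645)  # 90% interval -> z ~= 1.645

events = rng.poisson(freq)          # number of loss events in each simulated year
annual_loss = np.zeros(N)
for i in range(N):
    if events[i] > 0:
        annual_loss[i] = rng.lognormal(mu, sigma, events[i]).sum()

print(f"Mean annual loss : ${annual_loss.mean():,.0f}")
print(f"95th percentile  : ${np.percentile(annual_loss, 95):,.0f}")
print(f"P(loss > $1M)    : {(annual_loss > 1_000_000).mean():.1%}")
```

The output is a loss distribution rather than a single score, so decisions can be framed around means, tail percentiles, and exceedance probabilities, and stakeholders can challenge the input ranges directly.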
Thanks for the feedback.
Regarding the assessment of InfoSec/IT security/cyber risks, do you think the scalability issue shouldn't actually exist? I mean, IT security departments claim they must assess countless risks (since attacks can occur through multiple vectors), but based on what you mentioned in your newsletters, this might be because they don't tie risks to specific decisions. This lack of focus leads them to fall into scope creep. Is this correct?
Thanks
Hello Tony, first of all, thank you so much for this awesome newsletter! Thank you also for the book you're writing—I am really looking forward to reading it. I'll also be joining SIRACON25 (virtually), so I am looking forward to watching your session on AI.
I work at a GRC software vendor, and we have built a tool for risk quantification addressing ERM and ORM types of risk. I am observing that many clients and prospects are very excited about risk quantification, yet they continue using the qualitative approach even though they are perfectly aware of its limitations in supporting decision-making, and I am wondering why that is.
So, referring to the initial part of this newsletter where you provide some reasons why risk quantification programs fail, I have thought of two more, and I would love to know your opinion.
The first is not CRQ-specific and concerns the adoption of risk quantification by clients. The majority of organizations still use the qualitative approach, which is highly subjective. As a result, when a risk assessor selects a specific level, it is nearly impossible for anyone to challenge that choice, precisely because it is a matter of opinion.
Conversely, if the assessor uses a quantitative approach, which is all about money (and everyone understands money!), then as soon as a specific risk exposure value is put forward, most stakeholders will surely ask the assessor, "Where the hell does that number come from? Are you sure? What's the rationale behind it?"
So, I would argue this is mainly a psychological issue related to liability—that is, if the assessor adopts a quantitative approach to evaluate risk exposure, they will eventually experience significant pressure regarding the assessment decision, and over the long term, this pressure may cause the assessor to refrain from continuing to use that approach.
What do you think about this potential issue?
The second reason is CRQ-specific and, in short, concerns the scalability of the assessment process. Over the past few years, I have spoken with multiple companies (mainly multinationals), and all of them claimed they need to assess hundreds, if not thousands, of ICT risk scenarios. If they had to run quantitative assessments on all of them, it would take a great deal of time; in other words, the process does not scale. A claim like this seems to relate more to the scope creep issue you described in the newsletter, but I'd like to know your opinion on the scalability issue as it relates to ICT risks.
Thanks
Luciano