Cybersecurity risk assessment should be a hot topic these days. How else can you convince your board and management team that you need to do something to protect against cyber-attacks, and, for once, communicate it in a language they understand?
Cybersecurity risk assessment is used to answer three questions:
- What can go wrong?
- What is the probability?
- How much money is at risk?
There are lots of risk frameworks around that can help answer the first two questions, but there are none that can answer the third.
ISO defines an information security risk assessment (ISRA) as “the overall process of risk identification, risk analysis and risk evaluation”. In general, an ISRA method produces risk estimates, where risk is the product of the probability of an event occurring and its consequences for the given organization.
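That product of probability and consequence can be written as a one-line calculation. The figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
def risk_exposure(annual_probability: float, consequence_usd: float) -> float:
    """Classical risk estimate: probability of occurrence times dollar consequence."""
    return annual_probability * consequence_usd

# Hypothetical: a 12% annual chance of a breach costing $4.5M to resolve
exposure = risk_exposure(0.12, 4_500_000)
print(f"${exposure:,.0f}")  # $540,000 of annualized exposure
```

Note that the formula only answers the third question ("how much money is at risk?") if both inputs are trustworthy, which is precisely where the traditional methods below fall short.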
ISRA practices vary among industries and disciplines, resulting in various approaches and methods for risk assessments.
Yet ISRA provides a complete and standardized framework for assessing the risk levels of information security assets, and it is widely used by risk advisors to implement security controls in line with information security standards and regulations.
The ISRA risk analysis component is divided into three categories: quantitative, qualitative and synthetic.
The quantitative approach constructs complicated mathematical models to try to produce metered results, but it depends on difficult-to-collect historical data, and since the risk landscape now changes daily, historical data is of limited use in determining risk. It has no way to reflect the actual threat activity operating in your environment five minutes ago.
A view that might have been useful to, say, Equifax.
The qualitative method collects data from experts’ opinions or questionnaires, which is easy to gather but entirely subjective. Measuring the Equifax risk in this manner might not even have produced a “high,” let alone “critical,” degree of risk. That may in fact be exactly what happened there.
Synthetic risk analysis methods can arguably overcome some of the limitations of the traditional quantitative and qualitative approaches by applying fuzzy-set theory and the Analytic Hierarchy Process (AHP), which at least provide a decision-making model. Unfortunately, synthetic risk models are designed around attributes of general information security risk and cannot process specific threats such as cyber-attacks. Moreover, the risk scores the model produces have no association with a dollar value; they are usually presented as an asset risk level of 1 to 5, rolled up into an overall aggregated risk score of 1 to 100.
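A minimal sketch of how such a synthetic roll-up typically works. The asset list, 1-5 levels, and AHP-style weights below are entirely hypothetical:

```python
def aggregate_risk_score(asset_levels: list[int], weights: list[float]) -> float:
    """Combine per-asset risk levels (1-5) into an overall 1-100 score
    using AHP-style weights that sum to 1 (a common synthetic-method pattern)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    weighted_avg = sum(level * w for level, w in zip(asset_levels, weights))
    return weighted_avg / 5 * 100  # rescale the 1-5 weighted average to 1-100

# Hypothetical assets: customer database (5), web server (4), laptops (2)
score = aggregate_risk_score([5, 4, 2], [0.5, 0.3, 0.2])
print(score)  # 82.0
```

Notice what the output is: a unitless 82 out of 100. Nothing in the calculation ties it to a dollar of loss, which is exactly the limitation described above.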
This method might have been useful if Equifax had been operating in a 65 mph speed zone, but running through it at 90 did not result in a speeding ticket; it resulted in an $800 million (and counting) breach instead.
Additionally, these subjective synthetic scores are useless for cross-company or cross-industry comparisons.
A much better approach is to use value-at-risk (VaR) as a foundation. Classical financial risk models like VaR estimate a worst-case loss over a specific time horizon. VaR considers the actual dollar values of the assets at risk and, when factored by active threats, can present a measurable dollar impact of cybersecurity risk at the very moment of calculation.
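To make the idea concrete, here is a toy Monte Carlo VaR sketch. The asset value, the two "active threat" scenarios, and their probabilities and loss fractions are all invented for illustration; a real model would feed in live threat telemetry:

```python
import random

def one_day_var(asset_value_usd: float,
                loss_scenarios: list[tuple[float, float]],
                confidence: float = 0.95,
                trials: int = 10_000) -> float:
    """Monte Carlo value-at-risk: the loss not exceeded at the given
    confidence level. loss_scenarios is a list of (probability, loss_fraction)
    pairs representing currently active threats."""
    random.seed(42)  # fixed seed so the sketch is reproducible
    losses = []
    for _ in range(trials):
        loss = 0.0
        for prob, fraction in loss_scenarios:
            if random.random() < prob:  # did this threat fire in this trial?
                loss += fraction * asset_value_usd
        losses.append(loss)
    losses.sort()
    return losses[int(confidence * trials)]  # the 95th-percentile loss

# Hypothetical: $10M of PII at risk, two threats active in the environment:
#   a 2% chance of losing 40% of the value, a 5% chance of losing 10%
var = one_day_var(10_000_000, [(0.02, 0.40), (0.05, 0.10)])
print(f"95% one-day VaR: ${var:,.0f}")
```

Unlike the 1-100 synthetic score, the answer comes back in dollars, and rerunning it with threat probabilities updated "5 minutes ago" updates the exposure accordingly.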
The actual dollar value of an information asset is easily determined, though it will in part be derived through subjective analysis. For example, the customer PII held by Equifax has a dollar value determined by the cost of replacing the lost data plus churn, the number of customers lost because of the breach. Ponemon (love them or hate them) provides studies showing that companies with data breaches involving fewer than 10,000 records spent an average of $4.5 million to resolve the breach, while companies with a loss or theft of more than 50,000 records spent $10.3 million, and so on.
These values can be usefully applied.
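One simple way to apply them is as a cost lookup keyed on records exposed. The two anchor figures come from the study averages cited above; the linear interpolation between them is my own assumption, not part of the study:

```python
def breach_cost_estimate(records_exposed: int) -> float:
    """Rough breach-resolution cost from the cited study averages.
    Behavior between 10,000 and 50,000 records is an assumed interpolation."""
    if records_exposed < 10_000:
        return 4_500_000.0           # cited average for smaller breaches
    if records_exposed > 50_000:
        return 10_300_000.0          # cited average for larger breaches
    # Assumption: interpolate linearly between the two published averages
    span = (records_exposed - 10_000) / 40_000
    return 4_500_000.0 + span * (10_300_000.0 - 4_500_000.0)

print(f"${breach_cost_estimate(30_000):,.0f}")  # $7,400,000
```

An estimate like this becomes the consequence input to the VaR calculation, giving the board a loss figure in a currency they already understand.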
They also have…