Insurance Risk Control


Our Models

Find out why our patented methods of cyber threat valuation work where conventional approaches fall short

Why Our Cyber Threat Valuation Models Work

Unlike most risk modelling practices, modelling cyber attacks and network threats faces problems created by the very nature of how the internet operates and by the interconnectivity of machines inside and outside every organization.

Quantar’s cyber threat valuation products draw upon more than fifteen years of product development, utilizing expertise from universities, military risk modelling specialists and risk capital modelling experts. Our models have been externally reviewed and enhanced over a multi-year engagement with an award-winning actuarial consultancy.

Combining knowledge, systems and methodologies, we deliver the information and services needed to manage your organization’s risks arising from the interdependencies and connectivity between your business processes, systems and locations.

Why do our models and methods fit better than others used by alternative vendors?

The weaknesses of many current risk modelling methods integrated into other suppliers’ solutions stem from the following:

The list of events:

Events that can cause a loss on a single system cannot be listed exhaustively, because the full set of possible attacks cannot be identified in advance. The variety of potential attacks is effectively unbounded, so a definitive list is impossible. Companies usually fall back on listing known attack types and ignoring the remainder, so there is a high probability that attackers will use unlisted attack types.

The probability of events:

Many naturally occurring events follow distributions that can be modelled to a reasonable degree of accuracy using standard statistical methods, such as extreme value theory (EVT) in engineering projects, including dam design. The same cannot be said for man-made phenomena, and in particular for cyber attacks via the internet. Existing risk models are largely inappropriate for analyzing malicious attacks on computer networks.
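
To illustrate the contrast, the sketch below shows the kind of EVT fit that works for natural hazards. It is a minimal, hypothetical example with invented data, not one of Quantar’s models: extremes of phenomena such as river levels settle into a stable distribution that can be fitted and extrapolated.

```python
# Minimal EVT sketch for a natural hazard (illustrative, invented data).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Pretend these are 50 years of annual maximum river levels, in metres.
annual_maxima = rng.gumbel(loc=4.0, scale=0.6, size=50)

# Fit a generalized extreme value (GEV) distribution to the maxima.
shape, loc, scale = genextreme.fit(annual_maxima)

# 100-year return level: the level exceeded with probability 1/100 in any year.
level_100yr = genextreme.isf(1.0 / 100.0, shape, loc, scale)
print(f"Estimated 100-year level: {level_100yr:.2f} m")
```

No comparably stable distribution exists for the frequency or severity of internet attacks, which is why the same style of fit cannot be trusted for cyber events.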

Event Independence:

One of the basic tenets of statistical modelling is the independence of events. In risk assessment for an earthquake, for example, it is normally assumed to be independent of events such as flooding, and the likelihood of both occurring concurrently is computed by multiplying their probabilities together. IT networks, however, frequently involve multiple simultaneous events. It is well known within the computer security sector that an attack is far more likely to combine multiple techniques than to use a single one. Accurate assessment of the joint probabilities of multiple attacks on multiple systems related in an unknown manner is essentially impossible.
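
A worked example makes the gap concrete. This is a hypothetical sketch with invented probabilities, not figures from our models:

```python
# Independent natural hazards: the joint probability is the simple product.
p_flood = 0.01        # assumed annual probability of a flood
p_earthquake = 0.002  # assumed annual probability of an earthquake
p_both_natural = p_flood * p_earthquake              # 0.00002

# Correlated cyber events: a successful phishing attack raises the chance
# of a follow-on malware deployment, so the product rule understates risk.
p_phishing = 0.05
p_malware_given_phishing = 0.60                      # assumed conditional probability
p_both_cyber = p_phishing * p_malware_given_phishing # 0.03

# Naively multiplying the marginals would give a far smaller figure.
p_malware = 0.04                                     # assumed marginal probability
p_naive = p_phishing * p_malware                     # 0.002, 15x too low here
print(p_both_natural, p_both_cyber, p_naive)
```

With the conditional structure unknown, as it is on a real network, there is no reliable way to correct the naive product.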

Expected loss of events:

Creating an accepted valuation of the loss arising from an event, even after the fact, is often difficult and sometimes unobtainable. Losses due to the Love Letter virus in 2000 were estimated at between $1 billion and $7.7 billion. With a range of that magnitude even in hindsight, the accuracy of valuations made in advance of such events is highly questionable.
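
Propagating that range forward shows how fragile ex-ante valuation becomes. This hypothetical calculation uses the reported Love Letter range and an invented event probability:

```python
# Expected annual loss under each end of the reported loss range.
p_event = 0.10                      # assumed annual probability of a comparable event
loss_low, loss_high = 1.0e9, 7.7e9  # reported Love Letter loss estimates (USD)

eal_low = p_event * loss_low        # $100 million
eal_high = p_event * loss_high      # $770 million
print(f"Expected annual loss: ${eal_low:,.0f} to ${eal_high:,.0f}")
```

Even with the event probability held fixed, the expected loss spans nearly an order of magnitude.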

Mitigation techniques:

In the same way that attacks cannot be exhaustively and comprehensively listed before they occur, neither can mitigation techniques, because of the number of variables and options in every scenario that may arise. Quantifying, and then valuing, each one is not possible.

Reduction in expected loss:

Given the above, even if a less granular approach to listing mitigation actions is taken, the expected reduction in expected loss still cannot be valued accurately.

The reason is that the cost and saving of each action cannot be estimated with any degree of accuracy. An example is where increasing the required number of password characters is seen as a potential mitigant. The cost of doing so may seem low, with a high risk-reduction value.

However, increasing password length may result in more people writing down their passwords, thereby increasing risk. By how much cannot be computed, and is the net effect a cost or a saving? Increasingly, encryption algorithms are used to store data and, combined with passwords, create a further layer of security. Seemingly simple risk-value reductions are therefore extremely difficult to calculate with any degree of accuracy.
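
The ambiguity can be made explicit with interval arithmetic. This is a hypothetical sketch; all figures are invented:

```python
# Net effect of a longer-password policy on annual breach probability.
reduction = (0.02, 0.10)  # assumed drop from stronger passwords (best/worst case)
increase = (0.00, 0.15)   # assumed rise from passwords being written down

net_low = reduction[0] - increase[1]   # -0.13: the control adds risk
net_high = reduction[1] - increase[0]  # +0.10: the control removes risk
print(f"Net change in breach probability: {net_low:+.2f} to {net_high:+.2f}")
# The interval straddles zero: even the sign of the benefit is unknown.
```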

Sensitivity:

Risk analysis is highly sensitive to its inputs: small changes in probability or expected-loss values can change which risk management action is selected. Selecting one technique over another may also have a cascading multiplier effect on subsequent decisions, because it changes the mitigating effects of other techniques. Analysis techniques intended to overcome this effect, such as using intervals rather than fixed values, have been found to be inaccurate due to the large increase in complexity they introduce.
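
The effect is easy to demonstrate. In this hypothetical sketch, with invented probabilities, costs and reduction factors, a 0.2 percentage-point shift in attack probability flips which mitigation looks best:

```python
# Expected net benefit of a mitigation: avoided loss minus its cost.
def expected_benefit(p_attack, loss, reduction, cost):
    return p_attack * loss * reduction - cost

loss = 1_000_000  # assumed loss if the attack succeeds (USD)
for p_attack in (0.010, 0.012):
    a = expected_benefit(p_attack, loss, reduction=0.50, cost=4_800)
    b = expected_benefit(p_attack, loss, reduction=0.30, cost=2_600)
    print(f"p={p_attack}: A={a:,.0f}, B={b:,.0f} -> choose {'A' if a > b else 'B'}")
# p=0.010 selects mitigation B; p=0.012 selects mitigation A.
```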

Exponential complexity for networks:

Whilst all the above issues face risk managers contemplating individual entities, whether at an organizational, process or systems level, the complexity grows exponentially when addressing IT networks.

The very design of networks, with multiple machines in multiple locations, each carrying different systems and data and each potentially having its own security definition, creates a scenario where each machine may face different types of attack at different times.

Attempting to model without assuming interdependency would be to ignore the overall impact on the network, and hence on business continuity. Conversely, attempting to model all possible events, sequences and combinations of machines and processes on a network creates a high degree of combinatorial complexity, leading to questionable accuracy.
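
The scale of that blow-up is easy to quantify. This hypothetical sketch uses invented network parameters:

```python
# Joint states to consider when every machine's state matters.
n_machines = 50  # assumed machines on the network
n_states = 4     # assumed states per machine: secure, probed, compromised, down

joint_states = n_states ** n_machines
print(f"{joint_states:.3e} joint states")  # about 1.27e+30
```

Enumerating, let alone assigning probabilities to, a state space of that size is infeasible, which forces approximation and reopens the accuracy question.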

Additionally, modelling at a single point in time ignores the possibility that one class of event creates a new overall risk scenario, one that the model originally used for assessment and valuation may then fit poorly.