Handbook of Information Security Management: Risk Management and Business Continuity Planning



Qualitative Elements

Since qualitative metrics are subjective in nature, virtually every risk element can be characterized by the first two metrics, “Low, Medium, and High” or “Ordinal Ranking.” “Vital, Critical, and Important,” however, describes only an asset’s value to an organization.

The Baseline approach makes no effort to scale risk or to value information assets. Rather, it seeks to identify in-place safeguards, compare them with what industry peers are doing to secure their information, and then enhance security wherever it falls short of industry peer security. A further word of caution is appropriate here. The Baseline approach is founded on an interpretation of “due care” that is at odds with the well-established legal definition of due care. Organizations relying solely on the Baseline approach could find themselves exposed to liability, with an inadequate legal defense, should a threat event cause a loss that could have been prevented by available technology or practice that was not implemented because the Baseline approach was used.

The classic quantitative algorithm that laid the foundation for information security risk assessment, as presented in FIPSPUB-65, is simple:

      (Asset Value x Exposure Factor = Single Loss Expectancy) x
      Annualized Rate of Occurrence = Annualized Loss Expectancy

For example, let’s look at the risk of fire. Assume the Asset Value is $1M, the exposure factor is 50%, and the Annualized Rate of Occurrence is 1/10 (once in ten years). Plugging these values into the algorithm yields the following:

             ($1M x 50% = $500K) x 1/10 = $50K

Using conventional cost/benefit assessment, the $50K ALE represents the cost/benefit break-even point for risk mitigation measures. In other words, the organization could justify spending up to $50K per year to prevent the occurrence or reduce the impact of a fire.
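
A minimal sketch of this calculation in Python follows, using the figures from the fire example above. The function names and structure are illustrative only and are not part of FIPSPUB-65:

   # Classic SLE/ALE calculation; function names are illustrative only.
   def single_loss_expectancy(asset_value, exposure_factor):
       # SLE = Asset Value x Exposure Factor
       return asset_value * exposure_factor

   def annualized_loss_expectancy(asset_value, exposure_factor, rate_of_occurrence):
       # ALE = SLE x Annualized Rate of Occurrence
       return single_loss_expectancy(asset_value, exposure_factor) * rate_of_occurrence

   # Fire example: $1M asset value, 50% exposure factor, once in ten years
   ale = annualized_loss_expectancy(1_000_000, 0.50, 1 / 10)
   print(f"ALE: ${ale:,.0f}")   # ALE: $50,000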

It is true that the classic FIPSPUB-65 quantitative risk assessment took the first steps toward establishing a quantitative approach. However, in the effort to simplify the fundamental statistical analysis so that everyone could readily understand it, the algorithms went too far. The consequence was results that had little credibility, for several reasons, three of which follow:

  The classic algorithm fails to address two of the risk elements: recommended safeguard effectiveness and uncertainty. Both must be addressed in some way, and uncertainty, the key risk factor, must be addressed explicitly.
  The algorithm cannot distinguish effectively between low frequency/high impact threats and high frequency/low impact threats. Therefore, associated risks can be significantly misrepresented, as the sketch following this list illustrates.
  Each element is addressed as a discrete value, which, when considered with the failure to address uncertainty explicitly, makes it difficult to actually model risk and illustrate probabilistically the range of potential undesirable outcomes.
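
To illustrate the second point, here is a brief sketch in Python; the threat figures are hypothetical. A low frequency/high impact threat and a high frequency/low impact threat can produce identical ALEs even though the risks they represent are very different:

   # Hypothetical threats with identical ALEs but very different risk profiles
   threats = {
       "Major fire (low frequency / high impact)":   {"sle": 500_000, "aro": 0.1},
       "Laptop theft (high frequency / low impact)": {"sle": 5_000,   "aro": 10},
   }

   for name, threat in threats.items():
       ale = threat["sle"] * threat["aro"]
       print(f"{name}: ALE = ${ale:,.0f}")

   # Both threats yield ALE = $50,000, yet their potential single-event
   # losses differ by a factor of 100.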

Yes, this primitive algorithm did have shortcomings, but advances in quantitative risk assessment technology and methodology, which explicitly address uncertainty and support technically correct risk modeling, have largely done away with those problems.
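
As a rough illustration of how uncertainty can be modeled explicitly, the following Monte Carlo sketch in Python treats each risk element as a distribution rather than a discrete value. The distributions and parameters are assumptions chosen for illustration, not figures from any particular tool or methodology:

   import random

   # Monte Carlo sketch: model annual loss as a distribution rather than a
   # single discrete value. The triangular ranges are illustrative only.
   def simulate_annual_losses(trials=100_000):
       losses = []
       for _ in range(trials):
           asset_value     = random.triangular(800_000, 1_200_000, 1_000_000)
           exposure_factor = random.triangular(0.30, 0.70, 0.50)
           occurred        = random.random() < 0.10   # roughly once in ten years
           losses.append(asset_value * exposure_factor if occurred else 0.0)
       return sorted(losses)

   losses = simulate_annual_losses()
   mean_loss = sum(losses) / len(losses)
   p95_loss  = losses[int(0.95 * len(losses))]
   print(f"Expected annual loss:        ${mean_loss:,.0f}")
   print(f"95th-percentile annual loss: ${p95_loss:,.0f}")

The result is a range of potential annual losses that can be reported probabilistically to management, rather than a single break-even figure.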

Pros and Cons of Qualitative and Quantitative Approaches

In this brief analysis, the features of specific tools and approaches will not be discussed. Rather, the pros and cons associated in general with qualitative and quantitative methodologies will be addressed.

Qualitative — Pros

  Calculations, if any, are simple and readily understood and executed.
  It is usually not necessary to determine the monetary value of information (its availability, confidentiality, and integrity).
  It is not necessary to determine quantitative threat frequency and impact data.
  It is not necessary to estimate the cost of recommended risk mitigation measures and calculate cost/benefit.
  A general indication of significant areas of risk that should be addressed is provided.

Qualitative — Cons

  The risk assessment and results are essentially subjective in both process and metrics. The use of independently objective metrics is eschewed.
  No effort is made to develop an objective monetary basis for the value of targeted information assets. Hence, the perception of value may not realistically reflect actual value at risk.
  No basis is provided for cost/benefit analysis of risk mitigation measures, only a subjective indication of a problem.
  It is not possible to track risk management performance objectively when all measures are subjective.

Quantitative — Pros

  The assessment and results are based substantially on independently objective processes and metrics. Thus meaningful statistical analysis is supported.
  The value of information (availability, confidentiality, and integrity), as expressed in monetary terms with supporting rationale, is better understood. Thus, the basis for expected loss is better understood.
  A credible basis for cost/benefit assessment of risk mitigation measures is provided. Thus, information security budget decision-making is supported.
  Risk management performance can be tracked and evaluated.
  Risk assessment results are derived and expressed in management’s language: monetary value, percentages, and annualized probability. Thus, risk is better understood.

Quantitative — Cons

  Calculations are complex. If they are not understood or effectively explained, management may mistrust the results of “black box” calculations.
  It is not practical to attempt to execute a quantitative risk assessment without using a recognized automated tool and associated knowledge bases. A manual effort — even with the support of a spreadsheet and generic statistical software — can easily take 10 to 20 times the work effort required with the support of a good automated risk assessment tool.
  A substantial amount of information about the target information and its IT environment must be gathered.
  As of this writing, there is not yet a standard, independently developed and maintained threat population and threat frequency knowledge base. Thus, users must rely on the credibility of the vendors who develop and support extant automated tools, or do threat research on their own.

