You may know the risks and impacts, but are you prepared?

My career transition from a revenue assurance function (focused on telecoms) to enterprise risk management and eGRC software has opened up a whole new world of learning and things to think about. The basics of risk management and assessment involve determining the likelihood and impact of risks and the effectiveness of the controls; but then the following question started pestering me, so this short post is my way of asking for your opinions and thoughts.

You have your risk registers; you evaluate the risks; you add more risks and associated controls; you assess the IMPACTS and LIKELIHOODS of these risks; you test the controls for their effectiveness; you report, follow up and reassess; BUT… if disaster strikes, ARE YOU PREPARED? The question is one of preparedness rather than control effectiveness. Essentially, how aware are you of the velocity of the strike, and should a disaster strike, are you ready to take it head on? This is my first question!

While this was on my mind and I had started researching it, I came across this interesting article here. I found that the article answers the question I had in mind, but preaching and practicing are different things, so the second part of my question is:

Are you really practicing this integrated approach to determine your preparedness?

If yes, how easy and effective have you found it?

If no, what challenges are you facing?

Let me know your thoughts.

 

 

Moinak Banerjee
Moinak works at Protiviti Kuwait as Product Lead for its Risk Technology Services. Over the years, he has worked in product management for several leading vendors of telecom OSS/BSS software.

3 COMMENTS

  1. Hi Moinak,

    When you assess the impact and likelihood of a risk, you must have a standard metric to measure them, usually on a scale of 1 to 5 (a simple scoring sketch follows this comment). Each score should have a clear definition covering business operations, financials, reporting and customer impact. Each risk is also associated with a control effectiveness rating. The reason to have this is to allow management to identify which risks are residual and which are inherent. Of course, residual risks are acceptable, but focusing on inherent risk is challenging as the business changes, new products are introduced, systems are migrated and so on. This is where most management teams talk about GRC, which mainly covers risk management, top-to-bottom monitoring, control activities and communication.

    I’m confused about your mention of preparedness, because control effectiveness describes how prepared the company is to face the risk. What we can look at further is the inherent risk, something that could happen in the future, but most of the time we don’t have enough data to support that.

    I remember when I was in Revenue Assurance long ago (I currently do internal audit), we tried to calculate the recurring revenue leakage that would occur if RA did not detect the issues, what we called the ‘future effect’. For example, if the revenue leakage detected for January is $1mil, then February’s revenue leakage is estimated at roughly $1mil, adjusted by certain factors on a scale of 1 to x. In my opinion, the ‘future effect’ is wrong and I don’t want to repeat that mistake.

    Ahmad Fairuz,
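
    A minimal sketch of the scoring convention Ahmad describes, assuming the common formulation in which an inherent score is likelihood × impact on 1-to-5 scales and a residual score discounts it by control effectiveness; the function names and the 0.6 effectiveness figure are illustrative assumptions, not a standard.

    ```python
    # Illustrative only: 1-5 likelihood x 1-5 impact scoring, with residual risk
    # approximated by discounting the inherent score by control effectiveness.
    # This is one common (and debated) formulation, not a universal standard.

    def inherent_score(likelihood: int, impact: int) -> int:
        """Inherent risk score on a 5x5 matrix (range 1..25)."""
        assert 1 <= likelihood <= 5 and 1 <= impact <= 5
        return likelihood * impact

    def residual_score(likelihood: int, impact: int, control_effectiveness: float) -> float:
        """Residual score; control_effectiveness runs from 0.0 (no control) to 1.0 (fully effective)."""
        return inherent_score(likelihood, impact) * (1.0 - control_effectiveness)

    # A high-impact, low-likelihood "severe crisis" event with a partially effective control:
    print(inherent_score(likelihood=2, impact=5))                              # 10
    print(residual_score(likelihood=2, impact=5, control_effectiveness=0.6))   # 4.0
    ```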

  2. Hi Ahmad,
    Thanks for your comment. As I mentioned, this post was more for my own clarification, so please allow me to explain. When we determine impacts and likelihoods, a number of the high-impact but low-likelihood events point towards “severe crisis” situations. What the typical measurements ideally do is give an overall picture, but they do not show whether the company is ready in terms of execution and action during a severe crisis. So how do you estimate whether the company is fully ready to handle a “crisis”? How do you estimate how resilient the organization would be, should a disaster strike? As an example, we saw a number of companies go bankrupt after the 9/11 terror attacks. My question is, can a lack of ‘preparedness’ to actually handle a crisis be a cause for concern? We could argue that such companies did not have effective risk management, but that is questionable, because the attack was unprecedented. A small airport authority may determine the likelihood and impact of its risks, but how does it convincingly determine that it is prepared should a severe crisis arise, say a plane hijack or a crash? Crisis management teams do exist in some places, but how often are they fully integrated with the risk management and mitigation processes? Crisis management needs people on the ground who can execute under severe pressure, unlike a committee that reviews risks and threats on spreadsheets, presentations or software (no pun intended). The worst scenario is a false sense of security during risk and control assessments for these ‘crisis’ points. How would you determine that this security bias is actually not present?
    You mention the “future effect” in RA calculations. I guess that was more an extrapolation of the available data, used largely to show how valuable the RA activities had been and what possible impacts on revenue may have been averted. Here, though, the idea of preparedness is about disaster management or severe crisis handling, which, if not looked at, gives a false sense of security; it is therefore unlike the ‘projected revenue savings’ concept of RA.

    Was I able to explain my train of thought? I look forward to your feedback and comments.

  3. @ Fairuz,

    You said nothing wrong, but I’d exercise caution when you state: “you must have a standard metric to measure them, usually on a scale 1 to 5”. The kind of scale you refer to has become popular by repetition, but I believe it causes more harm and confusion than good. I see this kind of scale being routinely advocated by poor practitioners who don’t understand risk management, and who certainly shouldn’t be teaching how to do it.

    To begin with, you should always measure a risk, as far as is practical. But if you know how risky the risk is (the probability of it happening multiplied by the magnitude of the impact if it happens), then what does the scale give you in addition? Why do you actually need a scale like this, if you have already measured the risk? In most cases, the level 5 risk is not 5 times as risky as the level 1 risk. The level 4 risk is not twice as risky as the level 2 risk. So what is the meaning of the bands that people create? How does the scale help you prioritize, if you cannot determine whether the combined risk of 27 of your level 3 risks is greater or less than that of a single level 5 risk? How does the scale help you decide whether there is greater cost-benefit in mitigating all 27 of the level 3 risks or the single level 5 risk?

    My point is this: either you have measured the risk or you have not. If you asked me to rank several hundred measured revenue leaks, I wouldn’t start by taking the ones between $1 and $100 and putting them into level 1, the ones between $101 and $1000 and putting them into level 2, and so on… so why do this when reviewing several hundred risks? The truth is that this scale you refer to is often used to do the opposite of measurement: to hide the fact that risks have not been properly measured. Once a risk is properly measured, the scale is just a way of presenting the data, and you should be able to present the data in lots of ways without getting hung up on scales like this. A quick numeric sketch of this argument follows below.
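
    A minimal numeric illustration of the point above, using entirely invented figures (expected loss taken as probability × impact, in dollars); the leak names and amounts are hypothetical.

    ```python
    # Invented figures: expected loss = probability x impact, in dollars.
    level_3_risks = [0.10 * 250_000] * 27   # 27 risks of $25,000 expected loss each
    level_5_risk = 0.02 * 20_000_000        # one risk of $400,000 expected loss

    print(sum(level_3_risks))   # 675000.0 -> together the 27 "level 3" risks
    print(level_5_risk)         # 400000.0    outweigh the single "level 5" risk

    # Ranking measured revenue leaks: sort by measured value directly,
    # rather than first binning them into $1-$100, $101-$1000, ... bands.
    leaks = {"leak_A": 12_500, "leak_B": 430, "leak_C": 1_900_000, "leak_D": 75}
    for name, value in sorted(leaks.items(), key=lambda kv: kv[1], reverse=True):
        print(name, value)
    ```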
