Rediscovering Intelligence?

Benjamin Franklin was an intelligent man. He was an author and printer, satirist, political theorist, politician, scientist, inventor, civic activist, statesman, soldier, and diplomat. I am especially fond of what Franklin said about learning from experience:

Experience keeps a dear school, but fools will learn in no other.

Contrast that with the latest draft TM Forum output on ‘RA coverage’, which includes this suggestion about the ‘optimized’ scope of RA controls:

sampling is augmented by “intelligent sampling”, supporting… re-detection of issues detected in the past

Franklin suggested that even fools would at least, and at last, learn from experience. Now it seems that some expect less from RA!

Eric Priezkalns
Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Chief Executive of the Risk & Assurance Group (RAG), a global association of professionals working in risk management and business assurance for communications providers.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press. He is a qualified chartered accountant, with degrees in information systems, and in mathematics and philosophy.

2 Comments on "Rediscovering Intelligence?"

  1. Morisso Taieb | 6 May 2010 at 8:35 am |

    Hi Eric,
    I have to disagree with you on that matter. In big organizations teams change, people change, and the organization "forgets". From time to time we encounter issues again that were solved in the past, or that became irrelevant at some stage in the past, and they come back to life after a change of personnel unaware of the past, or after a program change that didn’t take the past issue into account.
    "Intelligent sampling" (I haven’t read the draft you’re quoting), for me, means that I have within the RA tool an automated routine that prevents us from "forgetting the past".
    By the way, Twitter brought me to this article…cheers on the benefits of twittering.
    Kind Regards,

  2. Hi Morisso,

    You won’t find me arguing with the idea that RA needs to be a ‘memory’ for the business. In fact, I’ve always argued that being the memory is a vital and defining aspect of how to do good RA. However, what I find strange about the idea of ‘intelligent sampling’ is that we’re not talking about memory of an issue, but rediscovery of an issue. To my mind, that implies RA forgets – as if RA can only deal with the business following an inflexible model of do a test, then see the result, then respond. RA can do other things too – like fix the underlying root cause so there is no point in doing the test in future!

    Now, I know that some RA people will say “and what if such-and-such circumstance occurs and then we have another error that the test would have identified – but according to you Eric, we stopped testing!” Not so. The TMF catalyst project is about lots and lots of detailed tests. If we want to be efficient in our performance – and hence attain the optimizing level of maturity – we need to learn to drop low-level tests that are highly unlikely to find things, on the basis that the cost of executing the tests is not justified by the findings. But dropping low-level tests does not mean dropping tests. On the contrary, a comprehensive test strategy can still cover all leakage points whilst dropping excess low-level tests, provided there are high-level tests too.

    The difference between low-level and high-level tests is that low-level tests are very specific, giving them high diagnostic power but low coverage. High-level tests are very general, giving them high coverage but low diagnostic power. So one can substitute efficient high-level tests for inefficient low-level tests, supplemented by low-level tests as and when needed, i.e. where a high-level test indicates an issue of sufficient materiality and you then need to go into diagnosis mode and identify the root cause.

    Now that’s what I’d call intelligent testing, but it’s not intelligent sampling, because it actually covers systems and processes end-to-end, albeit from a helicopter view. If that means storing a routine for the low-level tests so they can be rapidly executed when (and only when) you need them, then that’s fair enough. But nothing in the project talks about this aspect of zooming in and out of the detail for efficiency. On the contrary, the base proposal of the catalyst is that more and more detail gives better and better results, and I can’t agree with that because it runs counter to any kind of lean/efficient/optimizing outlook on how to do RA.
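    The zoom-in/zoom-out strategy described above can be sketched in code. This is only a minimal illustration, not anything drawn from the TMF catalyst: the record fields, the function names, and the 1% materiality threshold are all hypothetical.

    ```python
    # Sketch of the 'zoom in/out' test strategy: run a broad high-level
    # reconciliation first, and only drop down to specific low-level tests
    # when the discrepancy exceeds a materiality threshold. All names and
    # thresholds below are illustrative assumptions.

    MATERIALITY_THRESHOLD = 0.01  # assumed: 1% revenue discrepancy

    def high_level_check(expected_revenue, billed_revenue):
        """End-to-end reconciliation: high coverage, low diagnostic power."""
        return abs(expected_revenue - billed_revenue) / expected_revenue

    def low_level_tests(records):
        """Specific per-record tests: high diagnostic power, low coverage.
        Kept on the shelf and executed only on demand."""
        issues = []
        for r in records:
            if r["rated_amount"] == 0 and r["duration"] > 0:
                issues.append(("unrated_event", r["id"]))
            if r["duration"] < 0:
                issues.append(("negative_duration", r["id"]))
        return issues

    def assure(expected_revenue, billed_revenue, records):
        discrepancy = high_level_check(expected_revenue, billed_revenue)
        if discrepancy <= MATERIALITY_THRESHOLD:
            return []  # nothing material: skip the costly low-level tests
        return low_level_tests(records)  # zoom in to diagnose the root cause
    ```

    The point of the sketch is that coverage is never dropped: the high-level check always runs, while the detailed tests consume effort only when they have something material to diagnose.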

Comments are closed.