Three Good Ways (and a Bad Way) to Use Test Calls

Today I have the honor of attending the SIGOS Telecommunications and Digital Experience Conference, which is being held in Düsseldorf. My short talk will aim to be controversial, as usual. Not many audiences get excited by talks about test call generators (TCGs) for billing verification, but the Düsseldorf attendees may be an exception. What always strikes me about the topic is that I disagree with the majority who do not use test calling as part of their revenue assurance strategy, and I also disagree with most of those who do. Test calling is such an obvious control that everybody should use it, but the test approach is often so poor that it becomes a waste of money.

Making a call and seeing if it is charged correctly should be the easiest, most comprehensive test imaginable. I can do it without knowing anything about the telco or how it works. All I need to know is what calls I made (which should be easy) and what the contract with the telco says about prices (which should be easy enough). Then I will be able to spot if literally anything goes wrong. Your switch-to-bill RA system can have all the ‘real-time 100 percent coverage’ functionality you crave but it still will not be able to tell if half the calls on your network never generated a CDR in the first place. In that sense, there is good reason to use test calls to obtain a very wide span of control coverage that will identify any flaws or gaps in other controls.
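Purely to illustrate how simple the logic is, here is a minimal sketch in Python. The data structures and field names are hypothetical, not any vendor's schema; the point is only that a record of the calls made plus the contracted rate is enough to catch both mischarges and calls that never produced a billed record at all.

```python
from dataclasses import dataclass

# Hypothetical records: what the tester already knows (the calls made and the
# contracted rate) versus what the telco actually billed.

@dataclass
class TestCall:
    call_id: str
    duration_secs: int
    rate_per_min: float  # taken from the published tariff or contract

@dataclass
class BilledRecord:
    call_id: str
    charged_amount: float

def reconcile(test_calls, billed_records, tolerance=0.01):
    """Flag test calls that were never billed or were billed incorrectly."""
    billed = {r.call_id: r for r in billed_records}
    findings = []
    for call in test_calls:
        expected = round(call.duration_secs / 60 * call.rate_per_min, 2)
        record = billed.get(call.call_id)
        if record is None:
            # No charge appeared at all: invisible to a switch-to-bill check,
            # because there is no CDR to follow through the billing chain.
            findings.append((call.call_id, "missing from bill", expected, None))
        elif abs(record.charged_amount - expected) > tolerance:
            findings.append((call.call_id, "mischarged", expected, record.charged_amount))
    return findings

if __name__ == "__main__":
    calls = [TestCall("t1", 120, 0.10), TestCall("t2", 60, 0.10)]
    bills = [BilledRecord("t1", 0.20)]  # the second call never generated a charge
    for finding in reconcile(calls, bills):
        print(finding)
```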

But then I sympathize with those who never adopted test call programs. If they review what other telcos have done, and the benefits gained, they will realize that many real-life test strategies are so ineffective that they represent a colossal waste of money. The waste is not just reflected in the cost of the technology but also in the manpower that tends to produce report after report reaching the same conclusion, because the system or the people cannot, or will not, vary the tests being performed.

Some of the inefficiencies in test call programs stem from a lack of imagination, or an unwillingness to extract the most value from each test. As a real human being, every time I use my telephone I am also performing a test, even if testing is not my main goal. I can tell if the quality was poor, and I know if the call dropped. If the bill itemization and my memory are sufficiently detailed then I can also tell if the subsequent charge was correct. Human beings perceive the world as a whole, but telco test programs often chop it up into discrete goals, like whether a roaming partner is satisfying their part of a deal, or whether a particular route is being bypassed, or whether there is coverage in a certain part of the countryside. But a test is a test is a test. We could check all aspects of the telco’s performance for every call. Sometimes we decide not to do so because our test system is not versatile enough. Sometimes we fail to do so because different people in the telco have different objectives. This can lead to a terrible waste of money when one telco uses two separate test systems, run by two separate teams, to make different calls to check different test parameters. The potential cost savings should be obvious.
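To make the same point in code: a minimal sketch, again with hypothetical field names, of how a single test call could be evaluated against every objective at once, so one probe and one call serve the teams that today run separate systems for billing, quality, routing and coverage.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: one test call result carrying every observation the probe
# can make, so the same call answers several different assurance questions.

@dataclass
class TestCallResult:
    call_id: str
    completed: bool            # did the call connect at all?
    dropped: bool              # did it drop before the intended hang-up?
    mos_score: float           # audio quality estimate (1.0 to 5.0)
    observed_route: str        # route/CLI actually seen by the receiving probe
    expected_route: str
    expected_charge: float
    billed_charge: Optional[float]

def evaluate(result: TestCallResult) -> dict:
    """Run every check against the same call, returning pass/fail per objective."""
    return {
        "coverage": result.completed,
        "retention": result.completed and not result.dropped,
        "quality": result.mos_score >= 3.5,
        "routing": result.observed_route == result.expected_route,
        "billing": (
            result.billed_charge is not None
            and abs(result.billed_charge - result.expected_charge) <= 0.01
        ),
    }

if __name__ == "__main__":
    call = TestCallResult(
        call_id="t42", completed=True, dropped=False, mos_score=4.1,
        observed_route="direct", expected_route="direct",
        expected_charge=0.20, billed_charge=None,  # the charge never appeared
    )
    print(evaluate(call))  # every objective passes except billing
```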

The failure to vary tests may also be due to technology or people. I have listened to vendors who insist they can replicate every kind of scenario that ends a call, except one: when the B-party hangs up first. Whilst it is interesting to know a test can simulate losing radio coverage, we should also be able to perform realistic tests that reflect the 50-50 split in who puts their phone down first. However, some telco staff can be just as inflexible. Confronted with the chore of reprogramming or physically relocating TCGs in order to implement new tests, some people never bother. So instead of extending the coverage of testing, they complacently perform identical tests every day. It is little wonder that they rarely discover new errors.

Some of these deficiencies in the implementation of TCGs have soured attitudes towards using them more widely. Perhaps too narrow a focus on the objective means we fail to appreciate all the potential value that might be obtained from the test. In my presentation to the SIGOS audience I will mull the analogy between test calling and mystery shopping, and so make an argument for three other ways that TCGs could and should be used in practice, though they rarely are. You can see the slides below.

Eric Priezkalns
http://revenueprotect.com

Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Chief Executive of the Risk & Assurance Group (RAG), an association of professionals working in risk management and business assurance for communications providers. RAG was founded in 2003 and Eric was appointed CEO in 2016.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press.
