A while ago I noticed that a senior representative of a test call vendor was misquoted by a competitor wishing to make a point about their own products. In the spirit of that misquote, I offer my own hostage to fortune:
Nobody cares about the duration of calls.
Please feel free to quote me, if you dare. I wrote that sentence because it is true, to the extent that most generalizations are true. There are a few people who care about the duration of calls, but they have little influence, and they are mostly misunderstood. For example, I am one of the people who actually care about the duration of voice calls, and I spent a chunk of my career doing serious work to ensure durations were accurately recorded. But 20 years of studying call duration accuracy and talking to people about revenue assurance lead me to an inescapable conclusion: only a tiny minority of RA practitioners have more than a woefully superficial understanding of how the duration of calls is measured, and none of that minority would be promoted if they went to the trouble of mastering the detail of a subject that no executives care about either.
There are other ways to determine how much people care about call durations. For example, it gives me no pleasure to predict this article will receive fewer views than Commsrisk does on average, despite the provocative title of this piece. I can see how many readers have shown an interest in previous articles about the same subject, and they always underperform. That is because nobody cares about the duration of calls.
People would care if they thought the charge for a two-minute call was being applied to a call that lasted only one minute, or vice versa. Gross errors would cause a lot of upset and draw plenty of attention. The reason why nobody cares about the duration of calls is that everyone assumes they are accurate to the second. Eyes glaze over when you try to explain the significance of inaccuracies measured in milliseconds. This is despite a conclusion that follows inescapably from basic arithmetic: even a one millisecond deviation in measuring the duration of a call, if consistently applied to all calls, would mean one in a thousand calls must be under or overcharged after the duration is rounded to the nearest second. A consistent 10ms deviation would lead to the incorrect charging of 1 percent of all calls. A consistent 100ms deviation would lead to the incorrect charging of 10 percent of all calls, and so forth. But even a 100ms deviation is too small to engage the interest of most telco risk professionals, never mind their bosses. When everybody assumes that calls are accurate to the second, then nobody cares about the duration of calls.
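The arithmetic can be sketched with a short simulation. This is a hypothetical illustration, assuming the fractional seconds of true call durations are uniformly distributed and that billed durations are rounded to the nearest second:

```python
import random

def mischarge_rate(deviation_ms, trials=200_000, seed=42):
    """Estimate the fraction of calls whose rounded duration changes
    when every measurement is shifted by a fixed deviation (in ms)."""
    rng = random.Random(seed)
    d = deviation_ms / 1000.0
    flipped = 0
    for _ in range(trials):
        true_s = rng.uniform(0.0, 300.0)  # true call duration in seconds
        if round(true_s + d) != round(true_s):
            flipped += 1
    return flipped / trials

# A consistent 1ms shift changes the rounded duration of roughly
# 1 in 1000 calls; 10ms affects ~1 percent, 100ms affects ~10 percent.
for ms in (1, 10, 100):
    print(f"{ms}ms deviation -> {mischarge_rate(ms):.4f} of calls mischarged")
```

The same logic holds if durations are truncated rather than rounded: whatever the rule, a fixed deviation of d milliseconds shifts a call across a charging boundary whenever the true duration falls within d milliseconds of one, which happens for d in every thousand calls.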
What people care about is money, not milliseconds. So whilst some customers may be charged for an extra second, or receive a second of service for free, the extent to which people care is determined by the cost of that second. Whilst some calls remain expensive, there is a clear downward trend in how much people pay for usage. The cost of calls has been falling for a long time, and will continue to fall. And the importance of per-second usage rates has also declined because of pricing trends which have seen telcos change their tariffs and customers reorient their spending towards predictable expenditure on fixed monthly allowances and all-you-can-eat plans. So nobody previously cared about the duration of calls, and they will care even less in future.
The astute reader will notice that adding or deducting one second from 10 percent of calls would still represent a significant amount of money to some telcos. The significance of the gain or loss depends on the length of a typical call. A one-second deviation is worth only 0.028% of an hour-long call, but is worth 1.11% of a 90-second call. The data suggests average call durations are increasing, but there are still a lot of short calls. A comprehensive revenue assurance program should take an interest in deviations of this type, because it would be perverse to ignore a consistent 1 percent error in the charging of all voice calls whilst simultaneously aiming to get all leakages below 1 percent of revenues. However, many revenue assurance departments ignore this risk. They simply lack the skills and interest to check the accuracy of metering, and choose to focus on downstream problems with data instead. I do not condone their approach, but these RA functions do not care about the duration of calls.
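The percentages quoted above come from simple division, as a minimal check shows:

```python
def one_second_share(duration_s):
    """Percentage of a per-second rated charge represented by one second."""
    return 100.0 / duration_s

# One second is a negligible slice of an hour-long call
# but a meaningful slice of a 90-second call.
print(f"{one_second_share(3600):.3f}%")  # hour-long call: ~0.028%
print(f"{one_second_share(90):.2f}%")    # 90-second call: ~1.11%
```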
What prompted all this discussion of a topic that nobody cares about? When a speaker participating in one of my conferences is misquoted then I feel obliged to defend them, even if I choose to be diplomatic about who was responsible for the misquote. Xavier Lesage of Araxxe joined an expert panel that discussed test calls during RAG Online, and the moderator of that panel, Lee Scargall, read him a question submitted by a member of the audience. What follows is the transcript of what was actually said; you can check for yourselves by watching the video recording from the 23.30 mark.
Lee: There’s one good [question] here; maybe this one’s for you, Xavier. This morning Arnd [Baranowski, of Oculeus] talked about the duration of calls being accurate to between 10 and 100 milliseconds when you look at the SIP signals, and that means a lot of money can be made by adding an extra second to maybe 10 percent of calls. Are TCGs [test call generators] accurate enough to catch this kind of overcharging at a wholesale or retail level?
Xavier: Yes, the answer is yes. And unfortunately you don’t need that accuracy to detect problems because you know we recently completed a program to Mauritania, and, believe me, the problem was not a problem of milliseconds. The problem was a problem of tenths of seconds. Tenths of seconds. So for me, the answer is yes, but the problem is not a problem of milliseconds, that’s the wrong problem. The problem is fraud, it’s the set-up time in the case of roaming calls, and specific things, and believe me there are millions of minutes which are overcharged.
The way I interpret this exchange is that Xavier clearly stated test call generators can check metering to within 100ms, but so few telcos care about accuracy that much worse errors and frauds remain undetected. I struggle to understand why anyone would interpret Xavier’s answer as implying accurate metering is not required. The issue that Xavier alluded to, as an astute businessman in his own right, is not that he cannot check metering accuracy, nor that he should not need to. He was observing that too few customers demand accurate testing of the duration of calls.
I see no reason for rival vendors to enter into a squabble about whose equipment is more accurate when the real concern is that telcos simply do not care about metering accuracy. If this discipline is going to improve then telcos should set higher standards. Suppliers will then follow their lead. This leads me to wonder who has bothered to adhere to, contribute to, critique or even read the ETSI standard on metering and billing accuracy. I found its handling of duration to be inadequate, for reasons given here. Perhaps some might believe another body should issue a better industry-wide standard, but I am unaware of anyone making that argument publicly.
When I talk about higher standards, consider the following: who amongst this audience would dare to state an opinion on how much more accurate telcos could be, if they wanted the most accurate measurement of duration that is technically possible? I can think of several occasions when I encountered people who ignorantly believed recorded durations are already accurate to the millisecond. That belief only shows they lacked the basic conceptual framework to distinguish between producing a record that has three decimal places and producing a record that is accurate to three decimal places. So the people who most often express opinions about fantastic levels of accuracy are just showing they do not take sufficient care to understand the subject they are talking about.
If none of us are competent to discuss improvements in accuracy, then what does it mean to say we assured the accuracy of charging, except to say we are willing to tolerate a (poorly understood) degree of error because we decided it was not worth trying to be more accurate? We simply do not know how the actual level of accuracy attained compares to the greatest degree of accuracy that could be attained, and hence how much money is being lost or gained because of the difference between the two. Most RA practitioners are only familiar with the simple mathematical observations I made above, which reflect the value at risk when choosing to tolerate certain degrees of deviation, and they do not care to learn more.
Whilst the ETSI standard has weaknesses, at least it is a public standard that others could choose to follow and improve. I am less keen on the notion that the determination of accuracy should be privatized, with certificates handed out by businesses that do not explain how they assess the accuracy of the equipment they certify. What use is a piece of paper saying some equipment is accurate at recording durations to within 100ms if nobody knows how that determination was made?
I have personal experience of the lousy work done by one body that issues accuracy certificates. For some years this business reviewed the accuracy of metering for a telco where I worked, and their superficial approach was inconsistent with the levels of accuracy they pretended to guarantee. On one occasion their employee spent half a day reviewing the accuracy of metering for retail charging of calls and I was consequently notified of a ‘serious’ issue. It took me five minutes to determine they had spent all their time talking to somebody about the metering of records used for interconnect billing when they were only engaged to audit retail billing. This same firm had, prior to my appointment, approved a cavalier change to the processing of records that effectively increased the duration of all retail calls by 500ms, which made a nonsense of their public chatter about driving systematic improvements in accuracy. So when people throw around the names of businesses that certify accuracy, I want to know how they determined what is accurate. That can only be achieved through a public standard that everyone transparently knows how to adhere to, allowing tests of compliance to be independently repeated.
With those stories in mind, perhaps the only way currently available to resolve an argument about who accurately measures call durations would involve lining up rival suppliers and getting them to measure identical samples of calls conducted under conditions that correspond to how calls are made in real life, as opposed to idealized laboratory experiments. This means sometimes the A-party will hang up first and other times the B-party will end the call. Sometimes the two parties will be close together, and using the same network, and other times the parties will be on opposite sides of the planet, with a consequent delay to the signals that pass between networks. And the duration of a real-life call is never going to be a whole number of seconds, even if it is more convenient to design scheduling software that only permits the entry of tests that have an integer duration. After all, what does it even mean to say equipment has been proven to be accurate to 100ms if you perform tests that are 90 seconds long, or 91 seconds long, but cannot perform tests that are 90.5s or 90.3s or 90.89s?
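To illustrate why integer-only test schedules prove so little, consider a hypothetical faulty meter that simply discards fractional seconds. It passes every integer-duration test perfectly, yet its error on fractional durations can approach a full second:

```python
import math

def truncating_meter(true_duration_s):
    """A hypothetical faulty meter that discards fractional seconds."""
    return math.floor(true_duration_s)

# Integer-length test calls detect nothing: the meter appears exact.
for d in (90, 91, 120):
    assert truncating_meter(d) == d

# Fractional test calls expose errors far larger than 100ms.
for d in (90.5, 90.3, 90.89):
    error = d - truncating_meter(d)
    print(f"true {d}s, recorded {truncating_meter(d)}s, error {error:.2f}s")
```

A test regime restricted to whole-second durations would certify this meter as flawless, which is precisely why claims of 100ms accuracy mean nothing without fractional-duration tests.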
But probably nobody will ever persuade rival firms to agree to a fair and comprehensive comparison of how their technology measures the duration of calls under realistic conditions. Some people might think I should encourage them to do it, but I will not. Why have I not dedicated myself to pursuing that goal? Because nobody cares about the duration of calls.