IPDR-based Usage Metering Accuracy

Cripes! Did I just write that post heading? I surely did. When I use acronyms, they tend to be ROI, P/E ratio, BPR or USP. Today, for a change, I am going to focus on IPDRs and DOCSIS. Why? Because sometimes even I find that to do revenue assurance you need to roll up your sleeves, dive into the deep end, and start wading waist-deep through the technological soup of telecoms. That is especially true when it comes to understanding metering accuracy, the dark art of revenue assurance which is understood by few and bullsh*tted about by too many. Metering accuracy is the most extreme point on the network side of the network-billing continuum that RA people need to check for accuracy. RA people love love love reconciliations, but reconciliations are utterly useless if all you are doing is reconciling garbage in to garbage out. Checking for garbage in means checking the meter. Metering is the starting place for the most stereotypical RA discipline of all the RA disciplines – the assurance of usage charges. But even this most ancient of RA checks has to move with the times. As the world and business change, more and more fixed-line telcos and cable providers are considering a shift towards usage-based charging of data services. That means reconciling usage per IPDRs, and, moreover, checking the accuracy of the data in those IPDRs. I want to explore the topic before any goons from BT come along and make up some stats on usage accuracy, based on their own unrivalled nincompoopery. Trust me, BT have done it before – they will try to do it again. Speaking of which, whatever happened to the cVidya-backed, BT-fronted World Revenue Assurance Forum and its so-called chairman? I say ‘so-called chairman’ because ‘vendor stooge’ would be a more appropriate title for the BT buffoon that cVidya chose to spearhead their crypto sales operation ;) But I digress. Luckily for me, a new report in the public domain means we can talk about IPDR usage accuracy, and even discuss real and tested stats, without revealing secrets and without relying on anyone’s misinformation…

NetForecast, a network consultancy, were asked to audit the usage metering accuracy for Comcast subscribers served by the Cisco Cable Modem Termination System (CMTS) model 10000, which generates IPDRs. The report was written by NetForecast boss Peter Sevcik and has been made public; you can find it here. NetForecast’s report discusses the factors that determine accuracy, and the reasons why the volume of data sent and received by the end user will differ from the volume of data sent over the network. For their audit, NetForecast performed a series of controlled tests, creating and measuring data traffic from the user’s perspective, and comparing this to the metered usage per the IPDRs created by the CMTS. Their final conclusion was that the Cisco CMTS 10000 used by Comcast is accurate to plus or minus 0.5% over the course of a month’s usage. National regulators should take note – it is no good setting accuracy expectations that are more stringent than this, because the end-to-end accuracy obtained by an operator will never be better than the accuracy of the equipment they use, and the equipment used by operators is manufactured and sold on an international basis by suppliers like Cisco. This places an effective limit on the accuracy attainable by any operator.
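
To make the arithmetic concrete, here is a minimal sketch in Python of the kind of comparison NetForecast performed: take the volume measured at the user’s end, take the volume metered in the IPDRs, and express the difference as a percentage. The traffic volumes below are invented for illustration; only the plus or minus 0.5% tolerance comes from the report.

```python
# Minimal sketch of an IPDR-vs-user accuracy comparison. The traffic volumes
# are invented; only the +/-0.5% tolerance comes from the NetForecast report.

GB = 10**9

def metering_error_pct(user_measured_bytes: int, ipdr_metered_bytes: int) -> float:
    """Return the metering error as a percentage of the user-measured volume."""
    return 100.0 * (ipdr_metered_bytes - user_measured_bytes) / user_measured_bytes

# Hypothetical month: 100GB measured at the user's end, 100.3GB per the IPDRs.
error = metering_error_pct(100 * GB, int(100.3 * GB))
print(f"metering error: {error:+.2f}%")   # +0.30%, within the +/-0.5% band
assert abs(error) <= 0.5
```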

The Cisco CMTS 10000 reports usage in the form of IPDRs created every 15 minutes. The final mile of the user’s connection is between the CMTS and the cable modem in their home. Differences between the usage recorded by the CMTS and at the user’s home are hence down to differences between the volumes of data recorded at either end of the local coaxial or Hybrid Fiber-Coaxial (HFC) link used for the last mile. These can occur because protocols like TCP will cause packets to be retransmitted if lost in transit. If a downstream packet is lost between the CMTS and cable modem, it will be resent, meaning the volume of the missing packet is recorded at the CMTS but not at the user’s end. If an upstream packet is lost between the cable modem and CMTS, it will also be resent, meaning the volume of the missing packet is recorded at the user’s end, but not at the CMTS.
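
The following toy calculation shows the downstream case; the packet sizes and loss pattern are made up, but the direction of the skew is the point.

```python
# Toy example of last-mile packet loss skewing the CMTS and modem byte
# counters in opposite directions. Packet sizes and loss pattern are made up.

downstream = [1500, 1500, 1500]   # bytes the CMTS sends towards the cable modem
lost = {1}                        # the second packet is lost on the coax/HFC link

cmts_bytes = sum(downstream)      # the CMTS meters every packet it sends
modem_bytes = sum(size for i, size in enumerate(downstream) if i not in lost)

# TCP detects the loss and retransmits; the retransmission is counted by both ends.
retransmission = downstream[1]
cmts_bytes += retransmission
modem_bytes += retransmission

print(cmts_bytes, modem_bytes)    # 6000 vs 4500: the CMTS also counted the lost packet
# For upstream loss the mirror image applies: the modem counts more than the CMTS.
```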

The NetForecast report also highlights a number of issues that may influence the user’s perception of accuracy, as opposed to the actual accuracy. To begin with, when performing any kind of measurement, it is important to be precise about what is being measured. In this case, the volume of content that is of interest to, and can be measured by, the user is only the payload carried within the traffic. There are also overheads relating to the protocols needed to carry that traffic. The DOCSIS specification defines how subscriber traffic is carried within Ethernet frames. Anything within those frames, be it the user’s content or the overheads for protocols within the Ethernet frame, will be measured by the CMTS. The essence of the NetForecast test was to transfer files of known sizes via FTP. NetForecast calculated that the FTP, TCP and IP protocols added about 6.2% overhead to the traffic carried. So if a user replicated the tests by comparing the size of the files they FTP’d to the volume of data recorded by the IPDRs, they would see a variance of about 6.2%. Providers need to know that this overhead variance will occur, and may need to incorporate it into customer-facing processes for handling customers who complain about being overcharged for usage. Moreover, 6.2% is not a fixed amount for all traffic. The FTP protocol adds a low overhead, and many other protocols will add more, leading to higher variances between usage as perceived by the customer and usage as recorded by the network.
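
For the curious, a figure in the region of 6.2% can be roughly reconstructed from standard header sizes. The sketch below assumes full-size 1500-byte frames, IPv4 and TCP headers without options, Ethernet framing, and one delayed ACK per two data segments; these are my simplifying assumptions, not details taken from the report, but they land close to NetForecast’s number.

```python
# Back-of-the-envelope reconstruction of the ~6.2% protocol overhead figure.
# Header sizes and the one-ACK-per-two-segments pattern are standard-but-assumed
# simplifications; NetForecast's own methodology may differ in the details.

ETH = 14 + 4           # Ethernet header + frame check sequence, bytes
IP = 20                # IPv4 header, no options
TCP = 20               # TCP header, no options
MSS = 1500 - IP - TCP  # maximum TCP payload per 1500-byte MTU frame = 1460

ACK_FRAME = 64         # a bare TCP ACK padded to the Ethernet minimum frame size

# Downstream: two full-size data frames; upstream: one delayed ACK covering both.
payload = 2 * MSS
wire_bytes = 2 * (ETH + IP + TCP + MSS) + ACK_FRAME

overhead_pct = 100.0 * (wire_bytes - payload) / payload
print(f"approximate overhead: {overhead_pct:.1f}%")   # ~6.2%
```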

Other factors that NetForecast rightly consider are timing differences, caused by how long it takes to produce, poll and aggregate IPDR data, and the influence of rounding on the volumes measured. The average RA practitioner should routinely identify these factors for their business. The RA practitioner is more likely to overlook “background” traffic, which has nothing to do with how much the subscriber uses their service, but gets measured and added to the volumes of data all the same. In their tests for Comcast, NetForecast concluded that background traffic like SNMP polls and modem checks represented less than 1GB of traffic per month. In the context of Comcast’s service, this was unimportant, as Comcast’s usage monitoring is designed to identify use above 250GB per month, and there is a degree of inherent offset because monthly usage is always rounded down to a whole number of gigabytes. However, if background traffic were higher, or if pricing were more sensitive to use at lower volumes, it would become increasingly important for the provider to manage customer expectations relating to charges that the user cannot influence.
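
A short sketch of the rounding logic described above; the 250GB threshold is Comcast’s published figure, while the helper functions are purely illustrative.

```python
# Sketch of the rounding described above: monthly usage rounds down to whole
# gigabytes before being compared to the cap. The 250GB threshold is Comcast's
# published figure; billable_gb() and over_cap() are illustrative helpers.

GB = 10**9

def billable_gb(metered_bytes: int) -> int:
    """Round a month's metered volume down to a whole number of gigabytes."""
    return metered_bytes // GB

def over_cap(metered_bytes: int, cap_gb: int = 250) -> bool:
    return billable_gb(metered_bytes) > cap_gb

# Under 1GB of background traffic (SNMP polls, modem checks) cannot by itself
# tip a subscriber over the cap once the rounding is applied.
usage = 250 * GB + int(0.9 * GB)              # 250.9GB metered for the month
print(billable_gb(usage), over_cap(usage))    # 250 False
```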

Kudos to Peter Sevcik of NetForecast for writing a report that is clear, accessible, and covers all the important factors that determine the actual and perceived accuracy of IPDR-based usage metering of data services. Kudos also to Comcast for commissioning the report and for making it public. Transparency is an essential aspect of accurately and fairly charging customers. This report shows that whilst metering may be complicated, it does not have to be mystifying. Anyone else with an interest in accurate charges, and hence in accurate metering, should take note.

Eric Priezkalns

Eric is the Editor of Commsrisk. Look here for more about the history of Commsrisk and the role played by Eric.

Eric is also the Chief Executive of the Risk & Assurance Group (RAG), an association of professionals working in risk management and business assurance for communications providers. RAG was founded in 2003 and Eric was appointed CEO in 2016.

Previously Eric was Director of Risk Management for Qatar Telecom and he has worked with Cable & Wireless, T‑Mobile, Sky, Worldcom and other telcos. He was lead author of Revenue Assurance: Expert Opinions for Communications Providers, published by CRC Press.
