Crawler: From web of data to root cause

In part one of this article, guest author Shahid Ishtiaq described the research he performed into inefficient use of COTS business assurance tools. Now read the concluding part, where Shahid describes the CRAWLER solution, and how it improved the productivity of business assurance analysts in Etisalat.

The Pilot Project

A pilot project was launched internally to check whether automating the “fixed investigation” element of assurance would increase the quality and performance of analysts. We named this project “CRAWLER”. Many options were evaluated to achieve our goal. We eventually settled on developing CRAWLER using a mix of common VBA (an advance on Excel macros) and object-oriented programming. Combining both programming methodologies provided strong control over the front ends of the OSS/BSS systems. This article will not go into the details of the program logic, but it is very simple and can be thought of as something similar to a Microsoft Excel macro. Nothing was visible to the analyst; the CRAWLER logic queried all systems and brought the data to the analyst.

The following is a simple example of the logic used by CRAWLER. To control a web-based intranet business application, as a first step an Internet Explorer object was created and the navigation URL was triggered. When the intranet application responded and returned a full HTML page, all the fields of the page were picked up as objects. These fields were then filled with the required values and the relevant button was triggered. By using such simple controls and objects you can easily browse through the whole application and pick out the relevant data without going into the application's technicalities. It is a very simple method, it does not require any backend access, and it follows the same steps an analyst would take through the front-end interface.
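The original CRAWLER drove Internet Explorer from VBA; the same "pick the page's fields as objects, then fill them" idea can be sketched in Python using only the standard library. This is an illustrative analogue, not the author's code, and the page markup and field names (`msisdn`, `date_from`) are hypothetical; a real implementation would then post the filled values back to the application.

```python
from html.parser import HTMLParser

# Hypothetical intranet page, standing in for the HTML a real
# OSS/BSS application would return after navigation.
PAGE = """
<form action="/search">
  <input name="msisdn" type="text">
  <input name="date_from" type="text">
  <input name="submit_btn" type="submit" value="Search">
</form>
"""

class FormFieldPicker(HTMLParser):
    """Collect every <input> on the page as a name -> value mapping."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            attrs = dict(attrs)
            self.fields[attrs.get("name")] = attrs.get("value", "")

picker = FormFieldPicker()
picker.feed(PAGE)

# Fill the picked fields with the values the analyst would have typed;
# posting this mapping back would trigger the search, like clicking the button.
picker.fields["msisdn"] = "971501234567"
picker.fields["date_from"] = "2012-09-01"
print(picker.fields)
```

The point, as in the article, is that no backend access is needed: the script only sees what the front end serves.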

After gaining control over the OSS/BSS applications we moved to the next step and extracted alarms from the RAFM systems. At first we extracted only the relevant information from the different systems, and only at the analyst's request: the analyst would open an alarm, select the “CRAWL” option, and all the relevant data (based on the alarm type template) was available with one click. Later, to optimize performance, the data extraction system (CRAWLER) was made dedicated. At any time a pool of 30 to 40 alarms, equipped with the latest live data, was ready for investigation and decision making. Whenever an analyst opened an alarm and began the decision-making process, another alarm with full information was added to the pool in the meantime. Each alarm came with a set of attachments containing the information extracted from the different systems. In a few cases the attachments were images: snapshots of the results from the relevant systems, much like a screen shot.
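The dedicated pool described above can be sketched as a simple queue that is topped up each time an analyst opens an alarm. This is a minimal illustration, assuming a `crawl` stand-in for CRAWLER's data extraction and a pool size of 3 instead of the 30 to 40 used in the pilot.

```python
from collections import deque

POOL_SIZE = 3  # the pilot kept 30 to 40; smaller here for illustration

def crawl(alarm_id):
    """Stand-in for CRAWLER pulling live data and snapshots from the systems."""
    return {"id": alarm_id, "attachments": [f"snapshot_{alarm_id}.png"]}

class AlarmPool:
    """Keep a fixed-size pool of alarms pre-loaded with crawled data."""
    def __init__(self, pending_ids):
        self.pending = deque(pending_ids)
        self.ready = deque()
        self._refill()

    def _refill(self):
        # Crawl pending alarms until the pool is full again.
        while len(self.ready) < POOL_SIZE and self.pending:
            self.ready.append(crawl(self.pending.popleft()))

    def open_next(self):
        """Analyst opens an alarm; another one is crawled in the meantime."""
        alarm = self.ready.popleft()
        self._refill()
        return alarm

pool = AlarmPool(["A1", "A2", "A3", "A4", "A5"])
first = pool.open_next()
print(first["id"], len(pool.ready))  # pool is refilled back to 3
```

In the pilot the refill ran in the background while the analyst worked; the single-threaded version here keeps the sketch short.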

In the last phase of our project we also automated the alarm/case transitions, especially closure. These transitions were template-based, and most of the inputs and figures were calculated automatically once the analyst hit the relevant button. As already mentioned, the inputs made during the different transitions of alarm management are rarely utilized for reporting in any vendor system. In our pilot project we also developed some basic reports on this data. These reports showed us the trends in the company's different grey areas and helped us to prioritize and become more vigilant.
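A template-based closure of this kind can be sketched as a function where the analyst supplies only the judgement call and the figures are derived from the crawled data. The field names (`rated_amount`, `billed_amount`, `leakage`) are hypothetical, chosen to illustrate the idea for a revenue assurance alarm.

```python
from datetime import date

def close_alarm(alarm, root_cause):
    """Closure template: analyst supplies the root cause, figures are computed."""
    return {
        "alarm_id": alarm["id"],
        "root_cause": root_cause,  # the only manual input
        "leakage": round(alarm["rated_amount"] - alarm["billed_amount"], 2),
        "closed_on": date.today().isoformat(),
        "status": "CLOSED",
    }

closure = close_alarm(
    {"id": "A7", "rated_amount": 1250.00, "billed_amount": 1100.00},
    root_cause="Rating table not synchronised",
)
print(closure["leakage"], closure["status"])
```

Because every closure record carries the same structured fields, reports on trends across alarm types fall out of this data almost for free, which is the reporting gap in vendor systems the article points at.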

Conclusion

The pilot project was very successful and we were able to achieve our objectives, though there is still considerable room for improvement. The project has underlined the need to improve the alarm management sections of RAFM systems. To optimize performance, the analyst should be able to categorize the concrete steps needed to reach the final conclusion. It is also vital to keep content and context separate; the context information will help in making templates for specific alarm types. Mining should also be built on top of RAFM alarms so that the operator can focus more closely on the grey areas and trends in leakage. This research has also opened up many new avenues where we can focus on new types of controls. For example, in the case of RA, we can build PI controls on the DSR (Daily Sales Report), as these will help us gain more control over opportunity loss and the issues that result in revenue leakage at later stages.

Guest
From time to time, Commsrisk invites special guests to make an expert contribution.

2 Comments on "Crawler: From web of data to root cause"

  1. Hi Shahid,

    Interesting articles!

    Coincidentally, this chimes with some research and thinking the team here at Cartesian have been doing: adapting the crawler/spider methodologies we see in search engines into ways to verify, enrich and (potentially) even analyse data stored within an operator, either at subscriber level or aggregated to a higher level, as a means of increasing the productivity of RA functions and departments.

    There are some advantages – e.g. timely and simple verification of an issue that a batch-based RA system has reported – and also some complications/gotchas – e.g. when we think about the analysis use case there are some concerns about the ‘rawness’ of the data.

    Whatever the purist advantages/disadvantages of the technique, it’s certainly something that we all see the analysts in RA departments doing every day, with a click here and a copy-and-paste there, and hence an area where automation can increase productivity, just as Etisalat have seen.

    We’ve been working on a thinkpiece/whitepaper which, time permitting, will be ready in a month or so. Happy to share either direct or through the site as appropriate?

    Kind regards,

    Peter

  2. Shahid Ishtiaq | 10 Sep 2012 at 7:17 pm |

    Hi Peter,

    It’s great to know that your team is also working in a similar area. Methodologies like crawler/mob-agents/one-win are key to bringing operational excellence to complex information systems environments.

    Many challenges arise when we try to map these methodologies onto revenue assurance and fraud management; in particular, the maturity of the RAFM department is the critical success factor. In my article I mentioned the pilot project, which was very successful and also acted as a prototype for the overall concept. I would also add that the best implementation would use an n-tier architecture with the capability of transformation and communication between the systems.

    It would be great if my work can add some value to your research.

    Best Regards,
    Shahid Ishtiaq
