Crawler: From web of data to root cause

In part one of this article, guest author Shahid Ishtiaq described the research he performed into inefficient use of COTS business assurance tools. Now read the concluding part, where Shahid describes the CRAWLER solution, and how it improved the productivity of business assurance analysts in Etisalat.

The Pilot Project

A pilot project was launched internally to test whether automating the “fixed investigation” element of assurance would improve the quality and performance of analysts. We named this project “CRAWLER”. Many options were evaluated before we settled on developing CRAWLER using a mix of common VBA (a step up from simple macros) and object-oriented programming. Combining both programming methodologies gave us strong control over the front ends of the OSS/BSS systems. This article will not go into the details of the program logic, but it is very simple and can be thought of as something similar to a Microsoft Excel macro. Nothing was visible to the analyst; the CRAWLER logic queried all the systems and brought the data to the analyst.

The following is a simple example of the logic used by CRAWLER. To control a web-based intranet business application, the first step was to create an Internet Explorer object and trigger navigation to the application's URL. When the intranet application responded and returned a full HTML page, all the fields on the page were picked up as objects. These fields were then filled with the required values, and the relevant button was triggered. Using such simple controls and objects, you can easily browse through the whole application and pick out the relevant data without going into the application's technicalities. It is a very simple method, requires no backend access, and mirrors exactly the steps an analyst takes through the front end interface.
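The parse-the-page, pick-the-fields, fill-and-submit sequence described above can be sketched in a few lines. The original CRAWLER drove Internet Explorer from VBA; the sketch below uses Python's standard-library HTML parser purely for illustration, and the form, field names, and values shown are invented, not taken from the real intranet application.

```python
from html.parser import HTMLParser

class FormFieldPicker(HTMLParser):
    """Collect the input fields of a returned HTML page as objects,
    the way CRAWLER picks up page fields before filling them."""
    def __init__(self):
        super().__init__()
        self.fields = {}  # field name -> current value

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            a = dict(attrs)
            if "name" in a:
                self.fields[a["name"]] = a.get("value", "")

# Stand-in for the intranet application's response page (hypothetical form).
page = """
<form action="/lookup">
  <input name="msisdn" value="">
  <input name="date_from" value="">
  <input type="submit" name="search" value="Search">
</form>
"""

picker = FormFieldPicker()
picker.feed(page)

# Fill the picked fields with the required values; submitting the form
# would then correspond to triggering the relevant button.
picker.fields["msisdn"] = "971500000000"
picker.fields["date_from"] = "2013-01-01"
print(sorted(picker.fields))  # ['date_from', 'msisdn', 'search']
```

The point of the pattern is that everything happens through the same front end the analyst already uses, so no backend access is needed.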

After gaining control over the OSS/BSS applications, we moved to the next step and extracted alarms from the RAFM systems. Initially we extracted only the relevant information from the different systems, and only at the analyst's request: the analyst would open an alarm, select the “CRAWL” option, and all the relevant data (based on the alarm type template) was available in one click. Later, to optimize performance, the data extraction system (CRAWLER) was made a dedicated process. At any time a pool of 30 to 40 alarms, each equipped with the latest live data, was ready for investigation and decision making. Whenever an analyst opened an alarm and began the decision making process, another alarm with full information was added to the pool in the meantime. Each alarm carries a set of attachments containing the information extracted from the different systems. In a few cases the attachments are images: snapshots of the results from the relevant systems, similar to taking a screenshot.
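The dedicated pool described above behaves like a prefetching queue: keep 30 to 40 enriched alarms ready, and top the pool up each time the analyst takes one. This is an illustrative Python sketch under stated assumptions, not the production code; the `enrich` stand-in, the pool size constant, and the attachment names are all invented.

```python
from collections import deque

POOL_SIZE = 30  # CRAWLER kept roughly 30-40 enriched alarms ready

def enrich(alarm_id):
    """Stand-in for CRAWLER's extraction step: query each OSS/BSS
    system and attach the extracted results to the alarm."""
    return {"id": alarm_id,
            "attachments": [f"{s}_extract" for s in ("billing", "mediation")]}

class AlarmPool:
    def __init__(self, backlog):
        self.backlog = deque(backlog)  # raw alarm ids awaiting enrichment
        self.ready = deque()           # alarms with live data, ready to open
        while self.backlog and len(self.ready) < POOL_SIZE:
            self.ready.append(enrich(self.backlog.popleft()))

    def open_next(self):
        """Analyst opens an alarm; the pool is topped up in the background."""
        alarm = self.ready.popleft()
        if self.backlog:
            self.ready.append(enrich(self.backlog.popleft()))
        return alarm

pool = AlarmPool(range(100))
first = pool.open_next()
print(first["id"], len(pool.ready))  # 0 30 -- the pool stays topped up
```

The design choice is simple: the slow extraction work happens ahead of the analyst, so opening an alarm is always instant.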

In the last phase of the project we also automated the alarm/case transitions, especially closure. These transitions were template-based, and most of the inputs and figures were calculated automatically once the analyst hit the relevant button. As already mentioned, the inputs captured during the different transitions of alarm management are rarely used for reporting in any vendor system. In our pilot project we also developed some basic reports on this data. These reports showed us the trends in the different grey areas of the company and helped us to prioritize and become more vigilant.

Conclusion

The pilot project was very successful and we achieved our objectives, though there is still plenty of room for improvement. The project underlined the need to improve the alarm management sections of RAFM systems. To optimize performance, the analyst should be able to categorize the concrete steps that lead to the final conclusion. It is also vital to keep content and context separate: the context information helps in building templates for specific alarm types. Mining should also be built on top of RAFM alarms so that the operator can focus more closely on grey areas and leakage trends. This research has also opened up many new avenues where we can focus on new types of controls. For example, in the case of RA, we can build PI controls on the DSR (Daily Sales Report), as these will help us gain more control over opportunity loss and the issues that result in revenue leakage at later stages.

Guest
From time to time, Commsrisk invites special guests to make an expert contribution.