
Online Monitoring Part 1 – why do we need it & how does it work?

Recent years have been challenging. Set against the backdrop of COVID-19 and some fairly seismic geopolitical shifts, there have been some quietly staggering security incidents that reinforce the threats businesses and individuals face today.

Many of these security incidents have been small, parochial developments that mean a great deal in certain tech and security spheres, but little beyond those communities. Others, however, have been more widespread and garnered a great deal more press attention. The great majority of incidents directly affecting the general public are data breaches of some variety (hence the widespread interest they generate), although it is how different organisations have reacted to those incidents that has ultimately defined their legacy.

This, in turn, is what brings us on to the subject at hand – monitoring and alerting to identify and respond to any incident. The crux of the issue is this: can monitoring and alerting really be delivered in real time, and even if it can, does this add any real value? Indeed, if it does add value, how can we link it to known intelligence models in time to effect integrated decision-making?

Intelligence delivery must be guided by the Intelligence Cycle: Direction, Collection, Processing, Dissemination (DCPD). As a mechanism for planning the delivery of even the most complex of problem-solving tools, this is as time-honoured a concept as there is. The more complex the problem, the more doctrinal the solution must often be.

First, we have to outline the desired end state – what do we want to achieve and what does success look like? In most circumstances where an end-user requires alerting, this “finger on the pulse” function stems from needing even a minuscule advantage to outmanoeuvre and outthink a competitor or adversary. So, this is what success looks like – getting inside a competitor’s OODA loop and achieving decision advantage. But for this to happen, the client must place significant implicit trust in the system. In turn, this trust must be rooted in an appreciation that the people behind the system truly understand the doctrine that supports it. Ultimately, in any form of crisis, the speed with which you can react is what defines advantage, and that is what really drives the requirement for monitoring.

So, this must be our start point. DCPD...

Direction / UX / business analysis – the term is immaterial; the output is clear: understanding what is required and shaping an appropriate solution. This is another key founding principle of functional monitoring and alerting – we must always understand the status quo and deliver real baseline understanding before anything else can take place.
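
One way to make the direction phase concrete is to record it as a simple collection plan before any collection begins. The sketch below is illustrative only and assumes Python; the field names, sources and thresholds are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class CollectionPlan:
    """Captures the output of the direction phase as a reviewable record."""
    objective: str              # what success looks like for the end-user
    keywords: list[str]         # terms that define relevance
    sources: list[str]          # feeds or pages agreed with the end-user
    baseline_notes: str = ""    # the status quo, recorded before monitoring starts
    max_daily_alerts: int = 20  # keeps alert volume within analyst capacity

# Hypothetical example of a plan agreed with an end-user at the direction stage.
plan = CollectionPlan(
    objective="Early warning of data-breach reporting affecting the client's sector",
    keywords=["data breach", "ransomware", "credential leak"],
    sources=["https://example.com/security-news"],
    baseline_notes="No known active incidents at the time of onboarding.",
)
```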

Collection / Scraping – once again, the term means little. Fundamentally, this is about getting as much relevant information from as many sources as possible, as fast and as easily as possible. Breadth of insight is directly proportional to the variety of sources identified and leveraged. This builds on the insight into the problem delivered in the “direction” phase and targets collection accordingly, so that time is used more effectively down the line. The collection phase must ultimately be defined in light of one thing: “where quantitative meets qualitative”.
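
As a minimal sketch of what collection can look like in practice, the snippet below polls a handful of web sources and normalises the responses. It assumes the widely used requests library; the sources and record shape are placeholders, and a real collection layer would cover many more channels and handle deduplication, rate limits and authentication.

```python
import requests

def collect(sources: list[str], timeout: int = 10) -> list[dict]:
    """Fetch each source and return raw documents tagged with their origin."""
    documents = []
    for url in sources:
        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()
        except requests.RequestException:
            # A failed source should not halt the wider sweep.
            continue
        documents.append({"source": url, "content": response.text})
    return documents
```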

The next phase is by far the most difficult to achieve without teams of analysts – processing. How do we turn information, or quantitative data, into intelligence by adding real analytical rigour to it? How do we do this in a scalable fashion that harnesses technology without becoming over-reliant on it and losing the qualitative element? “Real-time” alerting (which paradoxically requires real-time processing) is, in its purest terms, nigh-on impossible to deliver; however, with technological tools complementing analytical value, something very close to real time can be achieved.

By leveraging technical capability to expedite and assure the process, time is created for insightful analysis to be overlaid. More importantly, by rooting the collection process in real intelligence understanding, the volume of alerts triggered should sit well within the capacity of the analyst to manage. Quantitative without qualitative (and vice versa) does not add sufficient value for a user to achieve decision advantage. However, enabling the two to happen in concert ensures real insight and reliability.
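
To illustrate that balance, the sketch below applies simple keyword rules (standing in for the requirements agreed at direction) to the collected documents and caps the output so an analyst can realistically review every alert. The scoring is an assumption made for the example, not a substitute for analytical judgement.

```python
def process(documents: list[dict], keywords: list[str], max_alerts: int = 20) -> list[dict]:
    """Turn raw documents into a manageable set of candidate alerts for analyst review."""
    alerts = []
    for doc in documents:
        text = doc["content"].lower()
        hits = [kw for kw in keywords if kw.lower() in text]
        if hits:
            alerts.append({"source": doc["source"], "matched": hits, "score": len(hits)})
    # Highest-scoring items first, truncated to what an analyst can triage.
    alerts.sort(key=lambda alert: alert["score"], reverse=True)
    return alerts[:max_alerts]
```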

The final issue is dissemination – this is important but needn’t be a stumbling block, and it is vital to articulate with the end-user during initial direction. Again, this must be defined by simple intelligence principles: what does the end-user want to know, and how do they want to receive it?
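
A dissemination step can be equally simple once the channel is agreed at direction. The sketch below pushes analyst-approved alerts to a webhook; the endpoint and payload fields are assumptions, and email, a dashboard or a daily digest would follow the same pattern.

```python
import requests

def disseminate(alerts: list[dict], webhook_url: str) -> None:
    """Send each approved alert to the end-user's agreed channel."""
    for alert in alerts:
        requests.post(
            webhook_url,
            json={"source": alert["source"], "matched": alert["matched"]},
            timeout=10,
        )
```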

