Disparate data causing headaches for ANZ businesses
Wed, 28th Sep 2022

Gone are the days when developers could get away with merely writing code. Many are now expected to be accountable for that code, keeping it ‘clean’ right up to deployment.

As a result, having the right data at the right time to make informed decisions has never been more important. One of the biggest roadblocks to this is having to manage multiple data sources and monitoring tools.

This fragmentation produces an explosive amount of operational data that’s difficult and time-consuming to analyse, and the resulting lag and outages lead to poor customer experiences and, inevitably, lost business. Legacy systems and processes, siloed teams and a lack of consistency all contribute to the chaos.

Despite this level of pressure, organisations in the Asia-Pacific region are the least likely to have unified telemetry data compared with those operating in EMEA and the Americas – with many toggling between six or seven tools at a time, according to the 2022 New Relic Observability Forecast. 

Currently, a third (33.3%) of Australian and more than a quarter (27.8%) of New Zealand respondents stated that they use observability to support cost-cutting (consolidation) efforts. More than half (51.8%) of Australian businesses indicated that they primarily learn about software and system interruptions through multiple monitoring tools, and 26.5% indicated they still primarily learn about interruptions and outages through manual checks/tests or through incident tickets and complaints.

Overall, almost a quarter of ANZ respondents indicated that too many monitoring tools (24.3%) and siloed data (23.3%) are primary challenges preventing them from prioritising or achieving full-stack observability.

Tool sprawl significantly impacts an engineering team’s ability to do its job efficiently and effectively. A piecemeal approach produces data silos and blind spots, adds toil for tech teams forced to switch between tools, and ultimately increases mean time to detect (MTTD) and mean time to resolution (MTTR).
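To make these two metrics concrete, here is a minimal sketch in Python using hypothetical incident timestamps. Exact definitions vary between teams; in this sketch both metrics are measured from the moment a fault begins:

```python
from datetime import datetime

# Hypothetical incident records: when the fault began, when it was
# detected, and when it was resolved.
incidents = [
    {"start": datetime(2022, 9, 1, 10, 0),
     "detected": datetime(2022, 9, 1, 10, 12),
     "resolved": datetime(2022, 9, 1, 11, 30)},
    {"start": datetime(2022, 9, 5, 14, 0),
     "detected": datetime(2022, 9, 5, 14, 3),
     "resolved": datetime(2022, 9, 5, 14, 45)},
]

# MTTD: average time from fault start to detection.
mttd = sum((i["detected"] - i["start"]).total_seconds() for i in incidents) / len(incidents)

# MTTR: average time from fault start to resolution.
mttr = sum((i["resolved"] - i["start"]).total_seconds() for i in incidents) / len(incidents)

print(f"MTTD: {mttd / 60:.1f} min, MTTR: {mttr / 60:.1f} min")
```

Every tool an engineer has to check before an incident is even confirmed adds directly to the "detected" timestamp above, which is why tool sprawl pushes both numbers up.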

How can engineers be expected to deliver at such a high standard when they are dealing with disparate data and are unable to see all the pieces of the engineering puzzle? Here are three tips to help organisations tackle the issue and make sense of their disparate data. 

Agree on a single source of truth 

As a business grows and scales, many find that data from various tools (such as open-source programs) is completely siloed, resulting in blind spots.

There can also be reporting inconsistencies from one tool to another depending on how that data is captured. For example, data may be aggregated every five seconds in one tool versus every minute in another.

It’s difficult to correlate data captured in different ways. If all the different teams (developers, DevOps and BizDevOps) had the same correlated data in their hands, they could make swift, informed decisions without manual interpretation or debate.
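To illustrate the aggregation mismatch above, the sketch below uses pandas (an assumption, with made-up series) to resample a five-second feed and a one-minute feed onto a common one-minute grid so the two tools’ data can be compared directly:

```python
import pandas as pd

# Hypothetical tool A: a metric aggregated every 5 seconds.
idx_a = pd.date_range("2022-09-28 10:00", periods=120, freq="5s")
tool_a = pd.Series(range(120), index=idx_a, name="tool_a")

# Hypothetical tool B: the same metric aggregated every minute.
idx_b = pd.date_range("2022-09-28 10:00", periods=10, freq="1min")
tool_b = pd.Series(range(10), index=idx_b, name="tool_b")

# Resample both onto a common 1-minute grid before correlating.
aligned = pd.concat(
    [tool_a.resample("1min").mean(), tool_b.resample("1min").mean()],
    axis=1,
)
print(aligned.head())
print(aligned.corr())
```

This kind of normalisation is exactly the manual interpretation step that a shared, consistently captured dataset removes.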

Do a tech spring clean 

Before deciding to consolidate data into one place, organisations should undergo a tool rationalisation and data consolidation exercise: gain a clear overview and understanding of every tool used in the organisation before starting the process of elimination.

Then, build a comprehensive set of use cases and outline possible approaches for each, including piloting critical scenarios. The aim is to align specific business objectives to an observability strategy. This might be a goal such as reducing MTTR and downtime, achieving higher customer sentiment, or delivering services faster. Once KPIs are set, tools can be mapped to teams and outcomes.

Next, determine what gaps remain in achieving an integrated view of all systems. Even if you have multiple tools, there are often areas that are missed.
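As a rough illustration of this exercise (the tool names, use cases and KPIs below are all hypothetical), keeping the inventory as structured data lets overlaps and coverage gaps fall out mechanically:

```python
# Hypothetical tool inventory: which tools claim which use cases,
# and which KPI each use case is meant to move.
inventory = {
    "infrastructure monitoring": {"tools": ["ToolA", "ToolB"], "kpi": "reduce MTTR"},
    "log analysis":              {"tools": ["ToolB"],          "kpi": "reduce MTTR"},
    "synthetic checks":          {"tools": [],                 "kpi": "improve uptime"},
}

for use_case, entry in inventory.items():
    if len(entry["tools"]) > 1:
        # Multiple tools covering one use case: consolidation candidates.
        print(f"Overlap: {use_case} covered by {entry['tools']}")
    elif not entry["tools"]:
        # No tool mapped: a blind spot in the integrated view.
        print(f"Gap: {use_case} ({entry['kpi']}) has no tool mapped")
```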

Invest in an open, connected observability platform

Once you have rationalised your toolsets, you can strategically combine relevant data into a centralised observability platform. For organisations with extensive legacy architecture, this may involve a significant migration and new processes. Companies that have gone digital are frequently saddled with technology designed for bricks-and-mortar operations.

It’s better to start with a modern application architecture than to retrofit. Remove technology redundancies and create a standardised approach to integrating digital activities. This may be more a marathon than a sprint, taking years rather than months. Modern tools may not instantly replace legacy counterparts, so taking gradual steps is wise.

Teams should be fully trained on the new platform, with documentation maintained and knowledge shared across the organisation. Instead of autonomous teams building, deploying and maintaining their own services as before, everyone needs to communicate and collaborate. Start with the basics, such as reactive use cases. Over time, as engineers learn new skills and unlearn old habits, they can adopt a more proactive approach.

With the right observability platform, all data types – metrics, events, logs and traces – can be brought into one place, eliminating guesswork and ensuring that the data is captured and measured in the same way to give an accurate, real-time picture of how your tech is performing.
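By way of illustration rather than a reference to any particular vendor’s product, the sketch below uses the open-source OpenTelemetry Python API, which emits traces and metrics (and, via its logging bridge, logs) through a single instrumentation layer. Backend and exporter configuration is omitted, and the service, span and counter names are hypothetical:

```python
# pip install opentelemetry-api
from opentelemetry import trace, metrics

# One instrumentation layer for traces and metrics; without an SDK
# configured, these calls are safe no-ops.
tracer = trace.get_tracer("checkout-service")      # hypothetical service name
meter = metrics.get_meter("checkout-service")
orders = meter.create_counter("orders_processed")  # hypothetical metric

with tracer.start_as_current_span("process_order"):  # hypothetical span
    # ... business logic would run here ...
    orders.add(1, {"region": "anz"})
```

Because every signal flows through the same standard, the aggregation and correlation problems described earlier are solved at the point of capture rather than patched over afterwards.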