Friday 15 July 2016
The widespread availability of analytics has thrust us into a period where we can assess past performance and make informed decisions about how to move into the future. The growth of self-service has only served to open this up to everyone – including non-technical users.
However, the quality of the data fed into our systems is sometimes questionable, and the result is that it skews our decision-making one way or another – sometimes for the better, sometimes for the worse – but either way the decision is based on inaccurate data.
The objective now is to improve the quality of the data we feed into our analytics systems, so that we build the strongest possible decision-support platform and maximise ROI.
We can exploit ‘data augmentation’ to do exactly this. By combining the data we already have with data from sources both internal and external to the organisation, we augment the data we currently own; the additional relevant data improves its accuracy and validity. For a comprehensive definition of data augmentation, Techopedia has a very well-written one.
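In practice, the simplest form of this kind of augmentation is a join: internal records are enriched with attributes from a second source keyed on a shared identifier. The sketch below is purely illustrative – the field names (`customer_id`, `revenue`, `industry`, `region`) are hypothetical, not drawn from any particular system.

```python
# Minimal sketch of data augmentation: enrich internal records with
# attributes from an external source, keyed on a shared identifier.
# All field names here are illustrative assumptions.

internal = [
    {"customer_id": 1, "revenue": 1200},
    {"customer_id": 2, "revenue": 800},
]

# External (e.g. third-party) data keyed on the same identifier
external = {
    1: {"industry": "retail", "region": "EMEA"},
    2: {"industry": "finance", "region": "APAC"},
}

def augment(records, extra, key="customer_id"):
    """Merge extra attributes into each record; records whose key is
    absent from the external source are left unchanged."""
    return [{**record, **extra.get(record[key], {})} for record in records]

augmented = augment(internal, external)
```

Each augmented record now carries extra variables (`industry`, `region`) that reporting can slice on, which is exactly the additional context the original warehouse data lacked.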
Quantity Sometimes Equals Quality
We often hear the phrase ‘quality over quantity’ – but what if, in this instance, we rephrased it as ‘quantity = quality’? That is the mantra of data augmentation.
Supplementing the original data stored in your organisation’s data warehouse with additional data enables more in-depth reporting. Unifying disparate data sources – both across the organisation and externally – adds extra variables to the reporting process and provides clarity.
The issue that commonly arises when adding more data is that it generally adds complexity, especially when applying data governance. Take big data as an example: with the explosion of data we have seen in recent years, driven by data-sharing applications such as social media, we face a data landscape more complex than ever before.
The range of data sources available to us has vastly increased, even compared with just a few years ago, resulting in a variety of data source formats ranging from structured to semi-structured and unstructured data.
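To make the distinction concrete, here is a small sketch of what each format looks like when it arrives in an analytics pipeline. The sample values are invented for illustration; the point is that structured and semi-structured sources can be parsed mechanically, while unstructured text needs its own extraction step.

```python
import csv
import io
import json

# Structured: CSV with a fixed, known schema
csv_text = "id,score\n1,0.9\n2,0.4\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))

# Semi-structured: JSON, where fields may be nested or optional
json_text = '{"id": 3, "meta": {"score": 0.7}}'
doc = json.loads(json_text)

# Unstructured: free text carries no schema at all; extracting a
# usable signal from it is a separate (often ML-driven) problem.
free_text = "Customer 4 rated us 5 stars and praised the delivery speed"
```

The governance challenge the article describes follows directly from this spread: rules that are easy to enforce on `rows` are much harder to state for `free_text`.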
The Big Data Challenge
Because big data – and the processes for capturing and analysing it – is still in an immature phase, data governance remains difficult to apply with maximum effect. The large quantities of unstructured data that organisations have never before utilised are now seen as the key to unlocking truly meaningful business insights.
It is currently estimated that around 90% of the available data in the world is unstructured, leaving a huge opportunity for organisations to exploit. Rick Sherman, founder of Athena IT Solutions, a consultancy in Stow, Massachusetts, warns that “Trying to manage or control everything in unstructured data would be a big mistake”. The view here is that a good portion of this data will be worthless, so the optimum analytical strategies will focus on separating the ‘noisy’, low-quality data from the good.
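One way to act on that advice is to apply simple validity rules before data ever reaches the augmentation or analysis stage, keeping only records that meet a minimum quality bar. The rules and field names below are illustrative assumptions, not a prescribed method.

```python
# Hedged sketch: separating "noisy" records from usable ones with
# simple validity rules. Field names and thresholds are assumptions.

records = [
    {"id": 1, "text": "Great product, fast delivery", "rating": 5},
    {"id": 2, "text": "", "rating": None},            # empty -> noisy
    {"id": 3, "text": "asdfgh", "rating": 12},        # rating out of range
    {"id": 4, "text": "Support was slow to respond", "rating": 2},
]

def is_usable(record):
    """Keep records with non-empty text and a rating in the 1-5 range."""
    return (
        bool(record["text"].strip())
        and record["rating"] is not None
        and 1 <= record["rating"] <= 5
    )

clean = [r for r in records if is_usable(r)]
noisy = [r for r in records if not is_usable(r)]
```

Real pipelines would use richer checks (deduplication, language detection, outlier scoring), but even rules this crude embody the strategy Sherman describes: govern what is worth governing, and discard the rest early.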
Augmenting the foundation of data already available in your organisation with this additional insight, and combining it with a data-driven culture, will likely prove to be the key to driving a successful analytics strategy – and to separating high performers from low performers in the new digital age.