Friday 24 June 2016
What comes to mind when faced with the notion of ‘seeing’ into the future?
While fortune tellers and idiosyncratic FBI agents have given the idea a supernatural gloss in popular culture, the reality is far more mundane: data analysis.
Without knowing it, you may already have been exposed to the results of predictive analytics: online shopping recommendations based on previous purchases, suggested films to watch next and the adaptive language capabilities of a smartphone are all everyday examples of machine learning at work. The more you interact with a service, the more it can tailor itself to your specific needs. In short: the more data, the better.
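One of the simplest versions of the "recommendations based on previous purchases" idea is item co-occurrence counting: items frequently bought together are suggested to the next shopper. The sketch below uses entirely made-up baskets and a hypothetical `recommend` helper; real recommender systems are far more sophisticated, but the principle is the same.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories: each inner list is one customer's basket.
baskets = [
    ["laptop", "mouse", "keyboard"],
    ["laptop", "mouse"],
    ["keyboard", "monitor"],
    ["laptop", "monitor", "mouse"],
]

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(int)
for basket in baskets:
    for a, b in combinations(sorted(set(basket)), 2):
        co_counts[(a, b)] += 1

def recommend(item, n=2):
    """Suggest the items most frequently bought alongside `item`."""
    scores = defaultdict(int)
    for (a, b), count in co_counts.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:n]]

print(recommend("laptop"))  # 'mouse' is bought with laptops most often
```

Notice that the model is nothing more than the data itself: every new basket sharpens the counts, which is exactly why "the more data the better".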
Arguably, one of the most important aspects of data analysis hinges on the discovery of trends to glean further insight. Nevertheless, there is traditionally a final step that carries a high chance of skewing or missing key results – the involvement of a fallible human being.
With machine learning, these intricate links can be extracted without human intervention. This is especially important in the analysis of big data, where huge amounts of data can be systematically matched and sorted according to a set algorithm. While the algorithm itself is designed by an analyst, its application is the sole responsibility of the computer program poring over the data.
The importance of this operation is twofold: not only is the analysis much faster, but it is based on an iterative model – the machine actively learns from new data to adapt its analysis, maintaining the reliability of future results as fresh data becomes available. The aim is to produce innovative models and algorithms that can not only evolve with the data but predict future outcomes based on past findings.
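The iterative idea can be sketched with a minimal example of online learning: a one-feature linear model whose parameters are nudged a little with every new observation (stochastic gradient descent), rather than being fit once and frozen. Everything here – the data stream, the learning rate, the helper names – is an illustrative assumption, not any particular product's method.

```python
def make_online_model(learning_rate=0.02):
    """Return (update, predict) for a model y = w*x + b learned one example at a time."""
    state = {"w": 0.0, "b": 0.0}

    def update(x, y):
        # Predict with the current parameters, then nudge them to shrink the error.
        error = state["w"] * x + state["b"] - y
        state["w"] -= learning_rate * error * x
        state["b"] -= learning_rate * error

    def predict(x):
        return state["w"] * x + state["b"]

    return update, predict

update, predict = make_online_model()

# A stream of observations generated by an underlying rule y = 2x + 1.
# The model never sees the rule, only one (x, y) pair at a time.
for step in range(2000):
    x = step % 10
    update(x, 2 * x + 1)

print(predict(5))  # converges towards the true value 2*5 + 1 = 11
```

Each call to `update` is cheap, so the model can keep absorbing new data indefinitely – the "actively learns from new data" property described above.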
Take the new Google car, for example. Through extensive testing, it has been found that the self-driving program is too cautious, following the 'perfect' driving rules all of the time. The reaction has been to make the program behave more 'aggressively', with the aim of better integrating the computer-controlled cars with those driven by people. Edging forward at a stop sign or nosing into moving traffic are relatively minor violations committed by many drivers that, through machine learning, can be 'taught' to the program as it drives. Until every car on the road is controlled by a computer, these compromises must be made so that integration is as seamless as possible.
Another scenario is safeguarding, where specific implementations can reap huge benefits in real-life situations. One example is identifying potential child abuse victims. Through machine learning, data on previous cases can be analysed to make predictions using key identifiers: vast amounts of siloed data such as hospital admissions, locality, age of parents and school attendance can be correlated to distinguish the key variables that may point to future cases. Bayesian Networks (BNs), which model how these variables depend on one another probabilistically, are vital to keeping the analytical output valid. Simply put, the right analysis would help identify, protect and safeguard those who might otherwise have gone unnoticed without statistical intervention.
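The flavour of this kind of reasoning can be shown with naive Bayes, a simplified special case of a Bayesian network in which the indicators are assumed independent given the outcome. Every number below is made up purely for illustration – real safeguarding models are built and validated by specialists on actual case data.

```python
def posterior(prior, indicators, observed):
    """Update P(at risk) after seeing which binary indicators are present."""
    p_risk, p_safe = prior, 1.0 - prior
    for name, present in observed.items():
        p_given_risk, p_given_safe = indicators[name]
        if present:
            p_risk *= p_given_risk
            p_safe *= p_given_safe
        else:
            p_risk *= 1.0 - p_given_risk
            p_safe *= 1.0 - p_given_safe
    # Normalise the two unnormalised posteriors.
    return p_risk / (p_risk + p_safe)

# Hypothetical indicators: (P(indicator | at risk), P(indicator | not at risk)).
indicators = {
    "repeat_admissions": (0.6, 0.1),
    "poor_attendance":   (0.5, 0.2),
}

p = posterior(0.01, indicators,
              {"repeat_admissions": True, "poor_attendance": True})
print(round(p, 3))  # → 0.132, well above the 1% prior
```

The point is not the specific numbers but the mechanism: each observed variable shifts the probability in a principled way, so cases that would look unremarkable on any single indicator can still be flagged by the combination.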
The use of data analysis in business has gone hand in hand with the need not only to better understand the data available, but to increase the amount of data that can be analysed. As computing power increases and the price of hardware continues to fall, more powerful analysis can be achieved without the traditional analyst-led prerequisite. Data analysis retains its functionality at a much larger scale, and the big data challenges facing organisations can be handled by computers that adapt to new data through machine learning. Predictive analytics has already taken care of our shopping and streaming habits; now it will move on to the bigger challenges that are the cornerstones of big data.