
Our 2018 technology predictions are here and ready for you!


1) Chatbots

Most of us are now familiar with websites offering a 'live chat' feature where you can ask questions and seek advice in real time from a human. Unfortunately, these chat facilities are generally staffed 9-to-5, Monday to Friday, and don't offer 24/7 coverage.

This is where chatbots fit in! A chatbot can interact with customers and website visitors much as a human would, providing an out-of-hours service: answering basic introductory questions, filtering enquiries to determine whether a visitor needs to speak with an agent, booking appointments and so on.
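
To make the idea concrete, here is a minimal sketch of how such a triage bot might work. The intents, keywords and canned responses are invented for illustration and are far simpler than a production chatbot:

```python
# A minimal rule-based triage chatbot. The intents, trigger keywords
# and responses are hypothetical examples, not a production design.

TRIAGE_RULES = [
    # (intent, trigger keywords, bot response)
    ("book_appointment", {"appointment", "book", "schedule"},
     "I can book that for you. What day suits you best?"),
    ("speak_to_agent", {"agent", "human", "complaint"},
     "I'll pass you to an agent when the office opens at 9am."),
    ("opening_hours", {"hours", "open", "closed"},
     "Our agents are available 9am-5pm, Monday to Friday."),
]

def reply(message: str) -> str:
    """Match the visitor's message against each intent's keywords."""
    words = set(message.lower().replace("?", " ").replace("!", " ").split())
    for intent, keywords, response in TRIAGE_RULES:
        if words & keywords:          # any trigger keyword present?
            return response
    return "Sorry, I didn't catch that. Could you rephrase?"

print(reply("Can I book an appointment?"))   # -> book_appointment reply
print(reply("I want to speak to a human"))   # -> speak_to_agent reply
```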

Chatbots will become widely used throughout 2018, helping organisations to streamline processes and improve customer service.

2) Voice-First Interaction

First-generation voice interaction devices are everywhere, and for most people Siri, Cortana and Alexa are familiar names around the house. They are great at what they do – answering questions, calling contacts for us, telling us jokes, helping us to cook – but it's all very one-sided.

We predict that in 2018 and beyond we will see second-generation platforms conversing with humans – asking questions, prompting thoughts and much more. Currently, Alexa, Siri and the like can only understand and respond to one question at a time, but going forward we predict users will be able to ask multiple questions in a single exchange. Voice-first devices currently reply with very formal, scripted responses; look out for devices designed to build relationships and to emulate their owners in their replies.

3) AR E-Commerce

Think Pokémon Go for retail!

2016 saw augmented reality (AR) take over lunch breaks, spare time and daily commutes as the world went crazy for Pokémon hunting. For most, this was their first taste of AR, where digital content is overlaid onto the physical world. The first signs of retailers dipping their toes into this world have appeared, with Amazon and Gap offering AR shopping via mobile devices equipped with the Tango camera – currently limited to Lenovo and, later in the year, Asus phones. Shoppers can now move furniture around their house, place TVs on walls, try clothes on mannequins and zoom in and out depending on the level of detail they require, all within the setting of their own room or office.

Watch out for more devices with this camera capability, more retailers offering these services and more customisable details, such as personalised mannequins – truly try before you buy.

4) IoT, Big Data and AI

In our other predictions blog this year we talk about IoT and how 2018 is going to see more devices, more smart homes and more technology streamlining home life. In this post we focus on the data: what's going to happen to it? With more devices come more sensors and more data; this is where big data thinking comes into play, combining these vast datasets to create meaningful insight.

Watch out for AI and machine learning stepping in throughout 2018 to use the data from IoT devices as we start to receive high volumes of more precise data. The uses of this data are endless, from improving device performance and enhancing day-to-day activities to targeted marketing and truly understanding our customers. Expect a 'create and collect' pattern in 2018 as the tsunami of data hits us and organisations work out what to do with this abundance.


A common question we come across at Connexica is fairly fundamental to our business practice – what exactly is data discovery?

Like any industry term championed by Gartner, it can be open to interpretation, but we see data discovery as a methodology that can be adopted by a business in order to support data driven decision-making.

This methodology relies on combining data preparation with business intelligence and predictive analytics to give all users a full perspective on their data.

We believe the business user should have access to the data preparation tools, and our core data discovery platform combines the functionality of a data warehouse with the analytics of a business intelligence tool, all through easy-to-understand self-service interfaces.

Data discovery is about understanding the relationships between data and using analysis to find ways to improve business practices, in formats easily digested by business users. It's easier to spot an anomaly in a smart infographic than by poring over the numbers manually, in any case!

We have taken data discovery to the next stage by introducing natural language searching into the mix. Guiding users with terminology they understand removes the need for extensive training or experience in data science. We want to let users search for what they want in the words they would normally use – to focus on creating and understanding insights that positively influence the business instead of getting bogged down learning yet another difficult-to-understand product.
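
The mechanics of CXAIR aren't shown here, but the general idea can be sketched in a few lines of Python: map everyday search words onto structured records and return the matches. The records and synonym vocabulary below are invented for illustration:

```python
# A toy natural-language search over business records. The records and
# synonym vocabulary are invented; this is not CXAIR's implementation.

RECORDS = [
    {"customer": "Acme Ltd", "region": "North", "revenue": 120000},
    {"customer": "Bright Co", "region": "South", "revenue": 87500},
]

# Everyday words a business user might type, mapped onto field names.
SYNONYMS = {"sales": "revenue", "turnover": "revenue", "area": "region"}

def search(query: str):
    """Return records whose values mention any word of the query."""
    words = [SYNONYMS.get(w, w) for w in query.lower().split()]
    hits = []
    for record in RECORDS:
        text = " ".join(str(v).lower() for v in record.values())
        if any(w in text for w in words):
            hits.append(record)
    return hits

print(search("acme turnover"))   # -> matches the Acme Ltd record
```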

If data is the oil of the modern business, data discovery is the refinery that turns it into fuel for long-term decision making and strategic planning.

Why should you care?

Data discovery is easy to use, agile and scalable to any organisation size. Data discovery allows you to pick out the important bits from a unified source of data, and translate these insights into formats easily digestible by other business users across the organisation. Data discovery helps you establish a culture of data driven decision making through combining statistical analysis with human intuition and intelligence.

It is more than Business Intelligence, more than Data Warehousing and more than Predictive Analytics.

Data discovery is the vehicle for enterprise digital transformation and the establishment of a truly 21st-century data driven culture.

Don’t waste time asking questions arising from your business intelligence and data warehouse strategy. Get straight into the detail by using a methodology that provides answers instead.


The GDPR (General Data Protection Regulation) deadline for readiness looms ever closer, and the world of IT is still struggling to execute a clear and defined GDPR strategy. No wonder – with over 3,000 amendments since the first draft it is officially the 'most heavily lobbied piece of legislation ever', and the completed regulation runs to over 200 pages.

Information Age have suggested nearly half of businesses are not ready for GDPR, and in a recent webinar I attended, only 15% of respondents claimed they thought their business would be ready for 'go-live' on May 25th 2018.

There is no doubt that businesses across the world are hitting the panic button now we’re only 7 months away from the deadline – seemingly with no clear solution in sight that will solve the multiple business challenges created or exacerbated by GDPR.

So, what are the biggest challenges that businesses are currently trying to solve? From our experience, we can summarise these challenges through four questions:
1. Where do I keep personal and sensitive data across my vast IT infrastructure?
2. How can I catalogue personal and sensitive data across multiple structured, semi-structured and unstructured data sources?
3. How can I create a single view of information to easily identify all data belonging to any particular data subject?
4. How do I maintain readiness post May 25th and will my systems cope with the new rights of data subjects?

If these challenges remain unsolved the GDPR readiness pathway quickly becomes bogged down by manual data location activities and endless repetition of effort across multiple source systems – which is both an expensive resource sink and an imperfect method of satisfying the upcoming ‘privacy by default/design’ requirement.

We shouldn’t have to resort to the ‘person with a clipboard’ method when it comes to cataloguing information and trying to build some sort of single view of an individual. Inefficient, manual and arduous methodologies will not result in organisation-wide readiness before the deadline.

There is only one way to solve these problems – enterprise-wide adoption of smart technologies that will greatly reduce the inefficient time sink created by manual auditing.

If I can use software to solve the four challenges mentioned above I can better coordinate my resources in ensuring that all data is processed in-line with GDPR, instead of worrying that I can’t find and organise personal and sensitive data in the first place.

Thankfully there is technology out there which can help, and data discovery technology is the best fit due to its flexibility and capabilities around finding, cataloguing and organising data.

Below I’ve set out four areas where I think data discovery software can greatly improve your GDPR readiness strategy.

1. Finding out where data is held

The first step to readiness is finding out what personal and sensitive data is held and where exactly that data can be found. This can be wide-ranging – from your operational systems to customer testimonials to marketing mailing lists to customer complaints and everything in between.

This information is found in structured databases, semi-structured XML files, unstructured file systems on individual workstations, cloud-based file systems – you name it, you need to check if there is personal or sensitive data in those systems. Indeed, 80% of all organisational data is unstructured if you believe the statistics!

Finding out where information is held can be easy in some systems, but finding how many John Smiths I have across 3,000 private file directories on separate workstations is going to take me a long time if I'm using a clipboard and ball-point pen.

Thankfully data discovery software can take all information – from databases, XML files, file directories, the lot – and search against it simultaneously, instantly finding me every mention of John Smith across my thousands of previously siloed data sources. No more clipboard required!
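
As a toy illustration of the principle (not a description of any particular product), here is a minimal inverted index built over records from three hypothetical sources, answering a 'John Smith' search with a single lookup:

```python
# A minimal sketch of indexing records from several siloed sources into
# one searchable structure. Source names and records are hypothetical.

from collections import defaultdict

SOURCES = {
    "crm_database":   ["John Smith, j.smith@example.com, Stafford"],
    "mailing_list":   ["jane doe", "john smith"],
    "complaints_dir": ["Complaint raised by John Smith on 2017-11-02"],
}

index = defaultdict(set)          # term -> {(source, record_no), ...}
for source, records in SOURCES.items():
    for i, record in enumerate(records):
        for term in record.lower().replace(",", " ").split():
            index[term].add((source, i))

def find(name: str):
    """Return the sources/records mentioning every word of the name."""
    terms = name.lower().split()
    hits = index[terms[0]].copy()
    for term in terms[1:]:
        hits &= index[term]
    return sorted(hits)

print(find("John Smith"))
# [('complaints_dir', 0), ('crm_database', 0), ('mailing_list', 1)]
```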

2. Cataloguing personal and sensitive data across structured, semi-structured and unstructured sources

Though finding the information is probably the biggest challenge for businesses at the moment, cataloguing the information after it has been found can be just as hard.

I will never be able to build up a clear picture of my personal and sensitive data without a clear information cataloguing strategy.

Trawling through each system to find out where I keep IP addresses, who owns the IP address, what I’m using it for and what the legal basis for processing it is WITHOUT some sort of automatic metadata cataloguing process is going to take weeks of effort. Weeks that are quickly running out…

Modern data discovery software includes comprehensive metadata cataloguing to help identify what data is held where, why, by whom, and for what reason. Smart business rules and regular expressions can extract structure from unstructured and semi-structured data sources, to help automatically build a ‘big picture’ of personal and sensitive metadata.
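
As an illustration of that regular-expression approach, the sketch below tags candidate personal data in free text. The patterns are deliberately simplified examples, nowhere near production-grade:

```python
# Rule-based metadata cataloguing: regular expressions tag fragments of
# unstructured text as candidate personal data. Simplified examples only.

import re

PATTERNS = {
    "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ip_address":  re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "uk_postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

def catalogue(source: str, text: str):
    """Yield (source, data_type, value) for every match in the text."""
    for data_type, pattern in PATTERNS.items():
        for value in pattern.findall(text):
            yield (source, data_type, value)

sample = "Logged in from 192.168.0.12, contact j.smith@example.com, ST18 0WP"
for row in catalogue("web_server_logs", sample):
    print(row)
# ('web_server_logs', 'email', 'j.smith@example.com')
# ('web_server_logs', 'ip_address', '192.168.0.12')
# ('web_server_logs', 'uk_postcode', 'ST18 0WP')
```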

So instead of just finding out whose data is held where, I can now find out what types of data are held where. If only there was a way to combine this…

3. Creating a single view of information

…through creating a single view of information?

Experian state that “89% of organisations continue to face difficulties in achieving a single customer view”.

This is largely due to a systemic complexity across multiple systems that software has so far struggled to solve. When semi-structured and unstructured data are included as well, the dream of a unified single view can very quickly start resembling a nightmare.

The reason is simple – relational databases and unstructured data sources do not play nicely, and no amount of tweaking and changing will make legacy back-ends handle unstructured data as well as a more modern approach.

This problem is further exacerbated by GDPR. It’s quite hard to argue that any approach that does NOT create a single customer view is going to make it easy for customer service personnel to respond to subject access requests, data portability requests, the right to be forgotten, etc.

Alternative architectures have been tried and tested to solve the unification problem and create that mythical 'single customer view' that only 11% of organisations claim to have achieved.

An architecture which includes modern data discovery software can quite easily create that single view. By storing all files in a unified 'index' format, the challenges posed by joining different data from different file types and different data sources are easily overcome.

This allows a comprehensive single view of information to be built across the entire organisation, achieved through combining data discovered across siloed systems with the metadata information catalogue.

Once you have that single view of information, any user can make an enquiry and easily navigate from one entity to another without having to be concerned about logging into multiple systems and re-establishing the context of the search based on system configuration.

Putting it all in the same place and showing it in the same format provides a powerful resource for maintaining information security and establishing what data is being processed, whose data it is, why it is being processed and by whom. In addition to finding how much John Smith data I have, I should now have a full visual history of each John Smith’s interactions with my organisation from day one to the present day.
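
Continuing the toy example from earlier, once records are catalogued the single view is essentially a group-by on the data subject. The records below are fabricated:

```python
# Assembling a single view of one data subject from records already
# catalogued across sources. All records here are hypothetical.

catalogued = [
    {"subject": "john smith", "source": "crm",        "event": "account opened"},
    {"subject": "john smith", "source": "complaints", "event": "complaint logged"},
    {"subject": "jane doe",   "source": "crm",        "event": "account opened"},
]

def single_view(subject: str):
    """Group every catalogued record for one subject, by source."""
    view = {}
    for rec in catalogued:
        if rec["subject"] == subject.lower():
            view.setdefault(rec["source"], []).append(rec["event"])
    return view

print(single_view("John Smith"))
# {'crm': ['account opened'], 'complaints': ['complaint logged']}
```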

4. Maintaining readiness post May 25th

Assuming we can reach a point where we are somewhat ready by May 25th without using any smart software or GDPR solution, the question of how to maintain readiness still remains unanswered.

GDPR establishes loads of rights for individuals – the right to be informed, the right of access, the right to rectification, the right to erasure, the right to restrict processing, the right to data portability, the right to object and rights in relation to automated decision making and profiling.

The significant part of establishing readiness prior to May 25th is ensuring that after go-live each of these 'rights' has a clearly defined business process from request to response.

This one’s pretty simple from a technological perspective – if I have all my data in the same place and viewed through a single interface it will greatly empower my ability to respond to a data portability request, a subject access request, an erasure request, etc.

GDPR solutions built on data discovery software can contain additional reports, portals and data capture forms to help customer service teams respond to these requests in an efficient and simple manner.

Finishing off…

There is no silver bullet for GDPR. Every solution is only going to work with enterprise-wide adoption and conformity, with each and every employee educated on their responsibilities in regards to GDPR and what they can and can’t do.

Despite this, data discovery software will help enormously. Without it, you won’t find the weaknesses in your strategy until you receive your first data portability request on May 25th!


Unless you’ve been hiding under a rock or you’re one of today’s lucky 10,000 hearing about it for the first time, the EU General Data Protection Regulation hype train is reaching full throttle, and organisations across the world are engaging panic mode as the readiness deadline looms ever closer.

For those lucky enough not to have encountered GDPR yet, what is it all about and why should you care?

GDPR was adopted by the European Parliament and Council in 2016, with enforcement ‘going live’ on 25th May 2018, and it is now recognised as law across the EU. With over 3,000 amendments since the first draft it is officially the ‘most heavily lobbied piece of legislation ever’, and the completed regulation runs to over 200 pages.

GDPR largely extends the UK Data Protection Act 1998 and clears up some definitions that were ambiguous or out of date for the modern world. Indeed, in 1998 only 30% of us had access to the internet; by 2016, 98% of some generations were carrying an internet-ready computer in their pockets.

GDPR is a regulation, which means that it overrides any local law in any EU member state. This is different to a directive, which would still have to go through local governmental processes (e.g. parliament) before becoming law.

No ifs, no buts – if any of the following applies to your business, you have to comply:

• Organisations within the EU
• Organisations that offer goods and services to EU residents (including free services such as Facebook)
• Organisations that monitor the behaviour of EU residents (e.g. targeted advertising companies)
In short – every organisation in the EU that processes or uses data in any shape or form, and any organisation outside the EU that offers online services to EU citizens.

GDPR has an exhaustive list of requirements for organisations to comply with that can be summarised around the following areas:

1. What data is considered ‘personal’
2. How personal data should be processed and controlled, and for how long
3. What data security controls organisations should have in place in regards to personal data
4. What rights data subjects have in regards to their own personal data, and how those rights should be enforced
The specifics can get pretty complex, and a number of organisations are already offering accreditation courses for privacy professionals to get up to speed with the changes and how they might impact your business.

The biggest headline around GDPR though is not the rights given to citizens (though they are considerable and will make for some interesting reading once people start requesting data from Silicon Valley giants like Google…).

Instead, the main headline is the potential size of fine that can be imposed for non-compliance. GDPR states the maximum fine is the greater of €20 million or 4% of an organisation’s worldwide annual turnover.

For Google, that would mean a fine in the region of $3.5 billion!
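
The arithmetic behind that headline is simple enough to sketch. The turnover figure below is an illustrative ballpark for Google, not an audited number:

```python
# The GDPR maximum fine: the greater of 20 million or 4% of worldwide
# annual turnover (same currency units assumed throughout).

def max_gdpr_fine(annual_turnover: float) -> float:
    """Greater of 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000, 0.04 * annual_turnover)

# Illustrative ballpark for Google's worldwide annual turnover,
# not an audited figure.
google_turnover = 90_000_000_000
print(f"{max_gdpr_fine(google_turnover):,.0f}")   # 3,600,000,000
```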

But no need to panic. We’ve got your back and know that we can help with both our expertise and our industry leading data discovery software.


The broad realm of asset management has been slow to embrace the big data wave, and there is still a lack of data-driven decisions being made by active managers.

However, the exponential growth of intangible assets over the last twenty years only increases the need for firms to adopt an organisation-wide big data strategy to fully understand their assets and to see where improvements can be made to current business practice.

It is safe to say there is a lack of appropriate knowledge about data science, data mining, and the potential artificial intelligence has to revolutionise asset management. This is coupled with a shortage of data scientists compared to the number of jobs available – indeed, data scientists are becoming the Premier League footballers of the IT world!

Another constraint is the lack of any substantial overlap between a data scientist’s qualifications and experience and an active manager’s. Why would an investor with twenty years of experience start trusting the opinions of a data scientist fresh out of university? With the current dearth of industry-proven use cases, it’s pretty easy to dismiss data science as hype.

This isn’t likely to change in the short term, and asset management (AM) firms need to think smart about big data and AI instead of fundamentally changing processes without a full understanding of the consequences.

This means educating asset managers in the art of data science and letting them witness first-hand the insights that can be generated.

A current blocker to consider is the nature of current software offerings and their suitability for direct use by asset managers. Dedicated financial management tools focused on big data are in their infancy (compared to generic, industry-agnostic methods), and there is little proof in the field that the application of dedicated ‘FinTech AI’ tools has produced benefits anywhere near the scale they are expected to achieve.

On the flip side, industry-agnostic methods require considerable experience in statistics to decipher the output from machine learning algorithms, and nobody can expect business users to jump head-first into SPSS or R to try and take advantage of their multiple data streams.

Big data or AI tools focused on asset management need to be developed in partnership with an active AM firm. Otherwise, the use cases will be built on guesswork and there will be no trust in suggestions for business change made by AI.

For AM to truly take advantage of the big data buzz there needs to be a change in attitude to how it is adopted. We strongly believe smaller firms should begin experimenting with AI and big data by building use cases and using an easy-to-use AI tool to test predictive models against real-life business-as-usual performance. By working in partnership with a smaller software development firm rather than adopting an expensive offering from a major player, AM firms can begin to understand the fundamentals of how AI works and what data science can do to let business users unlock insights, transform business processes and gain an edge in the world’s most competitive industry.

To get a jump start on AI, organisations need to focus on the following areas:
• Joining separate data sources into a single version of the truth
• Applying deep learning algorithms to historical data to ‘train’ artificial intelligence tools (see the sketch below)
• Democratising access to insights to properly assess the impacts of data-driven decision making organisation-wide
• Partnering with software development firms to build use cases based on real-life business problems
• Allowing business users to get experience in data science without having to adopt a risky data scientist-based strategy
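
To give a flavour of the second point above, here is a minimal sketch that trains a simple model on synthetic ‘historical’ data and scores it against a held-out period. The dataset, features and choice of scikit-learn are our illustrative assumptions, not a product recommendation:

```python
# Train a predictive model on synthetic historical data and test it
# against a held-out period. Dataset and features are illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "historical" features: e.g. momentum and volatility per asset.
X = rng.normal(size=(500, 2))
# Synthetic label: did the asset outperform? (loosely tied to momentum)
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

# Hold out the most recent 20% as a stand-in for business-as-usual.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)

model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```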


With Manchester United becoming the first football club in the world to be worth over €3 billion, the amount of money pumped into professional football continues to boggle the mind. Transfer fees for Premier League players are nearing the £100 million mark and the spending doesn’t look like it’s going to end anytime soon.

This spend trend is reflected in the analytics world. MarketResearch.com expects the global business intelligence and analytics software market to increase from $17.9B in 2014 to $26.78B in 2019 – a growth rate of nearly 10% annually! Whilst some sectors are reaching maturity in their analytics offering, sectors such as banking, asset management, insurance, retail, IT and telecoms are still finding their feet in a market that increasingly offers more options than specific solutions.

The increase in spend on analytics and the widespread adoption of analytics strategies have also increased demand for the Premier League footballer equivalent of the IT world – the data scientist. Data scientists now command six-figure pay packets in Silicon Valley, and anyone putting those two words together on their CV can expect to be inundated with recruiters trying to flog them to the highest bidder – “You put data science on your CV and you take a 20 percent pay rise pretty much immediately…”.

The trend is almost an admission from medium to large organisations – we have too much data and we don’t know what to do with it. Please help us, we’ll pay you anything…

And the result? Lots of people being paid lots of money who are still not getting the insight expected from the ever-increasing pile of data landing on their desk (which, incidentally, mirrors the Premier League’s transfer spending compared to actual success in European football…). This has led to an increasing dissatisfaction with the job market coupled with an increasing spend on staff that is not yet yielding the expected returns.

The lesson is pretty obvious – it’s not about who has the most expensive data scientist, it’s about who has the best data strategy.

In Gartner’s ‘BI Strategic Planning Assumptions’ paper for 2017, the final assumption is as follows:

Through 2020, the number of citizen data scientists will grow five times faster than the number of data scientists

What is that in English, I hear you say? It means the number of business users performing data science activities will increase due to improvements in technology and in data strategy. Gartner fluff it up by calling it a ‘citizen data scientist’, but in layman’s terms a ‘citizen’ means you, me and everybody in IT that doesn’t have a PhD in computer science from Cambridge University.

By getting in early on a machine learning analytics tool, you can unlock data scientist-level insights without paying Paul Pogba-style Premier League prices.

By mastering innovative new tools, you can define a data strategy that isn’t dependent on finding a data scientist that will change your business forever and is instead based on the requirements and knowledge of your company’s ground troops – the business users.

As technology gets smarter and begins to incorporate natural language processing, artificial intelligence and prescriptive analytics, we can all start trusting what the computer says without having to spend considerable resources deciphering and dissecting data. Democratising access to data across the organisation is core to defining a progressive data strategy that incorporates all the needs of the business.

To keep with the football analogy – the best football clubs in the world invest in their youth teams and promote from within, as well as investing in players from abroad (FC Barcelona, for example). There is only so much talent in the world and money can’t buy it all! Do you need to spend a fortune on a striker when you could have the best player of all time in your youth team? All he needs is the right tools and a little bit of trust and who knows how good he could be…

Empower your staff with self-service technology and don’t hinge your entire company strategy on the speculation of an overpriced data scientist!


Our guest blog this week is from our Founder and Visionary, Richard Lewis. Richard explores how IT can give users what they want in a world where DIY apps empower users to make their own decisions.

Can Disruptive BI be Constructive?

The technology landscape is changing. The cost of software is falling whilst the amount of software being used by SMEs and large-scale enterprises is increasing – particularly through the adoption of cloud-based services (SaaS, IaaS, PaaS, etc.) and innovations around upcoming trends such as new smart technologies and the Internet of Things.

For IT departments, this is not just disruption to the norm – it’s chaos!

Gartner like to use the term ‘disruptive IT’, but what exactly does that mean in reality? The Oxford English Dictionary suggests the following…

disruptive – adjective

  1. causing trouble and therefore stopping something from continuing as usual
  2. changing the traditional way that an industry operates, especially in a new and effective way

IT departments that have to rigidly enforce governed access to data and applications through locking down administration capabilities on workstations are fighting a losing battle – especially with the widespread adoption of web-based SaaS tools that do not require anything to be installed locally, and the growing dissatisfaction with traditional enterprise tools.

Control is good, but control is also restraining. Users want to be more efficient with their time, less dependent on the IT department, and ultimately to improve day-to-day working – restricting this is self-defeating from a business efficiency and improvement perspective.

Let’s face it, times are changing.

IT departments seek control, as uniformity is easier to support and manage. Therein lies the rub. Business users also seek control. Business users, who are under constant pressure to deliver results, do not want to conform to the corporate rulebook if it means missing out on a deal or being slowed by the ‘data to insight bottleneck’ – especially if information requests can instead be answered by a £15-a-month SaaS service they subscribed to this morning over the web.

SaaS services are a form of liberation to users with locked-down laptops (providing they don’t have the Internet Gateway Police stopping the fun) and, whilst disruptive to IT, are a godsend to users, who are able to take control of their own destiny. Making their own choices of what to use – apps they like, apps they find easy to use, apps that make their job easier – is a no-brainer.

But on the flip side, more SaaS apps means more contracts to review, more potential vulnerabilities in IT infrastructure and less cohesion towards a consistent enterprise software strategy.

So how do we resolve the conflict between IT seeking order and users seeking liberation?

SMEs running enterprise databases such as Oracle, DB2, Teradata or SQL Server, and enterprise applications such as JD Edwards, SAP, Dynamics or Oracle Financials, are likely to have these systems locked down.

All of these systems will include copious amounts of reporting and provide tools for power users and SQL / OLAP experts to potentially build their own reports off standard pre-defined views and cubes.

Inevitably, however, for many users, finding the correct information for ad-hoc data requests or combining data from the corporate warehouse with other data sources – such as spreadsheet data or data off the web to feed into their CRM system – can end up forcing them to give up on requesting data extracts and instead use their own tools and expertise to wrangle the data necessary to do their job.

IT needs to find a compromise.

New technologies are emerging and becoming mainstream that are specifically designed for self-service. Self-service Data Preparation (or Data Wrangling), Search-driven BI, Smart Data Discovery and Prescriptive Analytics tools all help business users to produce the outputs they want without resorting to SQL, Excel spreadsheets or haranguing IT for bespoke reports and data marts.

These tools provide the freedom to end users expected from SaaS apps whilst retaining the security and governance controls expected from traditional enterprise software deployments.

Putting self-service analytic tools over the top of the corporate warehouse and enterprise business applications allows IT to regain control over the IT infrastructure and reduce if not eliminate the need for users to subscribe to bespoke SaaS services, whilst maintaining data governance and protecting the running of essential operational systems.

Finding tools that are easy to use, agile and enhance traditional methods of reporting can be difficult and is often overlooked by busy IT teams in favour of the more traditional BI and Data Warehouse setup.

The fresh perspective provided by Data Wrangling and Smart Data Discovery tools can accelerate the rate of business change following the adoption of a mainstream Data Warehouse and BI platform.

Rather than users doing their own thing and potentially “causing trouble and therefore stopping something from continuing as usual”, adopting new innovative methodologies can “change the traditional way that an industry operates, in a new and effective way”.

Turn that bad disruption into cohesive business practice!

Richard Lewis, Founder & Director of Business Strategy, Connexica Ltd


Specialist explores how businesses can minimise impact of skills gap

A recent report published by the UK House of Commons Science and Technology Committee stated that approximately 12.6 million adults in the UK lack basic digital skills. This IT skills gap is affecting businesses across industries, from financial services and local authorities to retail and manufacturing. Here, Greg Richards, Sales and Marketing Director of business intelligence specialist Connexica, explores how businesses can address this crisis and minimise the impact.

The technological revolution of the late twentieth and early twenty-first century has brought with it significant changes. Not only has it fundamentally changed the way businesses operate, it has significantly increased the volume of data available to us. We can now monitor and track every process in detail, gaining valuable information and insights in the process.

Unfortunately, as the House of Commons found, digital skills have struggled to keep up with demand. As such, business management teams repeatedly encounter difficulties with aspects of operation and even recruitment. In fact, 72 per cent of employers have expressed unwillingness to consider potential candidates lacking these skills. This is understandable, but problematic in the midst of a skills crisis.

Interestingly, this latest report was commissioned as a result of a previous report — the big data dilemma report in February 2016 — that identified, “the risk of a growing data analytics skills gap as big data reaches further into the economy”. This is a pressing concern, because the ability to analyse data effectively directly influences the strategy of decision makers.

For example, most businesses can use data analytics to identify opportunities to improve operational processes and achieve time and cost savings. However, this can only be done if staff have the skills to interact with this data and pick out the actionable information.

While this can be done by specialist staff, recruitment increases costs and relying on IT departments can limit the amount of real-time practical insight.

So how can businesses tackle the digital skills gap? The most obvious approach is by investing in upskilling programmes to ensure staff are fully competent using business IT systems. However, this is a long-term objective that will do little to make an impact in the more immediate future.

Fortunately, businesses can make some small changes to improve the upskilling process. While some software companies are already pushing towards self-service data analytics, which sees analysis tools move out of the IT department and into the wider workforce, only 16 per cent of business executives can adequately use those tools.

This is where search-based analytics software, such as Connexica’s CXAIR, can be used to bridge the skills gap. Using natural language search, the same format found in search engines, makes business intelligence accessible and actionable on a wider scale. Changing the way that users interact with the tools directly can remove the unnecessary technical barriers to business intelligence.

Of course, this doesn’t remove the importance of trained data analysts and scientists. Technically trained staff can provide complex analysis, maintain systems and build data models. These are tasks that cannot be completed without advanced digital skill sets.

It is clear that the UK government must introduce a digital strategy to improve the IT skills held by future generations, while businesses need to invest in upskilling schemes to boost the competencies of existing staff. Search-based analytics can ensure that business strategy does not suffer, but it remains essential that staff develop the skills to keep businesses ahead of the technological curve.


Big data banking is only as effective as the analytics.

Big data has been a buzzword on the lips of business managers and CIOs for at least the last two decades. Every industry, from financial to medical and even local authorities, appears to be investing in ways to be a part of the modern information gold rush. But what does big data really mean for the banking and financial sector? Greg Richards, Sales and Marketing Director of business intelligence specialist Connexica, explores further.

Traditionally, the financial sector hasn’t been the most receptive to new technologies. As an industry that thrives by minimising risk and making carefully calculated business decisions, choosing to handle high value sensitive information with what may simply be the technological flavour of the month is not a decision many rush to make.

Despite these reservations, the Financial Conduct Authority (FCA) included investment in technology as the top priority of its 2015/16 business plan, a clear sign that the sector needs to be making a more conscious push towards digitisation.

Investing in big data

The next step is big data, a technological phenomenon that has stirred significant interest from the banking industry. The idea that data generated during the everyday processes and operations of a business can be used to inform strategies and achieve objectives is an exciting one, particularly in a post-crash economy where banks are faced with constant scrutiny over the detail of risk reporting.

However, there remains a question of what big data truly means for the financial sector. Although the information can yield a competitive advantage for banks, for it to be of value it has to be analysed effectively.

The majority of the conversation surrounding big data banking to date has looked at which models and systems are best at making the data accessible for analysts. Yet the question banks should be asking is, “how can this information be actionable for us?”

Although many financial institutions are increasingly using cloud computing to host software platforms and to store data, unfortunately, the analysis itself is still reserved for specially trained individuals — typically data analysts rather than bank managers.

This approach limits the functionality of big data. One of the most valuable characteristics of big data is that it gives banks a real-time insight into multiple data sets. The places that customers regularly use their cards, for example, can be analysed to highlight opportunities for additional revenue streams by partnering with relevant retailers. This could take the form of targeted customer-cashback offers or even to provide anonymised commercial insights to the retailer.
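
In sketch form, that card-usage analysis is a straightforward aggregation. The transactions below are synthetic:

```python
# Aggregate (synthetic) card transactions by retailer to surface
# partnership candidates, as described above. Illustration only.

import pandas as pd

transactions = pd.DataFrame({
    "customer": ["A", "A", "B", "B", "C", "C", "C"],
    "retailer": ["CoffeeCo", "CoffeeCo", "CoffeeCo", "BookShop",
                 "CoffeeCo", "BookShop", "BookShop"],
    "amount":   [3.20, 2.80, 3.50, 12.99, 4.10, 8.50, 6.75],
})

# Which retailers see the most repeat custom across the customer base?
summary = (transactions
           .groupby("retailer")
           .agg(visits=("amount", "size"),
                distinct_customers=("customer", "nunique"),
                total_spend=("amount", "sum"))
           .sort_values("visits", ascending=False))
print(summary)
```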

Changing the data analysis game

When choosing a data analytics package, banks should look beyond SQL-based software into the different types of big data analytics for financial services — notably search-based analytics.

Search-based analytic software makes use of natural-language search, the same technology that internet search engines use, for a simple and uncomplicated approach to navigating and inspecting data sets. As a result, cross-referencing becomes an easy process and correlations can be spotted without the need for a technical skill set. This means people at all levels in the bank can benefit from actionable business insights. Software such as Connexica’s CXAIR, for example, can even draw this data from a wide range of disparate sources, meaning that banks that prefer the traditional bespoke systems can make use of the functionality without the need for migration to a new system.


Democratising data helps local authorities combat fraud

In June 2016, the independent public audit body Audit Scotland uncovered almost £17m lost to fraud and error. The startling finding was only made thanks to the organisation’s biennial National Fraud Initiative (NFI), which involves local authorities and other public bodies sharing data between them. This exercise of identifying inconsistencies across large data sets is not limited to large organisations. The rise of big-data analytics means that any business can now benefit from actionable insights into the overwhelming quantity of data being generated.

Here, Greg Richards, Sales and Marketing Director of business intelligence specialist Connexica, explores how councils and organisations can bring together service data to reduce both costs and the prevalence of fraud.

Fraud has been a recurring problem for UK local authorities in recent years. In 2013, the now-disbanded National Fraud Authority (NFA) reported that fraud cost the country a total of £52bn that year. That same year, it was reported that a fifth of London council tenancies showed indications of fraud.

It is easy to identify the high incidence rate of fraud, but it is significantly more challenging to identify fraud itself. Although the UK Government produces financial-year estimates of what percentage of benefits and services are fraudulent, these are simply that – estimates. In reality, the figure could be much higher.

However, the reason so many incidents slip under local authority radars is a lack of resources for extensive analysis. For example, housing benefit fraud is often discovered by cross-referencing service bills, such as utilities, banking or even council tax, with housing records. Inconsistencies in this data flag up potential fraud cases.
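
At toy scale, that check is a simple join, as the sketch below shows with fabricated records; the real difficulty, as the next paragraph notes, is doing this across live council systems:

```python
# Compare the address on a housing-benefit claim with the address on
# the council tax record and flag mismatches for review. All records
# here are fabricated for illustration.

import pandas as pd

benefit_claims = pd.DataFrame({
    "claimant_id":   [1, 2, 3],
    "claim_address": ["12 High St", "4 Oak Rd", "9 Mill Ln"],
})
council_tax = pd.DataFrame({
    "claimant_id":     [1, 2, 3],
    "billing_address": ["12 High St", "77 Elm Ave", "9 Mill Ln"],
})

merged = benefit_claims.merge(council_tax, on="claimant_id")
flags = merged[merged["claim_address"] != merged["billing_address"]]
print(flags)    # claimant 2 is flagged for investigation
```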

It is the cross-referencing of this data that is difficult. Traditionally, local authorities have stored accumulated information in rudimentary databases and, sometimes, even Excel spreadsheets. This makes the process of extracting insights tedious and time-consuming, which is further exacerbated by the high volumes of data generated in our big-data-driven society.

Likewise, more advanced analytics requires specially trained personnel to make sense of the data. This means councils will either need to invest a large sum of money in extensively training certain members of staff, or alternatively spend even more on hiring a data analyst. As local authorities are regularly subject to budget cuts and expected to do more with fewer resources, neither option is desirable.

In order to combat this, there needs to be a technological shift towards what we call the democratisation of business intelligence. This calls for an understandable means of interfacing with accumulated data, allowing most staff within an organisation to gain actionable insight from their business intelligence — all without the need for specialist analysis.

Software such as Connexica’s CXAIR achieves this by replacing the traditional dashboards of incomprehensible raw data with a search-powered business analytics approach. Essentially, the software creates a Google-type approach to navigating and bringing together data streams on one dashboard, while still allowing for visualisations such as graphs to be subsequently generated. The software draws from a large variety of data sources, such as council tax records and parking permits, to allow extensive cross-referencing.

This kind of approach makes the information easy to understand for staff from junior management to C-suite, letting organisations spend less time making sense of data and more time using it to make decisions. In fact, local authorities in Kent are already using CXAIR to achieve a range of business objectives, including for counter-fraud purposes. It has even been used to reduce the costs associated with processing payments for council services.

If we are to learn anything from Audit Scotland’s discovery, it is that the best means of combatting fraud and its associated costs is by making effective use of business intelligence. Fortunately, a local authority’s greatest asset in doing so is the inescapable quantities of information generated daily — councils simply need to connect the dots.


Following on from the major financial crisis of 2008 and the resulting global recession, governments across the world have established tighter financial regulations in order to prevent and mitigate a similar situation occurring in the future.

Financial regulators have been made accountable for the health of the banking and finance sector and thus play a crucial role in the setup, development and safety of the banking system, ensuring its continuity and profitability.

Although obviously more in the spotlight now, regulatory reporting is not a new concept. In the UK, the financial services sector was regulated by the Financial Services Authority, which was dissolved following public criticism after the financial crisis. In its place today are the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA), operating on a “twin peaks” basis with regard to regulating the industry.

The reason for the split was so that the regulatory authorities could meet the regulatory demands of the financial services market. The PRA relies heavily on judgement and is very much “forward-looking”; it has been made responsible for providing prudential regulation for banks, building societies, credit unions, insurers and major investment firms. The FCA, meanwhile, provides conduct regulation in retail and wholesale financial markets and the infrastructure that supports those markets.

Why is regulatory reporting needed?

The financial services market is built on lending, so organisations such as banks, building societies and insurers rely on having a healthy market in which to operate; without this, their services and ultimately their profitability suffer.

The data that regulators collect from financial services organisations gives them an indication of an entity’s financial health and helps identify any early-onset issues so that prompt preventative action can be taken. The setting and implementation of monetary policy is also governed by the data provided.

Technology challenges facing financial institutions

As regulations tighten, financial institutions are having to evolve in order to keep pace with changing legislation and the financial reporting requirements laid out by regulators. Existing systems in use at financial institutions aren’t equipped to meet the constantly evolving reporting requirements set out by regulators while at the same time meeting their own internal reporting needs. This leaves financial organisations with a series of technology challenges to solve.

It is crucial that these technology challenges are fast-tracked and solved quickly, as regulators have stringent deadlines governing when an institution needs to submit financial reports. If a deadline is missed, the organisation will receive a financial penalty, designed to deter late or inaccurate report submissions.

A practical solution – search-powered analytics

To meet these evolving reporting requirements, financial institutions will need to identify their legacy reporting systems and replace them with modern alternatives that are scalable to meet the requirements of the regulatory body now and in the future.

Our recommendation would be to implement a reporting system built on search-engine technology as these solutions can process data and requests quickly, can integrate with multiple systems seamlessly and can scale easily to the needs of the organisation.

Our solution, CXAIR, is a business intelligence tool built on search-engine technology; it has recently gained popularity in the banking and finance sector because of the changing financial reporting requirements set out by regulators.

With CXAIR, financial institutions can expect to benefit on all of these fronts.

The tighter regulation that followed the financial crisis of 2008 has created numerous technology challenges for financial institutions, which are now required to submit several reports to regulators in a variety of formats. To keep pace and meet future requirements, these organisations need to evaluate their systems and identify the legacy ones that need to be replaced or upgraded. If not, they put themselves at risk of financial penalties for missed deadlines or incorrect data submissions.
