
Is your Data Quality sufficient to underpin Business Intelligence?

OK. Yes. It sounds really obvious – without quality, clean, relevant and up-to-date data, Business Intelligence (BI) software is rendered helpless. But a lot of business strategies and projects fail due to what, at face value, appear to be glaringly obvious oversights.

A recent discussion on LinkedIn’s Business Intelligence Group posed the question: “Is data quality [DQ] or business intelligence more important to the organization?”

From what I can gather, the question was used as a lightning rod to promote a book. However, the reaction that the query received is noteworthy, particularly for organizations looking to implement a BI initiative for the first time.

The Business Intelligence Group is brimming with seasoned BI professionals. Their responses were, for the most part, unanimous and overwhelming. The collective sentiment can be summed up in this way – without quality data assets, BI is rendered helpless.

Martin Doyle, CEO of UK-based DQ Global and self-proclaimed Data Quality Improvement Evangelist, was stark and uncompromising in his assessment, stating that: “This is not a chicken and egg question. You cannot have information without its supporting data.”

Data Quality: Too much of a good thing?

With it definitively established that good DQ underpins good BI, the only practical counterargument was one of degree – that is, how perfect does your data really need to be?

To answer this, Ron van der Laan, Enterprise BI Architect (VP) at The Bank of New York, went back to basics, stating that organizational data sets should be optimized to the point where they can easily and continually generate accurate reports for accurate decision-making, and no further.

“Maximizing your DQ makes a lot of sense up to a certain level. Beyond this level, the cost often exceeds the gains. Addressing the last of the DQ issues is always the most expensive,” said Laan.

“Based on your budget constraints, increase your DQ to a level that is more than good enough for the business to make solid decisions, and use your remaining funds to add value to your BI layer. This way you will optimize your investment in your BI solution.”

This attitude is shared by data science evangelist Mike Loukides, quoted in a blog post on Information Management by OpenBI co-founder Steve Miller:

“Do you really care if you have 1,010 or 1,012 Twitter followers? Precision has an allure, but in most data-driven applications outside of finance, that allure is deceptive. Most data analysis is comparative: if you’re asking whether sales to Northern Europe are increasing faster than sales to Southern Europe, you aren’t concerned about the difference between 5.92 percent annual growth and 5.93 percent.”
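To make Loukides’ point concrete, here is a toy Python sketch with invented sales figures (none of the numbers come from the article or the quote): rounding the growth rates to whole percentage points does not change the answer to the comparative question being asked of the data.

```python
# Illustrative only: hypothetical sales figures showing that comparative
# questions ("which region is growing faster?") are robust to small
# imprecision in the underlying data.

northern = {"last_year": 1_000_000, "this_year": 1_059_200}  # ~5.92% growth
southern = {"last_year": 1_000_000, "this_year": 1_041_500}  # ~4.15% growth

def growth(region: dict) -> float:
    """Year-over-year growth as a fraction."""
    return (region["this_year"] - region["last_year"]) / region["last_year"]

exact = growth(northern) > growth(southern)
rounded = round(growth(northern), 2) > round(growth(southern), 2)  # whole-percent precision

print(exact, rounded)  # True True – the comparative answer is unchanged
```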

Poor Data Quality lingers within modern Business Intelligence programs

Aside from the practicalities put forward by Laan and Loukides, the strongly worded opinions of the aforementioned BI experts, and others like them, reflect a growing sense of exasperation. Exasperation that, in an era of advanced web analytics and reporting tools, many BI programs continue to break down and fail to achieve ROI due to poor DQ. A recent report analyzing BI implementations in the insurance industry – Business Intelligence in Insurance: Current State, Challenges, and Expectations – found that 50 percent of respondents said their BI projects experienced significant problems due to data inconsistencies (varying forms of data), with over 60 percent citing issues with DQ.

“As BI capabilities expand, carriers acknowledge that Master Data Management [MDM] principles are important, but relatively few have them in place or have prioritized them for the short term,” states the report, authored by Novarica Principal Martina Conlon and Analyst Kimberly Markel. “The challenges of normalizing enterprise data and motivating business user change are significant, but we believe that insurers who do develop strong business intelligence capabilities and act upon the insights revealed will have a significant competitive advantage in the difficult market ahead.”

Another recent industry study suggests that poor DQ still plagues BI deployments in general. The study revealed that around 90 percent of organizations lacked confidence in the accuracy of their data, believing it to be of sub-standard quality and a cause of bad decision-making.

So how can companies manage their DQ more effectively?

Benefits of establishing a Master Data Management program to control Data Quality

According to a recent research report, controlling DQ through efficient data management is crucial to ensuring optimum performance from any BI solution.

The Aberdeen Group study – Data Management for BI: Fueling the Analytical Engine with High-Octane Information – states that the data underpinning an analytics and reporting system must be properly managed, ensuring it is clean, relevant and delivered in a timely manner, if enterprise BI solutions are to maximize their ability to deliver actionable insights.
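To ground what “clean, relevant and delivered in a timely manner” can look like at the pipeline level, here is a minimal Python sketch of pre-load validation checks. The field names, thresholds and rules are hypothetical illustrations, not taken from the Aberdeen report.

```python
from datetime import datetime, timedelta

# Hypothetical pre-load checks: field names and thresholds are illustrative only.
REQUIRED_FIELDS = {"customer_id", "region", "amount", "updated_at"}
MAX_AGE = timedelta(days=1)  # "timely": refreshed within the last day

def validate_record(record: dict, now: datetime) -> list[str]:
    """Return a list of data-quality problems found in one source record."""
    problems = []

    # Completeness: every required field must be present and non-empty.
    missing = REQUIRED_FIELDS - {k for k, v in record.items() if v not in (None, "")}
    if missing:
        problems.append(f"incomplete: missing {sorted(missing)}")

    # Validity: amount must be a non-negative number.
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        problems.append("invalid: amount must be a non-negative number")

    # Timeliness: record must have been refreshed recently.
    updated_at = record.get("updated_at")
    if not isinstance(updated_at, datetime) or now - updated_at > MAX_AGE:
        problems.append("stale: record not refreshed within the last day")

    return problems

# Example: one clean record, one that should be flagged before it reaches the BI layer.
now = datetime.now()
good = {"customer_id": "C1", "region": "EMEA", "amount": 120.0, "updated_at": now}
bad = {"customer_id": "C2", "region": "", "amount": -5, "updated_at": now - timedelta(days=3)}

for rec in (good, bad):
    print(rec["customer_id"], validate_record(rec, now) or "OK")
```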

The report identified several significant benefits of an efficient and effective data management system, including:

Definition of efficient and effective data management
The research report divides respondent organizations into three categories based on the efficiency and effectiveness of their data management processes:

Keys to establishing a Master Data Management program to govern Data Quality

Establishing a documented MDM action plan is integral to maintaining a high level of DQ, which in turn underpins accurate and timely organizational decision-making. A documented MDM action plan also helps service providers ensure that they meet their contractual obligations.

Establishing an MDM action plan and procedures will help to:
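MDM is first and foremost a discipline of governance, ownership and process, but one narrow, concrete slice of it is matching duplicate records from different source systems and merging them into a single “golden record”. The Python sketch below illustrates that idea with a deliberately naive matching key and survivorship rule; the record fields and rules are hypothetical and are not drawn from any of the reports cited above.

```python
from datetime import datetime

# Hypothetical customer records from two source systems; field names are illustrative.
records = [
    {"source": "crm", "email": "Jane.Doe@Example.com", "name": "Jane Doe",
     "phone": None, "updated_at": datetime(2012, 5, 1)},
    {"source": "billing", "email": "jane.doe@example.com", "name": "J. Doe",
     "phone": "555-0100", "updated_at": datetime(2012, 6, 1)},
]

def match_key(record: dict) -> str:
    """Naive matching rule: a normalized email identifies one real-world customer."""
    return record["email"].strip().lower()

def merge(duplicates: list) -> dict:
    """Survivorship rule: for each field, keep the most recently updated non-null value."""
    golden = {}
    for rec in sorted(duplicates, key=lambda r: r["updated_at"]):
        for field, value in rec.items():
            if value is not None:
                golden[field] = value  # later (fresher) records overwrite earlier ones
    return golden

# Group records by match key, then collapse each group into a golden record.
groups: dict[str, list] = {}
for rec in records:
    groups.setdefault(match_key(rec), []).append(rec)

golden_records = [merge(dups) for dups in groups.values()]
print(golden_records)
```

In a real MDM program, matching and survivorship rules like these would be defined, documented and owned as part of the action plan rather than hard-coded by a developer.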

Obtaining perfect data quality is mission impossible

Whilst ensuring good levels of DQ is paramount to the success of any analytics program, perfect DQ is both unachievable and unnecessary, for financial and practical reasons. Further, marginal data deficiencies are unavoidable within any BI program, as maintaining and monitoring DQ is largely a reactive exercise – a point made in several comments on Jim Harris’ recent Information Management blog post, The Data Quality Wager.

“Data quality is a reactive practice,” explained Richard Ordowich. “Perhaps that is not what is professed in the musings of others or the desired outcome, but it is nevertheless the current state of the best practices. Data profiling and data cleansing are after the fact data quality practices. The data is already defective.”

“For as long as human hands key in data,” responded Jayson Alayay, “a data quality implementation to a great extent will be reactive.”
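To see why profiling is inherently “after the fact”, consider a minimal Python sketch that profiles a column of country values that have already been keyed in. The sample data is invented purely for illustration.

```python
from collections import Counter

# A toy profile of data that has already been captured – the defects are
# discovered retrospectively, not prevented at entry time.
country_column = ["UK", "U.K.", "United Kingdom", "UK", None, "UK ", "France"]

values = [v for v in country_column if v is not None]
profile = {
    "rows": len(country_column),
    "nulls": country_column.count(None),
    "distinct": len(set(v.strip() for v in values)),
    "most_common": Counter(v.strip() for v in values).most_common(3),
}
print(profile)
# The profile reveals inconsistent spellings ("UK", "U.K.", "United Kingdom")
# that now have to be cleansed after the fact.
```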

DQ is the craft of the absolutely necessary and inevitably imperfect.
