Marketing Needs Quality Data Before Big Data and Predictive Analytics

Recent marketing hype has been all about new analytics, big data, and becoming marketing technologists. However, there are fundamentals which must be addressed first, and a key stumbling block to effective marketing is the generally poor quality of data. Data quality is non-negotiable. In a recent study, Britain’s Royal Mail Data Services found that poor-quality customer data costs businesses an average of six percent of annual revenue. While there was some variance among respondents, clearly no company can afford to ignore this problem.

This concern with data quality is not limited to the United Kingdom. Experian’s data quality arm, in its annual benchmark report on global data quality, reported that while most businesses globally (and 95 percent in the U.S.) use data to meet their business objectives, fewer than half of them (44 percent) trust their data.

Customer experience is top of mind for 2017

Some 56 percent of the respondents in Experian’s report want to serve their customers better in 2017 and recognize that a key factor in achieving this is better data. Providing a rich customer experience is the name of the game, and poor or erroneous information about that customer can end the relationship. It has become apparent to most businesses that winning a new customer is costly (between three and ten times the cost of maintaining an existing client), while retention is much more cost-effective. Loyalty is not automatic and has to be nurtured, especially in a world where the bulk of customers are millennials, who are quick to move on if the current offering does not suit them or the experience is less than satisfying. To this end, marketing’s focus for 2017, and probably the next few years, is on building comprehensive customer analytics.

Marketing needs a single customer view

Most U.S. companies are well aware of the opportunity to build a three-dimensional view of each customer from the avalanche of data available via social media and the internet. They also understand why this is necessary. Experian found that 20 percent of U.S. businesses were focusing on building a single customer view (or SCV), and the chief reason for this drive was to increase customer loyalty and retention. The responsibility for this falls squarely into marketing’s portfolio, with assistance from IT.

In the drive to improve customer experience, marketing needs to develop this single customer view, which allows extremely targeted marketing. It does not help to collate copious social and historical shopping data into a customer persona if the customer’s mobile number or email address was captured incorrectly. Likewise, duplicate records and “decayed” (out-of-date) data create annoyances for both the customer and the marketing department. Much research has gone into why data is inaccurate, and the same answer keeps coming up: human error.
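Even a lightweight validation step at capture time catches many of these errors before they reach the database. Below is a minimal sketch in plain Python; the field names and patterns are illustrative assumptions, not a production-grade validator.

```python
import re

# Hypothetical patterns for illustration; real systems should apply stricter,
# locale-aware checks (or an external verification service) before accepting a record.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
PHONE_RE = re.compile(r"^\+?[0-9 ()\-]{7,15}$")

def validate_contact(record: dict) -> list:
    """Return a list of problems found in a captured customer record."""
    problems = []
    if not EMAIL_RE.match(record.get("email", "")):
        problems.append("invalid email")
    if not PHONE_RE.match(record.get("mobile", "")):
        problems.append("invalid mobile number")
    return problems

print(validate_contact({"email": "jane@example.com", "mobile": "+1 555 0100"}))  # []
print(validate_contact({"email": "jane@example", "mobile": "12ab"}))
# ['invalid email', 'invalid mobile number']
```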

While human error can create the initial quality issue, for instance when customer information is loaded by one of the company’s employees, benign neglect is also a contributor. Periodic reviews of whether customer contact details have changed are required, as is scrupulous attention to bounced emails and failed SMS messages during a marketing campaign.

It is interesting to note that “inadequate senior management support” is cited as a challenge by 21 percent of the respondents. Paradoxically, this same senior management is the least trusting of the organization’s data, generally believing that up to 33 percent of company data is inaccurate. For companies with such a mismatch between the perceived quality of their data and their need for it, a rethink of the data strategy is clearly on the cards.

Despite reports of poor management support, inadequate budgets, and other woes, 90 percent of the companies interviewed had at least one data management project scheduled for 2017. There are many facets to improving data quality, ranging from data cleansing to improved data governance, all of which create value by improving sub-standard data.

How to improve data quality

First, we need to understand that data quality suffers for many reasons beyond the two already mentioned: errors at initial capture and later errors arising from poor stewardship of the data. Master Data Management (or MDM) is complex and continuous, and can require recruiting several specialists, depending on the depth of the company’s IT skills. It is often beyond a company’s in-house capabilities, and it may be advisable to contract data specialists to prepare data structures for analysis. While software tools are available for data cleansing, it tends to be an iterative process, depending on the quality defects encountered.

A brief description of some of the techniques follows.

Data cleansing

Experian found that 33 percent of companies were embarking on a data cleansing exercise in 2017. Data cleansing is normally performed on contact data, such as resolving address inconsistencies (e.g., postal codes that do not match locations), and has the immediate effect of improving any outward communications. Managing returned emails (or even snail mail and deliveries) and invalid telephone numbers should be a regular process run alongside this. A simple consistency check is sketched below.
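As an illustration of the kind of rule involved, the following pandas sketch flags contacts whose postal code does not match a reference list of valid code/city pairs. The tables, column names, and codes are all hypothetical.

```python
import pandas as pd

# Hypothetical contact table and postal-code reference; all values illustrative.
contacts = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "city": ["Boston", "Austin", "Boston"],
    "postal_code": ["02108", "73301", "99999"],
})
reference = pd.DataFrame({
    "postal_code": ["02108", "73301"],
    "city": ["Boston", "Austin"],
})

# Flag rows whose (postal_code, city) pair is absent from the reference list.
checked = contacts.merge(reference, on=["postal_code", "city"],
                         how="left", indicator=True)
suspect = checked[checked["_merge"] == "left_only"]
print(suspect[["customer_id", "city", "postal_code"]])
# Customer 3 is flagged: "99999" is not a valid code for Boston.
```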

Data integration

Thirty-one percent of companies were planning a data integration project. This can take two forms: merging two or more disparate databases, such as marketing and the service desk, usually by purchasing software that provides a single platform for the entire business and replacing the existing point solutions; or keeping the different systems separate but feeding them into a common integrated database.
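A minimal sketch of the second approach, assuming two hypothetical extracts that share an email address as the common key: normalize the key, then join the records into one integrated table.

```python
import pandas as pd

# Hypothetical extracts from two point solutions; schemas are illustrative.
marketing = pd.DataFrame({
    "email": ["Jane@Example.com ", "bob@example.com"],
    "segment": ["premium", "standard"],
})
service_desk = pd.DataFrame({
    "email": ["jane@example.com"],
    "last_ticket": ["2017-01-15"],
})

# Normalize the shared key before joining into a common customer table.
for df in (marketing, service_desk):
    df["email"] = df["email"].str.strip().str.lower()

integrated = marketing.merge(service_desk, on="email", how="outer")
print(integrated)  # one row per customer, with columns from both systems
```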

Data migration

Moving from a legacy system the company has outgrown to a new application requires quality data to avoid “GIGO” (garbage in, garbage out). It is recommended that data cleansing and data integration be executed prior to migration, to reduce downtime caused by poor data.
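One practical pre-migration step is a simple validation gate: profile the legacy extract for obvious garbage before loading it into the new application. A sketch, with a hypothetical extract and column names:

```python
import pandas as pd

# Hypothetical legacy extract; in practice this comes from the old system.
legacy = pd.DataFrame({
    "email": ["ann@example.com", None, "ann@example.com"],
    "name": ["Ann", "Bob", "Ann"],
})

# Profile the extract before loading it into the new application.
report = {
    "rows": len(legacy),
    "missing_email": int(legacy["email"].isna().sum()),
    "duplicate_email_rows": int(legacy["email"].duplicated().sum()),
}
print(report)  # {'rows': 3, 'missing_email': 1, 'duplicate_email_rows': 1}
# Hold the migration, or route failing rows to a cleansing queue, if the
# counts exceed an agreed threshold.
```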

Data preparation and data enrichment, normally performed as precursors to data mining and business analytics, are also forms of data quality improvement.

Do not expect an overnight success

The management of data quality is an ongoing process, and considering the years spent collecting the less-than-perfect data currently on hand, it is unreasonable to expect a quick fix. Some culture change and training is also required where employees are responsible for capturing customer data, so that fewer errors occur. A self-service option, where customers capture their own data via a tablet, would also limit errors at the source. Where multiple applications are involved, each with its own customer database, integrating and deduplicating the redundant data (ideally by migrating to a new shared platform) will improve quality to a great extent, but it is not a silver bullet either. A basic deduplication pass is sketched below.
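Assuming a merged customer table keyed on email address (the schema and values here are hypothetical), one common policy is to normalize the key and keep only the most recently updated record per customer:

```python
import pandas as pd

# Hypothetical merged customer table with redundant rows; schema assumed.
customers = pd.DataFrame({
    "email": ["jane@example.com", "JANE@example.com", "bob@example.com"],
    "updated": pd.to_datetime(["2016-05-01", "2017-01-10", "2016-11-20"]),
})

# Normalize the key, then keep the most recently updated row per customer.
customers["email"] = customers["email"].str.lower()
deduped = (customers.sort_values("updated")
                    .drop_duplicates("email", keep="last"))
print(deduped)  # one row per email; Jane's 2017 record wins
```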

Data quality has been a topic of discussion for at least forty to fifty years; the interesting thing is that it is still a problem today, and the defect rate is much the same as, or worse than, it was in the beginning. It is pointless chasing the new shiny objects of AI and machine learning (although some data cleansing tools could be classified as AI) until you have a robust and reasonably clean data environment to operate from.

