Most organizations today live in a data-rich environment. For example, in 15 of 17 sectors in the US economy, the average company stores more data than the entire US Library of Congress.
Contributors to this data explosion include the falling cost of data storage, the ubiquitous growth of handheld and other devices, the rise of social media, and expanding regulations about the types of data that organizations must capture and retain. While such growth in data holds the potential for more informed decision making, I find that many organizations are struggling to store, catalog, and make sense of this seemingly endless inflow of information.
The term “Big Data” refers to datasets whose size exceeds the ability of typical database software tools to capture, store, manage, and analyze them. It is unwise to define big data in terms of exceeding a particular file size: what was considered “big” a few years ago can probably be handled easily by current hardware and computing solutions, and as technology evolves, what we currently consider “big” data may be easily accommodated in the future. The threshold of “big” data can also vary by industry or context, depending on the type of data commonly available and the software and computing technologies in use.
“Big Data” is typically characterized by three attributes, the three “Vs”. Volume refers to the large amount of data available to most decision makers. Variety refers to the multiple sources from which such data is gathered. And velocity alludes to the need to use such data quickly to inform decision making. A case in point: I recently received a notification about suspected fraudulent activity on my credit card within minutes of the transaction taking place. Until recently, such notification could take days.
I believe that while the intent of big data is to inform decision making, the concept has not yet lived up to its promise. Reasons include the inability of organizational processes to utilize these data effectively, a shortage of trained analysts, the lack of appropriate technology, and public concerns about privacy. Furthermore, organizations cannot simply assume that collecting large volumes of data will lead to more effective and informed decision making. To achieve such success, decision makers need to be mindful of the business problems being addressed, the technological tools that must be deployed, and the research process that should be implemented to use these data effectively. It is also critically important that organizations not build a big data strategy distinct from their traditional data strategy; that approach will fail. Big data and traditional data are both pieces of the overall strategy, and consistency between them benefits not only the organization but also its customers and citizens, by providing them with superior products that meet their individual needs.
As a Senior Vice President in Burke’s Decision Sciences group, Dr. Kunal Gupta has always pushed the boundaries of applying the ‘art and science’ of marketing research to help companies optimize the ROI of their customer-focused investments.