Collecting data has become a trend among companies because of the great benefits that much of the literature on big data promises. Yet even though it is a hot topic, its definition still confuses many business people, and little is said about the dangerous path companies must cross in order to reach that promising situation. In fact, most enterprises fail to implement a successful data management project.
What is Big Data?
Big Data is a new way of understanding and making better decisions by gathering, processing and analysing huge quantities of a wide variety of data that would be too expensive and/or time consuming to handle with traditional tools (Barranco, 2012). The aim of this technology is to reveal valuable information from large and complex databases in time and to the right person (Hashem et al., 2015).
Four critical factors that cause companies to fail
Decision support technologies suffer from many small problems that few decision makers discover in time. Stephen Brobst (2013) identified four main reasons why Big Data projects fail. First, companies focus more on implementing state-of-the-art technology than on the business opportunities. This leads them to start projects that absorb large amounts of money and time which are ultimately wasted, because the organisation cannot take advantage of the technology in the near future (Brobst, 2013). However, every organisation is different. That is why it is important to examine the different data management applications carefully to see whether they fit the firm's business goals; sometimes a small solution can fit better than the latest technology (Prokopp, 2014).
Secondly, the data is not always easily accessible for subject matter experts. This problem also leads to one of the biggest problems with big data: the lack of coordination and understanding between businesspeople and IT employees. The popularity of this technology has led businesspeople to care only about IT issues and, therefore, to leave the IT employees to do some of their work. In fact, there are cases where the company has no clear idea of the questions it is trying to answer by means of this technology (Brobst, 2013).
Thirdly, they fail to achieve enterprise-wide adoption. The departments of many companies work in silos, and so does the data. In fact, in many organisations employees do not even want to share their data or knowledge, which limits the influence of big data applications on the decision-making process (Brobst, 2013).
The fourth factor is that enterprises lack the sophistication to understand that the project's total cost of ownership includes people as well as information technology systems. On this point Brobst (2013) makes clear how important it is to have the right people with the right skills in order to make sure that the technological part runs as expected.
It can therefore be summarised that the main reason companies fail with Big Data is that their view of the topic is purely technical. They lack a clear direction for the whole enterprise and forget that data should be delivered to people with the right skills to process it, so that new questions emerge or relations and patterns are uncovered that could lead to more valuable information, as the definition of big data explains. As Brobst explains, they pursue the so-called three Vs, volume, variety and velocity, but they forget to add the V for value (Brobst, 2013).
QlikView’s unique features in the spotlight
Today's technology makes it possible to present large chunks of big data to end users through visualisation software. QlikView is a business intelligence and visualisation product developed and owned by Qlik, a software company based in Lund (Qlikview.com).
According to Johan from QlikView, their product has some unique features that differentiate it from similar software solutions and help a company's employees make better decisions. A typical QlikView sales pitch is that the product uses in-memory technology, which means that the user gets faster response times. The power of grey is another strength that makes the tool unique: all data related to the selected object turn white, while data that have no relation to the selection turn grey. The idea is that all the data are displayed and the end user can explore them in an interactive, intelligent yet simple way, coming up not only with answers but also with new questions, and therefore making better decisions (Johan, QlikView, 2014).
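To make this behaviour concrete, the following is a minimal, hypothetical sketch in Python of how an associative selection of this kind could work. It is not QlikView's actual engine, and the field names and records are invented purely for illustration.

```python
# Minimal sketch (not QlikView's engine): an associative selection model.
# Selecting a value splits the remaining field values into "white" (those
# that co-occur with the selection) and "grey" (those that never do),
# which is the behaviour the "power of grey" describes.

records = [
    {"customer": "Acme", "region": "North", "product": "Gadget"},
    {"customer": "Beta", "region": "South", "product": "Widget"},
    {"customer": "Acme", "region": "North", "product": "Widget"},
]

def associative_selection(records, field, value):
    """Return the (field, value) pairs that stay white and those that turn grey."""
    related = [r for r in records if r[field] == value]
    white = {(f, v) for r in related for f, v in r.items()}
    grey = {(f, v) for r in records for f, v in r.items()} - white
    return white, grey

white, grey = associative_selection(records, "region", "North")
print("white:", sorted(white))
print("grey:", sorted(grey))
```

In this toy example, selecting the region "North" keeps every value that appears together with it white, while values that never co-occur with it (here, the customer "Beta" and the region "South") turn grey.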
However, there is also a negative side. According to an experiment by Becklen and Cervone from 1983, there is always data that passes by unnoticed as a result of inattentional blindness (Q. Choi, 2010). Furthermore, allowing too much unrelated data to be displayed on the employee's screen, as the power of grey feature does, can impair the decision-making process, as a study carried out by psychologists from Princeton and Stanford universities demonstrated (Friedman, 2012). This is why we like to call it the danger of grey. As Friedman (2012) states, we have been told that knowledge is power and, as a consequence, people are so obsessed with collecting tons of data that it is affecting the quality of their decisions. It must be remembered that in the end it is not quantity but quality that matters. The surveys that companies ask us to fill in for marketing purposes are generally short and simple, because who is willing to spend a long time in front of a screen full of long and dubious questions for nothing in return? That is why firms prefer to focus those surveys on a couple of questions and increase the rate of completed questionnaires.
Therefore, in order to leverage the advantages of the power of grey feature without taking risks, the QlikView expert should decide what data is valuable enough to always display to the decision maker and what should disappear. As Bob Margulies says, no data collection or analysis should be made before understanding the needs, and not the wants, of the decision maker (Marti, 1996).
The power of grey feature needs velocity and, because QlikView holds the data in system memory, it eliminates the latency associated with disk access. However, the application is constrained by how much RAM the system has or could have. That is why QlikView does not perform well on huge data volumes, due to the time it would spend reloading data. In addition, because QlikView stores data in proprietary files, migrating to another BI product when the need arises will not be easy (Birst.com, n.d.).
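As a rough illustration of this constraint, the sketch below estimates whether a dataset of a given size would fit in the available RAM. This is a simplification, not how QlikView actually manages memory, and the row counts and sizes are invented for the example.

```python
# Hedged sketch: a back-of-the-envelope check of whether a dataset could be
# held entirely in memory. The numbers below are illustrative only.

import psutil  # third-party package: pip install psutil

def fits_in_memory(row_count, avg_row_bytes, safety_factor=0.5):
    """Return True if the estimated dataset size fits within a fraction
    of the RAM currently available on this machine."""
    estimated_bytes = row_count * avg_row_bytes
    available_bytes = psutil.virtual_memory().available
    return estimated_bytes <= available_bytes * safety_factor

# Example: 200 million rows at roughly 100 bytes each (about 20 GB uncompressed).
print(fits_in_memory(row_count=200_000_000, avg_row_bytes=100))
```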
It should also be pointed out that raw data in the hands of a person with little experience in this area can lead to bad decisions based on misinformation. It is vital for employees in the company to get a better understanding of where the data comes from, what it means and how it is processed, so that the end user can turn it into valuable information (Svensson, Department of Informatics, Lund University, 2012).
The dangerous effects of collecting too much data
According to K.N.C., a journalist at The Economist, the hype and the amount of money invested in big data are so huge (according to Gartner (2012), $28 billion worldwide) that people still have blind faith in the concept (Gartner, 2012). Projects such as Google Flu Trends, which started in 2008 as an attempt to use big data to predict the next outbreaks of influenza in 25 countries, were launched on the back of this hype (Hodson, 2014). The idea behind the project is simple: when we are sick, we search the Internet for information about our illness, and Google wanted to use its search algorithm to track those online searches and predict flu cases. The problem is that it failed; it actually overestimated the number of cases four years in a row. One of the main reasons, which has not yet been discussed, was the low quality of the information. It is true that Google owns and powered the project with its huge quantity of data about its users. However, as Kung (2014) states, this case proves how "more data does not lead to better analysis". In this particular example, the way Google retrieves the data that powers the project is not designed to produce reliable data. Therefore, it is important to look at data carefully. Data is raw and needs to be analysed to become information; decisions based on data alone are dangerous because we would be missing the whole picture (K.N.C, The Economist, 2014).
Concerning the ethical side of this technology, it has to be said that the collection of data from customers goes against personal integrity and the human right to privacy. As Tucker (2013) suggests, Big Data has made anonymity impossible. Take Google again as an example of a company that, through its different services, manages to gather a wide variety of information about each of us (personal details thanks to Gmail, desired holiday locations thanks to Maps, contacts thanks to Google+, etc.). Carmichael (2014) was not wrong after all when he said that Google actually knows us better than we know ourselves.
Data, as suggested by Marr (2015), has "the potential to be used for evil, as well as good". There are indeed examples, such as the Snowden case, showing that data can be used to spy on the citizens of foreign countries; this case reveals the dark side of data. Obama also brings up the problem that some people can be discriminated against by big data, because data categorises people. There are many possibilities, but it all depends on the questions that companies are trying to answer with their data (E. Sullivan, PBS Newshour The Rundown, 2014).
The future of big data
Even if most big data projects fail, it cannot be denied that those that succeed benefit greatly. Had the Google Flu Trends project succeeded, Google would have been capable of predicting where the next cases of influenza were going to take place, thereby earning vast amounts of money from medicine and vaccine companies and revolutionising the whole business (Salzberg, 2014).
The future, in fact, seems promising for big data, as more and more companies want to use this technology to learn more about their customers. Many organisations dream of being able to map their customers' patterns so that they can create custom-made solutions and gain a competitive advantage. Nevertheless, the real challenge for enterprises in the future, as Brobst argues, is which questions to actually ask of the big data application (Brobst, 2013). Bernard Marr states that there will be an even bigger scare over privacy as early as 2015; the big security breaches in 2014 did not scare the public enough to stop sharing personal details (Marr, 2014).