3 all too common mistakes analysts make in their technical work
Continuing our focus on the mistakes analysts make, let’s look beyond the contracting stage to spot those made during technical work. This post builds on the previous list of mistakes 1 to 3, made during contracting.
These are shared in the hope of us all raising our game & staying aware of these risks. They are not intended as a criticism of hard-working analysts, who have a challenging enough role already. So, please hear them in the spirit in which they are shared: by someone who has made all of them.
I’ve thought about what goes wrong for analysts within the framework of my own Softer Skills model. That has prompted me to identify 3 mistakes made in the contracting stage, 3 during technical work & 3 when delivering. My final observation will be grounded in the skill that supports those other 9 (the need for commercial awareness). So, without further ado, let me dish the dirt on the second 3 mistakes I’ve seen, the ones that can undermine even otherwise robust technical work…
3 more mistakes that analysts make during technical work
Let’s assume that robust contracting has already been completed. Analysts have an accurate understanding of the business need, have planned the time they need & have secured buy-in. What else could go wrong? Well, the next stage is often to gather the data needed, and once again there is the risk of operating on autopilot. So, let me unpick how failing to think about each stage of your analysis can result in failure.
Whatever workflow you are used to following (if you do use a methodology to guide your process), I will consider the stages of data, analysis & insight generation. At each of these stages, mistakes can be made that undermine the quality of what is delivered. There is so much that can go wrong, but let me highlight just one for each stage. These are mistakes I have made myself and seen all too often in practice.
(4) Becoming too removed from the real world via data
It’s often helpful to remember that data is only a proxy for the real world. It codifies (however imperfectly) real world people, organisations, events, processes or ways of thinking about them. But it can be too tempting for analysts to just access or receive the data provided without stopping to think about it. Time is precious and data prep for your analysis can be time consuming. So, surely it pays to just get on with it as soon as you have access?
Hopefully such thinking has already been challenged post-GDPR. Analysts, like everyone in business, have woken up to their responsibilities to think about the provenance of data, permission to use it and any limitations they need to honour. But applying a data protection lens alone is not enough to protect an analyst from mistakes at this stage. Two other things are needed.
First, remember to think about the real world. Do you know how this data was captured or created? What could go wrong? What could be misrepresented or partial about such data? Where possible, visit the point of data creation to really understand the potential issues to consider.
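To make that concrete, here is a minimal sketch (in Python with pandas) of the kind of cheap sanity checks that can surface capture problems early. The file and column names are hypothetical, purely to illustrate the questions worth asking of any extract:

```python
import pandas as pd

# Hypothetical extract of customer transactions
df = pd.read_csv("transactions.csv", parse_dates=["created_at"])

# How was this data captured, and could it misrepresent the real world?
print(df["created_at"].min(), df["created_at"].max())  # gaps, or dates in the future?
print(df.duplicated().sum())                           # double-loaded records?
print((df["amount"] <= 0).mean())                      # refunds, test records or errors?
print(df["store_id"].nunique())                        # are all sites actually present?
```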
Secondly, remember context. The goal of your analysis is hopefully to change things: to improve decisions or take action that improves your organisation. Think about what often happens when presenting your results. People ask questions, right? If you reflect on your experience of such questions or challenges, with hindsight you will see that many could have been predicted. There will be questions about the robustness of results, the influence of other events, the viability of the recommended actions, and their impact on people & finances. Answering those questions requires data on a wider context than your hypothesis or narrow problem scope. So, gather that data too at this stage.
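As a hedged illustration of what gathering that wider context might look like in practice, this sketch joins a hypothetical table of concurrent events onto the core analysis data, so that later questions about other influences can actually be answered:

```python
import pandas as pd

# Hypothetical extracts: the core analysis data plus a wider-context table
sales = pd.read_csv("weekly_sales.csv", parse_dates=["week"])
events = pd.read_csv("external_events.csv", parse_dates=["week"])  # promotions, outages, price changes...

# Keep the context alongside the core data from the start
context = sales.merge(events, on="week", how="left")

# Weeks where 'influence of other events' questions are likely to bite
print(context[context["event_type"].notna()].head())
```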
(5) Rushing into proving your point via analysis
Excited by the technical challenge ahead and relieved to have completed data prep, analysts are keen to get on. Often with the attitude of a detective or barrister, they set about proving their hypothesis or making their case. But believing you already know the right answer is dangerous. Even rushing into robust statistical modelling can be premature.
In his seminal work “Exploratory Data Analysis” (published by Addison-Wesley in 1977), John Tukey made the case for more EDA. He was concerned that too much emphasis was being placed on testing pre-determined hypotheses, without sufficient time given for EDA to suggest new ones. Aside from what might sound like an argument internal to statistical societies, analysts miss out if they don’t protect time for this step.
EDA has multiple benefits for today’s analysts & data scientists. It helps them become familiar with what I call the topology of the data. Eyeballing distributions, and knowing the skews, nulls, ranges of values and frequency distributions, will help them make better decisions during their analysis or modelling. During this work they may also identify dubious outliers or inliers, together with an opportunity to fix other data quality issues.
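Here is a minimal EDA sketch in pandas covering those topology checks; the dataset and column names are again hypothetical, standing in for whatever extract you are working with:

```python
import pandas as pd

df = pd.read_csv("analysis_dataset.csv")  # hypothetical extract

print(df.describe(include="all"))         # ranges and basic distributions
print(df.isna().mean().sort_values())     # proportion of nulls per column
print(df.select_dtypes("number").skew())  # skew of each numeric column
print(df["segment"].value_counts())       # frequency distribution of a category

# Flag dubious outliers on one numeric column with a simple IQR rule
q1, q3 = df["spend"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["spend"] < q1 - 1.5 * iqr) | (df["spend"] > q3 + 1.5 * iqr)]
print(len(outliers), "potential outliers to investigate")
```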
Further, such exploration of the data landscape (especially through the use of exploratory data visualisation) will give them ideas. Some will be relevant for this brief, some not. But even those that feel like diversions are worth noting. Many times I have seen ‘oddities’ or unexpected patterns discovered during EDA form the basis for both important insights and productive future work. So, I urge all analysts to protect time for EDA.
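In the same hedged spirit, a small exploratory visualisation sketch with matplotlib, reusing the same hypothetical extract; it is often exactly these plots that surface the ‘oddities’ worth noting:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("analysis_dataset.csv")  # same hypothetical extract as above

fig, axes = plt.subplots(1, 2, figsize=(10, 4))

# Distribution shape can reveal mixed populations or artificially capped values
axes[0].hist(df["spend"].dropna(), bins=50)
axes[0].set_title("Distribution of spend")

# A scatter plot often surfaces unexpected patterns worth a note for later
axes[1].scatter(df["tenure_months"], df["spend"], alpha=0.3, s=10)
axes[1].set_title("Spend vs tenure")

plt.tight_layout()
plt.show()
```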
(6) Generating one new piece of knowledge rather than an insight
However excellent an individual piece of analysis may be, in the bigger picture it is but one more data point. Too often, buoyed by their success in discovering something new or proving a hypothesis, analysts rush to print. They would do better to pause and think again. Reflect back on the original business need. What is needed in order to take appropriate action? Often it is not just new knowledge of what is happening, but also of why it is happening.
Here I find that analysts can learn from market researchers or economists. Both of those professions are used to working with very incomplete data. They also often have backgrounds in psychology or the humanities that prompt an interest in human motivations. Insight generation can be an overlooked step, and one I have posted on previously; I recommend that analysts read those two posts on how to run an insight generation workshop.
But even when insight generation needs to be done by analysts themselves, I recommend investing the time. Pause to reflect on what else is already known. Is there past analysis, market research or BI that sheds light on either people’s motivations or the context of their actions? How could you converge evidence to generate robust hypotheses about motivations or mindsets? What experiments could you propose to test acting in response to those inner motivations? Can you help your organisation change behaviour, not just respond to it?
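To make the experiment idea slightly more tangible, here is a minimal sketch of sizing a simple two-group test. The formula is the standard two-proportion approximation, and the baseline and uplift figures are purely illustrative assumptions:

```python
from scipy.stats import norm

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Approximate sample size per arm for a two-proportion test."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# e.g. would a reassurance message lift conversion from 4% to 5%?
print(round(sample_size_per_arm(0.04, 0.05)))  # ~6,745 customers per arm
```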
What else have you often seen go wrong during analysts’ technical work?
I hope those 3 mistakes were helpful ‘watch-outs’. What else have you seen go wrong? Are there other common mistakes that you would highlight, ones that undermine the quality of analysts’ work?
In the next post of this series, I will share 3 all too common mistakes that analysts make during the delivery phase of their work. Until then, keep remembering the real world, exploring your data & thinking about people.