Executive Summary:
If your organization has adopted Six Sigma or other statistical decision-making practices, then you are probably familiar with p-values, Cp and Cpk numbers, and R-squared. These are great comparators, but a simple chart often tells a much more comprehensive story than the statistical value.
The Rest of the Story:
When I was first educated in, and effectively “dipped” into, the Six Sigma methodology and began using statistics to compare processes or outcomes, one very basic rule was drilled into my consciousness and subsequently into my behavior. The mantra was P-G-A, which stood for Practical, Graphical, Analytical.
In short, it was a procedure for analyzing data. First, we simply look at the raw data, just the numbers. If we don’t see anything suspicious, we graph the data and look at it; oftentimes the chart alone will reveal what we need to learn. Last, we take the time to perform the statistical analysis.
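If it helps to see the sequence in code, here is P-G-A in miniature, a Python sketch with invented numbers rather than a prescription:

```python
# P-G-A in miniature: Practical, then Graphical, then Analytical.
# The data and the straight-line model are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1])

# Practical: eyeball the raw numbers first.
print(list(zip(x, y)))

# Graphical: plot the data before computing anything.
plt.scatter(x, y)
plt.show()

# Analytical: only now quantify the relationship.
slope, intercept = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]
print(f"slope = {slope:.2f}, R-squared = {r**2:.2f}")
```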
It seems that this particular practice is no longer emphasized, or at least it’s not often practiced; people jump straight to the statistical analysis.
The problem is that if there is a problem with the data, the statistical numbers might not tell you. It sometimes takes an experienced eye to look at the statistics and suspect that they don’t match common-sense expectations.
Here is an obvious and also common example. We receive some data from a supplier or from our quality measurement lab and we want to run a process capability study on it. We dump it into our statistics software, push the button and get our Cpk number. Outstanding! We have a Cpk of 1.33! Maybe we even tell someone about it.
However, if we look at the data we realize that every data point is exactly the same number. The measurement lab used a go/no-go gauge to assess the incoming product, so every part that passed was recorded as the same value. There is no variation in the data, so the capability study is invalid. Oops!
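Here is a minimal sketch of how that failure shows up in practice. The specification limits of 8 and 12 are invented for illustration; the point is that Cpk divides by the standard deviation, so zero variation produces an error (or nonsense), not a trophy:

```python
# A capability index computed on zero-variation data is meaningless.
# LSL and USL below are hypothetical specification limits.
import statistics

def cpk(data, lsl, usl):
    mean = statistics.mean(data)
    sigma = statistics.stdev(data)               # sample standard deviation
    return min(usl - mean, mean - lsl) / (3 * sigma)

varied = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]
print(cpk(varied, lsl=8, usl=12))                # a sensible index

flat = [10.0] * 6                                # every reading identical
print(cpk(flat, lsl=8, usl=12))                  # ZeroDivisionError: sigma is 0
```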
If we had just looked at the data first, we wouldn’t have wasted our time. Oftentimes, if we just look at the data before getting carried away, we can see a problem before we waste time graphing or analyzing it. We can identify outliers caused by typos, impossible numbers, or duplicated data lines that can play tricks with our analysis.
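That “Practical” first pass can be as cheap as a few lines. Everything below, the readings and the outlier threshold alike, is made up for illustration:

```python
# A quick look at raw data before graphing or analyzing anything.
readings = [10.1, 9.9, 10.0, 10.0, 101.0, 10.2, 10.1, 9.8]  # 101.0 looks like a typo

print("n =", len(readings))
print("min/max:", min(readings), max(readings))   # impossible values jump out here
print("distinct values:", len(set(readings)))     # one distinct value = no variation

median = sorted(readings)[len(readings) // 2]     # crude middle value
suspects = [x for x in readings if abs(x - median) > 5]   # threshold is illustrative
print("suspect points:", suspects)                # [101.0]
```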
OK. Enough said about procedure. Let’s talk about pictures.
When we use statistics to compare results, outputs, or processes, especially as executives, we often have others performing the analysis and reporting the numbers to us. We learn that higher Cp and Cpk numbers are better, that p-values below 0.05 are significant, and that R-squared values above 80% or 90% show strong correlation.
Our analysts often give us the numbers and assure us that they were diligent in their analysis. If we’re lucky, they will warn us when they see small sample sets or otherwise suspicious information. In the interest of time, they do their best to anticipate what is important and feed us the minimum information necessary to make a decision. After all, no one wants to engage in a drawn-out explanation of heteroscedasticity.
Unfortunately, if our analysts are doing all the work and just feeding us the numbers, what do they need us for? Once the statistics are in, the decision is usually pretty obvious. That’s the power of the statistics. Congratulations, we have made ourselves obsolete by giving all of the decision-making effort to our underlings!
OK, so we may not want to invest our upper management and executive staff salaries in conducting statistical analyses on piles of data, but we also shouldn’t just let a couple of indicators be the sole intelligence upon which we make significant business decisions. What we need to do is ask for the picture.
When your analyst shows you the number, ask for the data plot. From the data plot we can tell whether the sample set is large or small, we can see indicators of possible trends, we can see if the data might have come from more than one population, we can visualize influences, and we can better judge whether one data set is practically different from another.
By way of example, take a look at the scatter plot in Figure 1. Our analyst reported a very low R-squared value, indicating there is no reason to believe there is any relationship between the factor on the x-axis and the output on the y-axis. If you look at the scatter plot, you will probably see why.
However, if the analysis is an attempt to help us find a way to reduce variation in a production process, would we accept the R-squared indicator and abandon our efforts, or would we ask for more information? When you hear that 15 data points were collected, it sounds reasonable. When you see 15 data points scattered around as much as these are, it looks like a very weak analysis.
A basic truth of statistics is that more variation requires more data in order to characterize the behavior. If we look at Figure 1, we might decide that more data would be useful to better assess the behavior.
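That truth is easy to demonstrate with a simulation. The sketch below invents a weak trend buried in noise (the slope and noise level are made up) and shows how much R-squared bounces around at n = 15 compared with n = 50:

```python
# How much does R-squared swing at a given sample size?
# The "true" process is invented: output = 0.15 * input + noise.
import numpy as np

rng = np.random.default_rng(0)

def r_squared_samples(n, trials=1000):
    rsq = []
    for _ in range(trials):
        x = rng.uniform(5, 25, size=n)
        y = 0.15 * x + rng.normal(0, 2.0, size=n)   # weak trend, heavy noise
        r = np.corrcoef(x, y)[0, 1]
        rsq.append(r * r)
    return np.array(rsq)

for n in (15, 50):
    rsq = r_squared_samples(n)
    print(f"n={n}: R-squared ranges {rsq.min():.2f} to {rsq.max():.2f}, "
          f"median {np.median(rsq):.2f}")
```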
There are numerical, statistical indicators that our analyst probably reviewed before showing us the R-squared value, and these indicators will also warn us if the data set is too small to draw meaningful conclusions. Remember, though, that we don’t want to spend all afternoon reviewing every number of every analysis, and our analysts know it.
Look again at Figure 1. It looks like there might be a relationship, an upward-sloping pattern, if there weren’t so many gaps. Perhaps the range or spread of the values gets smaller as the process moves to the right of the chart and we just don’t have enough information to prove it. Less variation in the output would be great.
The picture helps us visualize what the statistics are probably also indicating. We should get more data. Our analyst agrees, and goes off to do so.
Now, let’s take a look at Figure 2, which shows 50 data points, instead of 15, for the same process and the same input-output relationship. The R-squared value didn’t change much, though some of the other indicators of statistical significance are much improved. The R-squared still suggests there is no correlation.
When we look for regions of less variation and, therefore, a potentially more controllable output, we see that moving to the right is not the answer. However, we do see an apparent hourglass-on-its-side shape to the data. If we hold the input factor at 15, do we produce an output consistently between 8 and 10?
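One quick way to test that hunch, assuming we have the 50 points in hand, is to bin them by input value and compare the output spread within each bin. The points below are invented stand-ins for Figure 2:

```python
# Bin (input, output) points and compare the output spread per bin.
# These points are made up; the middle bin is the one to watch.
from collections import defaultdict

points = [(11, 6.5), (12, 11.8), (13, 8.0), (14, 9.6), (15, 9.1),
          (15, 8.8), (16, 9.4), (17, 7.9), (18, 12.1), (19, 6.1)]

bins = defaultdict(list)
for x, y in points:
    bins[x // 2 * 2].append(y)          # two-unit-wide input bins

for center in sorted(bins):
    ys = bins[center]
    print(f"input ~{center}: n={len(ys)}, output spans {min(ys):.1f} to {max(ys):.1f}")
```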
The R-squared and other correlation indicators won’t tell us that. If we didn’t look at the scatter plot, we might not have seen it. If our analyst is in a hurry because he has twenty of these to do in a day, or if the habitual behavior is to look at the number and be done with the investigation, and he doesn’t think to point it out to us, then a potential opportunity to solve the problem is missed.
Suppose it’s very little trouble to set the input factor at 15, run the process for a few cycles or shifts, and perform a capability study of the output variation. We can relatively quickly determine whether there is a sweet spot in the process controls or whether we are back to square one. That is the power of visualizing the data and the process performance.
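Sketched in code, that follow-up study is short. The trial readings are made up, and the 8-to-10 limits are simply the band we think we see in the scatter plot:

```python
# Capability check of a trial run with the input held at 15.
# Readings are invented; the 8-10 limits come from the apparent band in Figure 2.
import statistics

def cpk(data, lsl, usl):
    mean = statistics.mean(data)
    sigma = statistics.stdev(data)
    return min(usl - mean, mean - lsl) / (3 * sigma)

trial_run = [9.1, 8.9, 9.3, 9.0, 8.8, 9.2, 9.1, 9.0]
print(f"Cpk at input = 15: {cpk(trial_run, lsl=8, usl=10):.2f}")
```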
Do yourself a huge favor: when your analyst shows up with the numbers, get in the habit of asking for the plots to go with them. It won’t take long before your analyst begins bringing the plots without your requesting them. If you need a quick refresher so you can ask for the right plot, here is a simple list, with a plotting sketch after it.
- For a picture of variation or process capability, we want a histogram
- For correlation or relationships between one thing and another, we want a scatter plot
- For sequential or time-dependent data, we want a line graph or run chart
- For relative frequency of distinct events or outcomes, we want a Pareto chart
My personal preference is a dot plot in each case. In other words, I prefer to see dots for each data point in addition to, or instead of, the lines or bars typically presented. The lines and bars don’t always communicate the volume of data upon which the analysis was performed; the dots show you quickly whether there is a lot of data or a little.
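For reference, here is one way to produce all four charts with matplotlib, dots included; every number in it is invented:

```python
# The four workhorse charts, each built from made-up data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
values = rng.normal(10, 0.5, size=40)               # process output readings
x = rng.uniform(5, 25, size=40)                     # input factor
y = 0.2 * x + rng.normal(0, 1.5, size=40)           # related output
defects = {"scratch": 18, "dent": 9, "stain": 4, "other": 2}

fig, ax = plt.subplots(2, 2, figsize=(10, 8))

ax[0, 0].hist(values, bins=10)                      # variation / capability
ax[0, 0].plot(values, np.zeros_like(values), "k.")  # dots reveal the data volume
ax[0, 0].set_title("Histogram")

ax[0, 1].scatter(x, y)                              # correlation / relationship
ax[0, 1].set_title("Scatter plot")

ax[1, 0].plot(values, marker=".")                   # sequential / run chart
ax[1, 0].set_title("Run chart")

pareto = sorted(defects.items(), key=lambda kv: -kv[1])   # biggest bars first
ax[1, 1].bar([k for k, _ in pareto], [v for _, v in pareto])
ax[1, 1].set_title("Pareto chart")

plt.tight_layout()
plt.show()
```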
Statistics are powerful decision-making tools, but they aren’t a substitute for common sense or intelligence. If you perform your own statistical analysis, be sure to make the P-G-A sequence a habit. If analysts perform the statistics for you, do yourself a favor and ask for the plot of the data.
With the plot of the data you can better understand what is going on with your process or your inputs or outputs. You can better identify opportunities, and you can better visualize what the statistics are trying to tell you, or in some cases are incapable of telling you. Reviewing the visual data will lead to smarter decisions.
Stay wise, friends.