We’re getting test data back from the lab, and the numbers are looking pretty good. Our test results are within our requirement limits. So let’s write it up and call it done. But hold on, let’s plot it out first. After this brief introduction, let’s talk about plots, why they’re important, and what we can do with them.
Hello, and welcome to Quality During Design, the place to use quality thinking to create products others love, for less. My name is Dianna. I’m a senior-level quality professional and engineer with over 20 years of experience in manufacturing and design. Listen in and then join the conversation at qualityduringdesign.com.
I attended a conference last week for reliability engineers, hosted by the ASQ Reliability and Risk Division: the “Reliability, Maintenance, and Managing Risk Conference.” While at the conference, I met some very interesting, very friendly people, and I sat in on presentations of useful case studies and interesting ideas about reliability.
One of the presenters was Dr. Wayne Nelson. He is an expert on reliability and statistical methods, has won several awards, and has published books and papers on statistical methods. He was a highly respected contributor to the conference. He had a couple of presentations that I sat in on, and they had a particular theme (he told us about the theme, too): “Always Plot Your Data.” As a reliability and quality practitioner, plotting the data is my go-to, but I never really thought of it as something I would mention that I do. I just kind of do it. I thought it was a good reminder for the reliability engineers at the conference, but it’s also something good to talk about with design engineers. So let’s talk about why we want to always plot our data.
It doesn’t matter if our data is discrete or continuous, or if it’s counts or measures; we can still plot it. It helps us understand what we’re looking at. The first thing I look at in a plot is how uniform our results are. If they’re not uniform, that gives me a clue as to what’s happening behind the data. Is there natural variation in our product? It could be caused by the material itself, by the way it’s made, or even the way it’s assembled. Could it also be a stack-up of design tolerances, where everything is made within spec but the design allows for variation in the end product? Or is it our test method? Is it introducing issues? Could it be that the way we hold the part isn’t ideal, or that the way we’re holding or positioning it during the test isn’t really stressing the area we’re looking to test but is instead putting stress on a different part, skewing our results?
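If you work with your data in Python, a quick first look might be a run chart and a histogram side by side. This is only a minimal sketch with made-up numbers; the requirement limits of 9.5 and 10.5 are hypothetical, purely for illustration:

```python
# A minimal sketch of a first look at test data. The measurements and
# requirement limits here are made up purely for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)
measurements = rng.normal(loc=10.0, scale=0.2, size=50)  # hypothetical test results

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Run chart: results in test order, to spot drift or sudden shifts.
ax1.plot(measurements, marker="o", linestyle="-")
ax1.axhline(10.5, color="red", linestyle="--", label="upper requirement limit")
ax1.axhline(9.5, color="red", linestyle="--", label="lower requirement limit")
ax1.set_xlabel("Test order")
ax1.set_ylabel("Measured value")
ax1.set_title("Run chart")
ax1.legend()

# Histogram: the shape of the variation (one lump? two? a long tail?).
ax2.hist(measurements, bins=10, edgecolor="black")
ax2.set_xlabel("Measured value")
ax2.set_ylabel("Count")
ax2.set_title("Histogram")

plt.tight_layout()
plt.show()
```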
And does it look like we’re dealing with multiple failure modes? Are they competing failure modes? We talk about competing failure modes, what they are, what they look like, and how to deal with them in a previous episode of the Quality During Design podcast. I’ll link to it in the show notes.
One type of plot may not be enough. I’ve found that plotting it out once sort of starts a breadcrumb trail. One plot will show me the data and may highlight something interesting. Then we follow the breadcrumb and start digging a little deeper into the data. We may add more inputs into the data set and generate another plot that could help us investigate what we’re looking at.
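Here’s a sketch of what following that breadcrumb might look like in Python: after a first, combined plot hints at two groups, we add another input (a hypothetical manufacturing lot) and re-plot the same measurements broken out by it. The lot labels and numbers are made up:

```python
# A sketch of "following the breadcrumb": re-plot the same measurements
# broken out by an added input (a hypothetical manufacturing lot).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=2)
df = pd.DataFrame({
    "lot": ["A"] * 25 + ["B"] * 25,  # assumed grouping variable
    "value": np.concatenate([
        rng.normal(10.0, 0.1, 25),   # lot A
        rng.normal(10.3, 0.1, 25),   # lot B, shifted slightly higher
    ]),
})

# Box plot by lot: if the boxes don't overlap, the "lot" input explains
# some of the spread we saw in the first, combined plot.
df.boxplot(column="value", by="lot")
plt.suptitle("")  # drop pandas' automatic "grouped by" super-title
plt.title("Measurements by lot")
plt.xlabel("Lot")
plt.ylabel("Measured value")
plt.show()
```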
Now, things to watch out for in our plots – and these are some common gotchas.
One of the things to watch out for is outliers. I know they’re pesky and don’t make for a pretty plot, but we don’t delete or eliminate them; rather, they’re a source of something interesting or a telltale sign of something wrong. It could be another failure mode, maybe a new one. So we’re going to check the test results or reexamine our samples. It could be those test method issues we talked about, or a special cause of a failure in a manufacturing method. Maybe it’s not the natural variation of our process, but something that happened during the production of the parts we tested. What was it? And do we need to prevent it from happening again in the future?
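One common way to flag outliers for investigation without throwing them away is the 1.5 × IQR rule. This is just a sketch with made-up values, including one deliberately injected oddball:

```python
# Flag (don't delete) outliers with the common 1.5*IQR rule so they
# can be investigated; the data here are made up.
import numpy as np

rng = np.random.default_rng(seed=3)
values = np.append(rng.normal(10.0, 0.2, 49), 12.5)  # one injected oddball

q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Keep the outliers and report them; they may be a new failure mode,
# a test-method issue, or a special cause in manufacturing.
outliers = values[(values < low) | (values > high)]
print(f"IQR fences: [{low:.2f}, {high:.2f}]")
print(f"Flagged for investigation: {outliers}")
```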
Another gotcha with plotting is understanding the nature of your data before you start plotting it. Sometimes the data will inform how we plot it, so the nature of our data is one thing to consider. And some plots, like probability plots, assume that your data meets certain criteria. You may have to plot or test your data against those criteria before you can assume the results of the plot are correct or accurate, and before you can start making decisions with it. Lots of other plots, like run plots and scatter plots, don’t require this. We just need to be aware of which plots make assumptions based on the equations used to generate them. Something I learned from a coworker (and, honestly, through some software training) is to go through the preferences and assumptions of whatever software you’re using to generate the plot. Go through each window and look at the inputs and the assumptions you’re making when you generate the plot. That will give you some indication of whether there’s something else you need to account for.
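As one example of checking a plot’s assumptions, a normal probability plot assumes the data follow a normal distribution, and that assumption can be checked both visually and with a formal test. A minimal sketch, assuming hypothetical data:

```python
# A sketch of checking a distribution assumption before trusting a
# probability plot; scipy's probplot defaults to a normal distribution.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=4)
values = rng.normal(10.0, 0.2, 50)  # hypothetical data

# Normal probability plot: points near the line support the assumption.
fig, ax = plt.subplots()
stats.probplot(values, dist="norm", plot=ax)
plt.show()

# A formal check to go with the visual one (Shapiro-Wilk test for normality).
stat, p_value = stats.shapiro(values)
print(f"Shapiro-Wilk p-value: {p_value:.3f}")  # a small p casts doubt on normality
```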
Another gotcha with plots – and this is something that Dr. Nelson pointed out – is comparing plots with mismatched axes. He worked with an engineer who was comparing four different things, but the plots he was comparing didn’t have the same axes. As engineers, we know to be careful to use calculations with the same unit of measure, and it’s the same thing with plots. When we create multiple plots and compare them, we want to make sure we’re using the same units of measure, that the zero location is the same for each plot, and that we’re using the same limits and range on each axis. We want to compare like with like; otherwise, it could skew our view of the data and lead us to some wrong decisions.
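If you build comparison plots in Python, matplotlib can enforce matched axes for you. A minimal sketch comparing four made-up data sets on identical scales:

```python
# Compare four data sets on matched axes: sharex/sharey keeps the scales
# identical across panels, and set_xlim pins the range for all of them.
# The four data sets here are made up.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=5)
datasets = [rng.normal(10.0 + 0.1 * i, 0.2, 50) for i in range(4)]

fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(8, 6))
for ax, data, label in zip(axes.flat, datasets, "ABCD"):
    ax.hist(data, bins=10, edgecolor="black")
    ax.set_title(f"Group {label}")

# Same units, same range on every panel: like with like.
axes.flat[0].set_xlim(9.0, 11.0)  # shared, so this applies to all panels
plt.tight_layout()
plt.show()
```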
There are two common plots that reliability engineers look at: the probability density function and the cumulative distribution function. We get into these two plots more in another episode of Quality During Design, and that one’s a video episode, so you can see a picture of what I’m talking about. I’ll link to it in the show notes. Also, there are lots of different plots, and you may not be familiar with all the types of plots you could look at.
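Until you can watch that episode, here’s a quick sketch of the two plots, drawn for an assumed Weibull distribution, which is commonly used in reliability work. The shape and scale parameters are hypothetical:

```python
# A sketch of the probability density function (PDF) and the cumulative
# distribution function (CDF) for an assumed Weibull distribution.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

shape, scale = 1.5, 1000.0             # hypothetical Weibull parameters
t = np.linspace(0, 3000, 300)          # time, in arbitrary hours
dist = stats.weibull_min(c=shape, scale=scale)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(t, dist.pdf(t))
ax1.set_title("PDF: relative likelihood of failure at time t")
ax1.set_xlabel("Time (hours)")

ax2.plot(t, dist.cdf(t))
ax2.set_title("CDF: fraction failed by time t")
ax2.set_xlabel("Time (hours)")

plt.tight_layout()
plt.show()
```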
I’m familiar with a lot of plots, and I’m sure I don’t have all of them covered, but just try one that you think will help you. Plots are a tool to help us decipher important information from data so we can make decisions. If the plot we choose to try first doesn’t help, then try it a different way or try a different plot.
So what’s today’s insight to action? When you get data, go ahead and preview it to see how things might be looking, and then plot it out. You’ll really get to see how things are looking, and you’ll be able to make better decisions from your data.
If you like the content in this episode, visit qualityduringdesign.com, where you can subscribe to the weekly newsletter to keep in touch. This has been a production of Deeney Enterprises. Thanks for listening!