When our products don't work in their intended use environments, or we start to see field failures, we could be looking at bad product from suppliers, or at incorrect requirements or specifications. Or maybe the way testing was performed didn't match up with the use environment. Our product requirements might be right, but our test requirements might not be. Let's talk more about test design, after this brief introduction.
Hello and welcome to Quality during Design, the place to use quality thinking to create products others love for less. My name is Dianna. I’m a senior-level quality professional and engineer with over 20 years of experience in manufacturing and design. Listen in and then join the conversation at qualityduringdesign.com.
The way we test matters. We need to understand the forces and environmental exposure that the product sees in the field. If we're recreating or simulating them in a test lab, we can ask: is the way we test validated? Does our test method have the right requirements, and can it produce usable results? We can look at a test method as a product in and of itself. The test method has to have requirements, just like our product does. Where is the test performed? How is it done? With what tools and equipment? Who is going to perform the test, and what level of experience or training do they need to have? What is the test’s output? What data are we trying to get from it?
When setting up our test, it's best to understand the use environment of our product so we can recreate those forces and effects on the product. Usability engineering can provide input into product requirements and also into test method requirements. We can use standards when we don't know how to recreate those forces. There are ISO standards that list minimum tensile requirements or other performance requirements for products, and if they list these sorts of details, they usually also include a method of test. ASTM has a catalog of industry standards, too.
When we're creating a test for ourselves, we need a test method validation. Because tests also have requirements and need to produce usable results, we validate against those requirements. Validating our test method also ensures that it is precise and accurate. By precise, I mean reproducible and repeatable. Reproducibility is also known as appraiser variation: it accounts for variation from different people performing the test. Repeatability is also known as equipment variation, or variation from the test method itself. You may have heard of Gage R&R; that describes the precision of the test method. A test method validation also evaluates whether our methods are accurate. Accuracy is about the results we're getting versus reality, and it's measured in terms of bias, linearity, and stability. Bias is error in the measurement system. Linearity is how changes in what we're measuring affect the bias. And stability is about how our test method behaves over time and multiple uses. Stability asks: if you measure the same part over time, do you get the same result? As for precision and accuracy, we can visualize them like shooting arrows at a target. We’re precise if our arrows are clustered around each other anywhere on the target. We’re accurate if our arrows are clustered around the bullseye.
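To make these terms concrete, here is a minimal sketch of the repeatability, reproducibility, and bias ideas described above. The data and the reference value are invented for illustration, and this is a deliberately simplified calculation, not the full ANOVA-based Gage R&R study a quality engineer would run:

```python
import statistics

# Hypothetical study: three appraisers each measure the same
# reference part three times. The true value is assumed known.
reference = 10.00  # assumed true dimension of the part (mm)

measurements = {
    "appraiser_A": [10.02, 10.01, 10.03],
    "appraiser_B": [10.05, 10.06, 10.04],
    "appraiser_C": [9.99, 10.00, 10.01],
}

# Repeatability (equipment variation): spread of each appraiser's
# own repeated readings, averaged across appraisers.
within = [statistics.pstdev(vals) for vals in measurements.values()]
repeatability = statistics.mean(within)

# Reproducibility (appraiser variation): spread between the
# appraisers' average readings.
means = [statistics.mean(vals) for vals in measurements.values()]
reproducibility = statistics.pstdev(means)

# Bias (accuracy): overall average versus the known reference.
all_readings = [m for vals in measurements.values() for m in vals]
bias = statistics.mean(all_readings) - reference

print(f"repeatability  ≈ {repeatability:.4f} mm")
print(f"reproducibility ≈ {reproducibility:.4f} mm")
print(f"bias            ≈ {bias:+.4f} mm")
```

In this made-up data set, each person repeats well (small repeatability), but appraiser B reads consistently high, which shows up as reproducibility and as a positive overall bias.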
The individual test method itself is important, but so is the condition of the parts. When we test, we rarely test a part for just one performance measure; we have a series of tests to conduct. So another thing to consider in planning our test sequence is the effect of cumulative testing. To save on the number of parts and the test time required, we may decide to run one lot of product through multiple different performance tests. We need to make sure that we can isolate the effects of one test from a subsequent test, or understand whether they're going to build upon each other to make the part weaker or stronger during a later test. For example: can we tensile test a joint and then flex test the body? Does the tensile test weaken our part body so that the flex test results are now in question? Another example: are we exposing parts to a liquid as part of one test, and then that liquid eats away at our parts over time, so a subsequent test fails? Another thing to consider about test sequence is whether the test exposure makes the parts stronger. For example, to age products in a lab environment, we can use temperature and humidity to accelerate the process. But temperature and humidity may also cure our products, making them stronger and giving us tested strength results that are higher than reality.
Other things to consider about test methods: as part of the acceptance criteria of a test method, we can use visual quality standards. We covered that in a previous podcast. We also need to be careful when choosing the product to test. We talked about the importance of production equivalence in an earlier episode, too.
To conclude: we can think about test methods themselves as having requirements. We validate our test methods against those requirements to ensure that we're getting usable results. Part of making sure results are usable is understanding the precision and accuracy of the test method. Having a validated test method is one step. Another step is ensuring that our total test flow of multiple tests doesn't interfere with the results.
Test methods, test method validations, and Gage R&R can get complex. You can talk with your quality engineering friends about how best to execute your test method validations. I'll also include some links on the podcast blog.
What is today’s insight to action? Realize that sometimes test methods are tied to our product's particular use environment. What was done for one similar product may not be suitable for another. If you're planning to run a test for your product, make sure it's a validated test method. Ensure that the precision and accuracy of the test are right for your product’s requirements. And when creating a test plan for your product, watch out for the effects of cumulative testing on product.
Please visit this podcast blog and others at qualityduringdesign.com. Subscribe to the weekly newsletter to keep in touch. If you like this podcast or have a suggestion for an upcoming episode, let me know. You can find me at qualityduringdesign.com, on LinkedIn, or you could leave me a voicemail at 484-341-0238. This has been a production of Denney Enterprises. Thanks for listening!