Congratulations! You are in the last lesson of this course called “Monitoring Field Data”. Here's what you'll learn in this lesson:
- You're going to learn how to define categories to monitor in the field after the product is in the hands of our customers and being used. You have a unique perspective as a design engineer to add to this important part of new product development.
- We're also going to identify field monitoring as an input into continuous improvement of our designs and our products.
Let's start with defining categories to monitor. We've designed our product successfully, we've decided to release it to market, and now we have the responsibility of monitoring how our product is doing in the market.
Design engineers have great input into field monitoring. You have been front and center with the product throughout its development, so you know the product's weak spots and strengths: things to watch out for in the field. We can use our FMEA results, test results, and even our requirements for this. These are all ways that we can help guide and facilitate what needs to be monitored in our product when it's being used in the field by our customers. There was a lot of information generated and iterated on through the new product development process, and we can utilize it to help define what's to be monitored in the field.
Sometimes we choose things to monitor that are just obvious, but they may not be the most important as far as the functionality, safety, and performance of our product. The pieces of our work that can help us define field monitoring categories are:
- Our design inputs, like our requirements and our user needs. These design inputs are the basis of most of our design decisions about this product.
- If there is a disconnect between the field use and what we used for developing our design, that could pose a problem and it's definitely something we would want to investigate.
- We may also look to critical design characteristics, those things that we used our risk analysis to help us define as critical to safety, critical to quality, or critical to performance. There may be aspects of these design characteristics that we want to monitor in the field.
- We can also look to our test results. Was there a performance or feature test whose results passed our requirements but sat at the very edge of our performance limits? If we're still comfortable releasing the product, that might be okay. But that might be something that we want to monitor in the field. And we can use our risk analyses to help us define what to watch for: either the causes of the failures or their effects.
- We can also use risk analyses to help the people that are monitoring the data in the field or interacting with our customers. If our customers are experiencing a particular symptom, what could be the pieces or components that would be failing? If our customer facing partners have this information, they may be better able to help diagnose and help our customers resolve their problem. It may also help us when we're trying to figure out what is happening in the field and what could be causing problems for our customers.
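As a concrete illustration, a field monitoring plan built from these design-phase sources can be captured as simple structured data. This is a hypothetical sketch in Python; the item names, sources, and signals are illustrative examples, not part of any prescribed method.

```python
from dataclasses import dataclass

@dataclass
class MonitoringItem:
    """One thing to watch in the field, traced back to design work."""
    name: str    # what to monitor
    source: str  # where it came from: requirement, FMEA, test result...
    signal: str  # what field data would indicate a problem

# Hypothetical plan drawn from the sources discussed above.
plan = [
    MonitoringItem("operating temperature range", "design input / requirement",
                   "complaints of shutdowns in hot environments"),
    MonitoringItem("seal integrity", "FMEA: critical-to-safety characteristic",
                   "leak reports, warranty returns coded 'leak'"),
    MonitoringItem("battery life", "test result near lower performance limit",
                   "runtime complaints trending up over time"),
]

# Group the plan by source so the team can see which design
# artifacts are driving field monitoring.
by_source = {}
for item in plan:
    by_source.setdefault(item.source, []).append(item.name)

for source, names in sorted(by_source.items()):
    print(f"{source}: {', '.join(names)}")
```

Even a lightweight structure like this keeps the traceability from field monitoring back to the design decisions that motivated it.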
Let's talk about field monitoring for continuous improvement.
We also have a responsibility to continue monitoring the risk of the product after it's in the field. We release the product, understanding that there is going to be a certain level of risk. Remember that we can't eliminate all risk. We just have to try to manage it.
- Is what we thought would happen in the field actually happening in the field?
- Are we getting field failures that we didn't think would happen or would not happen as often?
- Are the effects of those failures more severe than we thought?
- And even in the cases of its use and its use environment: are people using it in a place where we didn't think they would be using it, in a more caustic environment than we had planned? That might affect the long-term reliability of a product.
- Is there a different set of users using the product for something that we hadn't anticipated?
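The questions above boil down to comparing what we predicted during risk analysis against what the field is actually telling us. This is a hypothetical sketch; the failure modes, rates, and the 1.5x flagging threshold are made-up numbers for illustration only.

```python
# Hypothetical check: are field failure rates in line with what we
# predicted during risk analysis? Rates are illustrative
# (failures per 10,000 units per year).
predicted = {"seal leak": 2.0, "display fault": 5.0, "hinge crack": 1.0}
observed  = {"seal leak": 1.5, "display fault": 12.0, "hinge crack": 1.1}

TOLERANCE = 1.5  # flag anything more than 1.5x the predicted rate

flagged = [mode for mode, rate in observed.items()
           if rate > TOLERANCE * predicted[mode]]

for mode in flagged:
    print(f"Investigate '{mode}': observed {observed[mode]} vs "
          f"predicted {predicted[mode]} per 10k units/yr")
```

A comparison like this doesn't answer the questions by itself, but it tells the team where to start asking them.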
We can use a standard quality tool to help us with continuous improvement through field monitoring.
The Plan Do Study Act is a continuous improvement cycle. The idea is that we're never truly done with continuous improvement. There's always something new we can learn and ways that we can act to make improvements.
In the case of field monitoring, the Plan Do Study Act cycle could look like this:
We PLAN for field monitoring. We're actually taking these design engineering inputs and helping our teams choose what to monitor and how to monitor it.
The DO part of our Plan Do Study Act is actually monitoring and collecting the data. Product design engineers may not have a lot of activities in this section but it's an activity that needs to be done. We need to collect data from the field.
Next we want to STUDY the data that we've collected and analyze the results. And now I feel like we have come full circle with Course 1 as we're studying the results. We're going to be looking at things like complaints. We're going to be monitoring for disconnects in our data per the plan. We're going to assess how bad the situation is: is it happening more often than we thought? Is it more severe than we thought it would be? We may decide that we need to take action. We need to start with a symptom that we're getting from the field, investigate it, and eventually get to the root cause of the problem. We can use our FMEAs to help us evaluate:
- what it is that we're seeing
- what the root cause might be
- whether or not something is okay
- the current controls that are in place
- We can evaluate if the action items that we have were implemented. If they weren't, why not? Are the current controls we have in place adequate?
- Is there another cause that's creating this failure?
- Is our effect what we thought it was?
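One way to picture using the FMEA during this STUDY step is as a lookup from a field symptom back to candidate causes and the controls that should have caught them. This is a hypothetical sketch; the FMEA rows, symptoms, and control names are invented for illustration.

```python
# Hypothetical FMEA excerpt: each row links an effect (the symptom a
# customer would see) to a cause and its current control.
fmea = [
    {"effect": "unit won't power on", "cause": "battery connector fatigue",
     "control": "connector cycle test"},
    {"effect": "unit won't power on", "cause": "firmware boot fault",
     "control": "power-cycle regression suite"},
    {"effect": "intermittent display", "cause": "flex cable wear",
     "control": "flex-life test"},
]

def candidate_causes(symptom):
    """Given a symptom reported from the field, list FMEA causes and
    the controls we'd expect to have caught them."""
    return [(row["cause"], row["control"])
            for row in fmea if row["effect"] == symptom]

for cause, control in candidate_causes("unit won't power on"):
    print(f"Check cause '{cause}' (current control: {control})")
```

This is the same mapping that can help customer-facing partners narrow down a problem: the symptom points to a short list of causes, and the controls column tells us what to re-examine if one of those causes turns out to be real.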
And finally, we would want to ACT on the results. We want to do something about the information that we've learned.
- Do we need to change any part of our system?
- Have the parts of our system that we don't have control over changed? And do we need to react to that?
- Is our communication with our customer clear?
And as a team, we need to make a decision on what actions to take.
When we design a product, we design it for a particular user, user group, and use environment, for a particular use, to perform a certain function. Now in field use, if those things have changed, we need to know about it and understand the risks of it. Are all of those baseline decisions that we used to make our product still true for how the product is actually being used? Whether this is a problem or an opportunity (or maybe both) depends on where we landed with the product when it went out the door, when we finished the design, versus how it's being used.
Now, we also may have produced a product that interfaces with other things. How have those other things developed and matured? Did the equipment change? Did the standards of practice wherever our product is being used get updated, and did they change?
These are all things that we need to monitor with our product after it's been in the field.
Let's review what we talked about in this lesson.
We talked about defining categories to monitor in the field using all the information that we created, generated and iterated on during new product development to define categories that are going to matter to us and to our users. We can use the results of field monitoring as input into continuous design improvement, either improving on the designs that we have now or as input into the next iteration of our product design.
And I want to congratulate you! You've reached the end of Course 1 of the Quality During Design Journey: Design for Problems and Risks. I'll see you in the course wrap up.