Data is a valuable commodity to a reliability program because it provides a metric by which performance and longevity can be measured. Analysis of the collected data can also reveal specific characteristics of a part or component’s behavior (characteristic life, infant mortality vs. wearout failure), identify the Root Cause(s) of Failure and, when applicable, point to possible causes of premature failures (e.g., production flaws or improper maintenance). Furthermore, in the absence of failure data for the part/component in question, we can sometimes estimate its reliability (or some related metric) using data from similar and/or legacy products with the same use conditions. Thus, there are a number of benefits that emphasize the importance of establishing an effective Reliability Data Collection and Analysis program.
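To make the notions of “characteristic life” and “infant mortality vs. wearout” concrete, one common formulation (offered here only as a point of reference, not as the specific model used for any particular product) is the two-parameter Weibull reliability function:

    R(t) = exp[ -(t/η)^β ]

where η is the characteristic life (the time by which approximately 63.2% of units have failed) and β is the shape parameter: β < 1 is generally interpreted as infant mortality, β ≈ 1 as random (roughly constant-rate) failures, and β > 1 as wearout.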

From the RMQ perspective, raw data comes from any of the Testing and Simulation being performed, from production processes on the manufacturing floor, and from actual operation of products/services after delivery to the customers (both before and after the warranty period). Organizations can also obtain data generated by others on similar types of products and either compare it with their own experience or use it as a surrogate data source if they don’t have corresponding data of their own. That being said, a company can collect as much data as it wants, but if it doesn’t invest in the skilled resources necessary to analyze and properly interpret it, then the collected data provides no added value to the organization. Even worse, it can lead the organization to make costly decisions based on “bad” interpretations. When done properly, however, analyzed data becomes a valuable input for component and system Reliability Modeling and Prediction, and can also be useful for system Affordability estimates and considerations during conceptual design stages.

For RMQ activities, useful data comes in the form of accumulated hours, number of failures experienced, root causes of failures, failure modes, number of maintenance actions required (and how long it took to fix each one), quality process monitoring (accept/reject, process capability), effectiveness of corrective actions, dollars invested in performing individual RMQ activities, etc. By collecting quality data during testing and/or operation, and employing the appropriate combination of detailed statistical analyses (e.g., a Weibull Analysis), insight can be gained into product failure rates, predominant failure modes (and their failure characteristics), effectiveness of maintenance activities, identification of processes drifting out of spec, Return on Investment (ROI) for the RMQ Program, and the effectiveness of corrective actions implemented to “fix” any noted deficiencies. A structured approach to a process like this is known as a Failure Reporting, Analysis and Corrective Action System (FRACAS), in which data is collected and analyzed throughout the product/system life cycle to identify behavioral trends and applicable solutions. This type of information is not only of value to the system for which it was collected/analyzed, but for future systems utilizing similar components and/or assemblies as well.
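To illustrate the kind of statistical analysis referred to above, the sketch below fits a two-parameter Weibull distribution to a small set of failure times using SciPy. It is a minimal example under stated assumptions, not a prescribed method: the failure times and variable names are invented for illustration, and a real analysis would draw on the organization’s own test or field (FRACAS) records and include censored-data handling and goodness-of-fit checks.

    # Minimal Weibull fit to hypothetical failure-time data (hours).
    # The values below are invented for illustration only.
    from scipy.stats import weibull_min

    failure_hours = [412, 530, 655, 780, 905, 1010, 1180, 1350, 1500, 1720]

    # Fit a two-parameter Weibull (location parameter fixed at zero).
    beta, _, eta = weibull_min.fit(failure_hours, floc=0)

    print(f"Shape parameter (beta): {beta:.2f}")          # beta < 1: infant mortality; beta > 1: wearout
    print(f"Characteristic life (eta): {eta:.0f} hours")  # time by which ~63.2% of units have failed

    # Estimated reliability at a time of interest, e.g., 500 hours
    t = 500
    print(f"R({t} h) = {weibull_min.sf(t, beta, scale=eta):.3f}")

In practice, a fit like this would typically be performed separately for each failure mode of interest, since pooling dissimilar modes tends to obscure the shape parameter and, with it, the insight into whether the population is exhibiting infant mortality or wearout behavior.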