Testing can be performed for a variety of purposes. One simple example is the difference between failure discovery testing and demonstration testing. In the first, failures are desirable because they enable engineers to identify how a product or system will fail, and then to correct or modify the design to prevent those failures or delay their onset. This approach is commonly known as Reliability Growth Testing (RGT), though Reliability Growth can also be achieved without testing. RGT is typically performed on a collection of prototypes in the later design stages, so that there is time for additional modifications based on any design flaws identified during the testing. Analysis-based reliability growth techniques have grown in popularity because they are performed earlier in the design process, when it is less costly to alter the design.
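Reliability growth during RGT is often tracked with a power-law model such as Crow-AMSAA, where cumulative failures follow N(t) = λt^β and β < 1 indicates a decreasing failure intensity (i.e., growth). The sketch below fits that model to hypothetical cumulative failure times using a simple log-log regression; the failure data are invented for illustration, and a real analysis would use maximum-likelihood estimates and goodness-of-fit checks.

```python
import math

# Hypothetical cumulative times (hours) at which successive failures
# occurred during a reliability growth test.
failure_times = [45.0, 110.0, 190.0, 320.0, 480.0, 700.0, 1000.0]

# Crow-AMSAA model: expected cumulative failures N(t) = lam * t**beta.
# A common graphical estimate regresses ln(N) on ln(t).
xs = [math.log(t) for t in failure_times]
ys = [math.log(i + 1) for i in range(len(failure_times))]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
lam = math.exp(my - beta * mx)

# beta < 1 means the time between failures is stretching out as fixes
# are incorporated, i.e., reliability is growing.
```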
Demonstration testing, on the other hand, is generally used to evaluate a product's or system's performance by demonstrating a certain level of reliability (e.g., how long it will operate before it fails). This Reliability Demonstration/Qualification Testing (RDT/RQT) is typically performed in the latter stages of product development, at which point failures are undesirable and underperformance may lead to costly and/or time-consuming design reevaluations. A similar type of demonstration testing, known as Production Reliability Acceptance Testing (PRAT), is usually performed during the production phase of a product's life cycle to ensure that the production process can achieve the reliability of the design – in other words, to ensure the manufacturing process does not introduce flaws that hinder performance.
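To make the demonstration idea concrete, a classic zero-failure test plan under an exponential (constant failure rate) life assumption requires total test time T = m·ln(1/(1−CL)) to demonstrate an MTBF of m at confidence level CL. The sketch below computes this; the 2,000-hour target and 90% confidence are illustrative values, not figures from the source, and other distributions or allowed-failure counts would change the formula.

```python
import math

def zero_failure_test_time(mtbf_target: float, confidence: float) -> float:
    """Total test time (same units as the MTBF) required to demonstrate
    mtbf_target at the given confidence level with zero observed
    failures, assuming an exponential life model: exp(-T/m) <= 1 - CL.
    """
    return mtbf_target * math.log(1.0 / (1.0 - confidence))

# Example: demonstrating a 2,000-hour MTBF at 90% confidence takes
# roughly 4,600 cumulative unit-hours with no failures allowed.
hours = zero_failure_test_time(2000.0, 0.90)
```

The test time can be accumulated across multiple units, which is why demonstration tests are often run on several samples in parallel.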
Additional screening may be performed on a production lot to eliminate parts/systems with latent defects (defects that cannot be discovered through inspection). This Reliability Screening may include Environmental Stress Screening (ESS), Burn-In, or Highly Accelerated Stress Screening (HASS), which differ in the type and severity of the stimuli applied to products to precipitate premature failures.
Once a general testing approach has been selected, the test cannot simply be performed. Instead, a number of factors must be considered and planned in order to achieve the desired results (or to test the appropriate factor). Design of Experiments (DOE) is the common term and practice for this necessary due diligence in planning a test. Factors to consider include cost and/or schedule restrictions, applied stimuli (e.g., force, load cycling, temperature, voltage), available equipment, acceleration factors, and so on.
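One basic DOE building block is the full-factorial design, which enumerates every combination of factor levels as a separate test run. The sketch below builds such a run matrix for three hypothetical two-level factors (the factor names and levels are invented for illustration); real plans often use fractional-factorial designs to cut the run count.

```python
from itertools import product

# Hypothetical stress factors and their levels for a reliability test.
factors = {
    "temperature_C": [25, 85],
    "voltage_V": [3.0, 3.6],
    "load_cycles": [1_000, 10_000],
}

# Full-factorial design: one run per combination of factor levels.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

# Three two-level factors -> 2 * 2 * 2 = 8 test runs.
```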
Despite the long history between testing and reliability, a more recent approach involves Modeling & Simulation (M&S) as well as other analysis-based activities. Such alternatives should be considered, as Testing Simulation may provide a more suitable option given the available resources.
Testing is performed to observe a component’s or system’s behavior, which is often quantified through various performance metrics. Collecting and analyzing the data is a critical component for any test, and ensures that meaningful knowledge is gleaned from the test results. A Failure Reporting, Analysis and Corrective Action System (FRACAS) is an important tool for collecting and analyzing data down to the Root Cause of failure. Weibull Analysis is an important data analysis technique for characterizing life and failure mode characteristics of a part, product or system.
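As a concrete illustration of Weibull Analysis, the sketch below estimates the shape (β) and scale (η) parameters from a small set of hypothetical times-to-failure using median-rank regression with Benard's approximation, a common hand-calculation approach. The failure data are invented for illustration; production analyses typically use maximum-likelihood fitting and handle censored (suspended) units.

```python
import math

# Hypothetical complete (uncensored) times-to-failure, in hours.
ttf = sorted([120.0, 250.0, 380.0, 520.0, 710.0, 980.0])
n = len(ttf)

# Median-rank regression: for the Weibull CDF F(t) = 1 - exp(-(t/eta)**beta),
# ln(-ln(1 - F)) is linear in ln(t) with slope beta. Benard's approximation
# gives the median rank of the i-th ordered failure as (i - 0.3)/(n + 0.4).
xs = [math.log(t) for t in ttf]
ys = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]
mx, my = sum(xs) / n, sum(ys) / n
beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
eta = math.exp(mx - my / beta)  # from the intercept: -beta * ln(eta)

# Rule of thumb: beta < 1 suggests infant mortality, beta ~ 1 random
# failures, beta > 1 wear-out; eta is the characteristic life (63.2% failed).
```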
This brief overview of the different reliability testing strategies only scratches the surface. A more in-depth discussion of the different types of tests, and of the factors to consider when planning a specific type of test, is provided in the System Reliability Toolkit V.