I usually accelerate temperature-driven tests using Arrhenius models, especially for electronic devices.
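As a rough sketch of the Arrhenius approach, the acceleration factor between a test temperature and the nominal use temperature can be computed like this (the 0.7 eV activation energy is an illustrative placeholder; the real value depends on the failure mechanism):

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K


def arrhenius_af(t_use_c: float, t_test_c: float, ea_ev: float = 0.7) -> float:
    """Arrhenius acceleration factor of a test temperature vs. use temperature.

    Temperatures in Celsius, activation energy in eV (mechanism-dependent).
    """
    t_use_k = t_use_c + 273.15
    t_test_k = t_test_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_test_k))


# Example: testing at 125 degC to represent field use at 55 degC
af = arrhenius_af(55.0, 125.0)
print(f"Acceleration factor: {af:.1f}")
```

One test hour at 125 °C then represents roughly `af` hours at 55 °C for that assumed mechanism.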
For other component types I try to follow HALT and CALT methodologies: I look for design failure modes by testing at high load/stress levels, then focus on the main failure modes and their driving stresses, testing them at elevated stress levels.
In these fast tests I try to (1) detect failure modes/degradation and anticipate failure, and (2) repeat the tests at different stress levels, searching for a model that can project life at nominal stresses.
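Step (2) can be sketched as a life-stress regression: fit the linearized Arrhenius model ln(life) = ln(A) + (Ea/k)·(1/T) to lives observed at several elevated temperatures, then extrapolate to the nominal temperature. The data values below are illustrative, not real test results:

```python
import math

K_EV = 8.617e-5  # Boltzmann constant, eV/K

# (temperature degC, observed characteristic life in hours) per stress level
data = [(150.0, 400.0), (125.0, 1500.0), (105.0, 4800.0)]

# Linearize: ln(life) = ln(A) + (Ea/k) * (1/T); ordinary least squares on (x, y)
xs = [1.0 / (t + 273.15) for t, _ in data]
ys = [math.log(life) for _, life in data]
n = len(data)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

ea_ev = slope * K_EV          # implied activation energy of the fitted model
t_nom_k = 55.0 + 273.15       # nominal use temperature
life_nom = math.exp(intercept + slope / t_nom_k)
print(f"Ea = {ea_ev:.2f} eV, projected life at 55 degC = {life_nom:.0f} h")
```

In practice I would fit a full life distribution (e.g. Weibull) at each stress level rather than a single characteristic life, but the extrapolation idea is the same.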
When the part is critical and there are enough resources, we run more detailed tests and experiments, with deeper analysis and more test conditions, trying to build models and to understand the physics-of-failure (PoF) phenomena in great detail.
In any case, given the gap between statistical sample-size recommendations and the real capacity of companies and project lead times, we are typically forced to work with small-sample statistics and build confidence in reliability from design margins to stresses and to different usage conditions, anticipating degradation phenomena instead of waiting for failure.
I have an Excel calculator to check the recommended statistical sample size for different cases: sample sizes based on a zero-failure binomial distribution, or n failures, or MTBF demonstration given test time or stress levels…
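The zero-failure case is the classic "success run" rule: with n units tested and 0 failures, reliability R is demonstrated at confidence C when R^n ≤ 1 − C, so n = ln(1 − C) / ln(R), rounded up. A minimal version of that calculation:

```python
import math


def zero_failure_sample_size(reliability: float, confidence: float) -> int:
    """Units needed to demonstrate `reliability` at `confidence` with 0 failures.

    Success-run formula: n = ln(1 - C) / ln(R), rounded up to a whole unit.
    """
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))


# Example: demonstrate 90% reliability at 90% confidence with 0 failures
print(zero_failure_sample_size(0.90, 0.90))  # -> 22 units
```

The n-failures and MTBF-demonstration variants follow the same idea but use the binomial and chi-squared distributions instead of this closed form.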
Hope this helps,
Reliability Engineer at HP