The economic return on investment of a commercial photovoltaic system depends greatly on its long-term performance and, hence, its degradation rate. Many methods have been proposed for assessing system degradation rates from outdoor performance data. However, comparing values reported by different analysts and research groups requires a common performance baseline, and consistency between methods and analysts can be a challenge. An interlaboratory study was conducted in which volunteer analysts reported on the same photovoltaic performance data using different methodologies. Initial variability of the reported degradation rates was so high that the analysts could not reach a consensus on whether a system had degraded at all. More consistent values were obtained when written guidance was provided to each analyst. Variance between analysts was reduced further by using the free open-source software RdTools, yielding a reduction of more than two orders of magnitude relative to the first round, in which multiple analysis methods were allowed. This article highlights many pitfalls in conducting “routine” degradation analysis, and it addresses some of the factors that must be considered when comparing degradation results reported by different analysts or methods.