To use the LoA, TDI, or CP methods, one must specify the clinically acceptable difference (CAD). Importantly, this is a context-dependent decision that should be made by an expert who understands what it means for two devices to be practically equivalent. Whether the differences between devices generally fall within the CAD depends on both their relative bias and their precision. If bias and imprecision are sufficiently low relative to the CAD, the devices can be considered interchangeable for practical purposes. This decision matters, because a poorly chosen CAD leads to erroneous conclusions about the degree of agreement. The limits of agreement approach determines whether the differences between the devices are, on average, small enough to be considered clinically acceptable; this is assessed by checking whether the limits of agreement fall within the range of clinically acceptable differences. The coverage probability (CP) proposed by Lin et al. [6] answers the same question more directly by estimating the probability that the differences between the devices fall within a tolerance interval, which Bland and Altman call the range of clinically acceptable differences. Higher probabilities indicate closer agreement. In practice, the researcher must decide whether the CP value is large enough for the two devices to be considered interchangeable.
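The relationship between the limits of agreement, the CAD, and the coverage probability can be illustrated with a minimal sketch on simulated paired readings; the device names, the data, and the CAD value of 5 units are illustrative assumptions, not values from the text:

```python
import numpy as np

# Simulated paired readings from two hypothetical devices (illustrative only).
rng = np.random.default_rng(0)
device_a = rng.normal(100.0, 10.0, 200)
device_b = device_a + rng.normal(0.5, 2.0, 200)  # small bias plus noise

d = device_a - device_b
bias, sd = d.mean(), d.std(ddof=1)

# 95% Bland-Altman limits of agreement.
loa_lower, loa_upper = bias - 1.96 * sd, bias + 1.96 * sd

# Empirical coverage probability: fraction of differences within the CAD.
cad = 5.0  # clinically acceptable difference; context-dependent, set by an expert
cp = np.mean(np.abs(d) <= cad)

print(f"bias={bias:.2f}, LoA=({loa_lower:.2f}, {loa_upper:.2f}), CP={cp:.2f}")
```

If both limits of agreement lie inside ±CAD, the CP will be high; the CP simply reports that closeness as a single probability instead of an interval comparison.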

Assuming a fixed device effect, Carrasco et al. [17] have shown that the CCC for repeated measurements corresponds to the intraclass correlation coefficient (ICC) and can be written as such. Another approach in the literature first applies a statistical test to determine whether clustering matters and, if it does not, conducts the analysis without adjusting for clustering [37]. This approach is not recommended: even if the clustering test is not statistically significant, the clustering in the data may still be strong enough to distort the agreement index.
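To make the CCC concrete, here is a minimal sketch of Lin's moment estimator on simulated data; the data and the systematic shift of 2 units are illustrative assumptions. It also shows why the CCC, unlike Pearson's correlation, is penalized by a constant bias between devices:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient (moment estimator)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return 2.0 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

# Simulated readings: both devices track the same truth, but device 2
# carries a systematic shift, which lowers the CCC but barely affects r.
rng = np.random.default_rng(1)
truth = rng.normal(50.0, 5.0, 150)
dev1 = truth + rng.normal(0.0, 1.0, 150)
dev2 = truth + rng.normal(2.0, 1.0, 150)

r = np.corrcoef(dev1, dev2)[0, 1]
print(f"Pearson r = {r:.3f}, CCC = {ccc(dev1, dev2):.3f}")
```

Because the squared mean difference appears in the denominator, the CCC can never exceed the (positive) Pearson correlation; the two coincide only when the devices agree in both mean and scale.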

The need for confidence intervals, in addition to the limits of agreement, is clearly stated in the literature, and rightly so. However, we believe it is just as important, if not more so, to report the variance components (for example, the between-subject and within-subject variances) and the bias estimates alongside the agreement indices, as these shed light on the source of any disagreement. In addition, it is important to be aware that disagreement between devices, as reflected in the agreement indices, can mask differences in precision and measurement error between the devices, and may reflect underlying mean biases that are not properly captured by overall averages.
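The recommendation to report variance components and bias alongside the agreement indices can be sketched as follows; this is a minimal one-way ANOVA decomposition on simulated replicated differences, and the design (30 subjects, 3 replicates, a true bias of 1.0) is an illustrative assumption:

```python
import numpy as np

# Simulated design: n subjects, k replicate device differences per subject.
rng = np.random.default_rng(2)
n, k = 30, 3
subject_effect = rng.normal(0.0, 3.0, n)  # between-subject variation
diffs = subject_effect[:, None] + 1.0 + rng.normal(0.0, 1.5, (n, k))

subject_means = diffs.mean(axis=1)
grand_mean = diffs.mean()  # estimated overall bias between the devices

# One-way ANOVA variance components for a balanced design.
msw = ((diffs - subject_means[:, None]) ** 2).sum() / (n * (k - 1))  # within
msb = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)        # between
var_between = max((msb - msw) / k, 0.0)

print(f"bias={grand_mean:.2f}, within-subject var={msw:.2f}, "
      f"between-subject var={var_between:.2f}")
```

Reporting these three quantities separately shows whether a poor agreement index is driven by systematic bias, by subject-level heterogeneity, or by replicate-level measurement error, which a single index cannot distinguish.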
