Note: These years were chosen because some products showed markedly better performance over the 2018-19 period (e.g., lower false positives from M$).

Sample sizes:
Malware Protection (M): 40,393 samples (~40k)
Real-World Protection (R): 2,991 samples (~3k)
Advanced Threat/Enhanced Real-World Tests (A): 30 samples

A pattern emerges when looking at four randomly chosen products. M$ Defender and Kaspersky show a logarithmic increase in failed protection (missed samples) as the number of samples tested increases, i.e., a relatively minor increase in failures for a massive increase in samples tested: specifically, from 8 (for 3k samples) to 20-30 (for 40k samples). From a mathematical point of view, this is an intuitively reasonable result. The differential diagnosis: the much larger (> 2x) false-positive count belongs to M$.

The results from Bitdefender and Eset are counter-intuitive.

Bitdefender: a low number of failures (5 in M, 11 in R, 8 in A), apparently independent of sample size. However, its false positives are comparable to (just under) M$'s. One could think Bitdefender is tuned slightly aggressively, favouring higher detection at the risk of more false positives.

Eset: a relatively high number of failures (marginally to considerably higher than M$), apparently independent of sample size. However, its false positives are (much) lower than those of the other three products. One could think ESET is tuned slightly conservatively, accepting slightly more missed samples in exchange for fewer false positives.

I like to look at patterns. It's just my 2c's worth.

feandur

Acknowledgements:
AV-Comparatives: https://www.av-comparatives.org/archive/
Andy Ful: https://malwaretips.com/threads/the-best-home-av-protection-2019-2020.106485/#post-927440
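P.S. As a rough sanity check of the "logarithmic" (sub-linear) pattern, the per-sample miss rates can be worked out from the figures quoted above. This is just a sketch; the value 25 is an assumed midpoint of the 20-30 range, not a published number.

```python
# Rough sanity check of the sub-linear failure pattern.
# Figures are taken from the post; 25 is an assumed midpoint of the
# quoted 20-30 miss range for the 40k-sample test.
tests = {
    "Real-World (R)":         (2_991, 8),    # (samples tested, samples missed)
    "Malware Protection (M)": (40_393, 25),
}

for name, (samples, missed) in tests.items():
    rate = missed / samples * 100
    print(f"{name}: {missed}/{samples} missed = {rate:.3f}%")

# A ~13.5x increase in sample size yields only a ~3x increase in misses,
# so the per-sample miss rate actually falls -- growth in failures is
# sub-linear (roughly logarithmic), consistent with the pattern above.
```

In other words, the absolute miss count grows far more slowly than the sample count, which is what a logarithmic-looking curve of failures versus samples would imply.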