Hi everybody!

There have been many discussions here about which AV test is the "real" one. Before we go searching for which samples are the real live viruses, it is better to do a histogram analysis of the outcome data. The large AV tests, those covering more than about 20 antivirus programs, can be evaluated first by histogram analysis, which shows whether the test was carried out under statistical control.

Two measurements matter in this analysis: kurtosis and skewness. Kurtosis is a measurement of the flatness or peakedness of a distribution: a histogram whose peak is lower or higher than the normal bell-shaped curve has abnormal kurtosis. If the kurtosis is near 3, the data can be considered to come from a "normal" distribution. A histogram whose occurrences are "piled up" away from the centre is called "skewed". If the data is piled up on the right, the skewness is negative; if it is piled up on the left, the skewness is positive. The larger the absolute value, the farther the data is shifted from the centre point of the normal distribution. There are many statistical programs on the market that calculate these "strange" quantities; their final output is the histogram bars together with the statistical calculations.

When we write antivirus programs, the main goal is a 100 % detection rate, and in this evaluation (the histogram analysis) it is the only goal. Because there is only this one goal, the skewness will ideally be a little below 0, and the kurtosis should be as near 3 as possible; the histogram curve is skewed towards the direction of the goal. All calculated points have to lie between the -3 and +3 sigma limits. If the calculations fall far outside these bounds, something in the test is disturbing the final outcome and the test is unacceptable.
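As a minimal sketch of what those statistical programs compute, here are the skewness, (Pearson) kurtosis and 3-sigma limits calculated in plain Python. Note that conventions differ between packages (population vs. sample moments, "excess" kurtosis = kurtosis - 3), so values may differ slightly from any one tool, and the example detection rates below are made up for illustration:

```python
from math import sqrt

def histogram_stats(data):
    n = len(data)
    mean = sum(data) / n
    # population central moments
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    skewness = m3 / m2 ** 1.5          # < 0: data piled up on the right
    kurtosis = m4 / m2 ** 2            # ~3 for a normal distribution
    # sample standard deviation, as used for the 3-sigma limits
    s = sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    limits = (mean - 3 * s, mean + 3 * s)
    return mean, s, skewness, kurtosis, limits

# Hypothetical detection rates (%), just to exercise the function:
rates = [100.0, 99.9, 99.5, 99.2, 98.7, 98.2, 97.5, 96.8, 95.1, 94.6]
mean, s, skew, kurt, limits = histogram_stats(rates)
print(f"mean={mean:.3f}  skew={skew:.3f}  kurt={kurt:.3f}")
print(f"3-sigma limits: {limits[0]:.3f} to {limits[1]:.3f}")
```

A test passes the check when the skewness is only slightly negative, the kurtosis is near 3, and every plotted point falls inside the 3-sigma limits.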
First of all, here are the VirusBulletin WinXP 2002 combined On-Demand test results, calculated manually.

Antivirus Zoo test in VirusBulletin 6-2002, WinXP: the Zoo test is a summary of three categories: Macro (4 056 objects), Polymorphic (15 011 objects) and Standard (1 585 objects). The sum for each category was calculated manually from the list on this site:
http://www.virusbtn.com/old/comparatives/WinXP/2002/test_sets.html

                          Detected (%)   Objects missed
                                         (of 20 652 objects)
Eset NOD32                  100.0000          0
GDATA AntiVirusKit           99.9952          1
Kaspersky KAV                99.9952          1
CA eTrust Antivirus          99.9855          3
F-Secure Anti-Virus          99.9806          4
McAfee VirusScan*            99.9564          9
NAI VirusScan                99.9564          9
Symantec NAV                 99.9322         14
DrWeb 4.28                   99.8305         35
GeCAD RAV                    99.7385         54
CA Vet Anti-Virus            99.6320         76
Sophos Anti-Virus            99.5448         94
Command AntiVirus            99.4916        105
Frisk F-Prot                 99.4722        109
VirusBuster                  99.2737        150
Alwil Avast32                99.2640        152
SOFTWIN BitDefender          99.0170        203
Trend PC-cillin              98.6926        270
Grisoft AVG                  97.9227        429
Norman Virus Control         96.8381        653
Panda Antivirus              94.6010      1 115
Leprechaun VirusBuster       91.1437      1 829
HAURI ViRobot                43.3324     11 703
CAT Quickheal                35.2460     13 373

*) The McAfee results were corrected from the On-Demand test in the VB August issue.

Here are also the results of three other antivirus tests: the AV-Test.org Zoo test of 11-2001, the VirusP test of 11-2002, and finally the "Saso Badovinac" AV test 22 from www.grc.com:
http://www.av-test.org/sites/tests.php3?lang=en
http://www.virus.gr/english/fullxml/default.asp?id=31&mnu=31
https://grc.com/x/news.exe?cmd=article&group=grc.security.software&item=84294&utag=

Finally, here are the histograms with the statistical calculations for the four different AV tests. First I dropped Hauri and Quickheal from the VB histogram analysis, because it was too obvious that they were too far from the common distribution.
Histogram, 5.-10. November 2002: VirusP AV-test
Total number of objects: 47 204
General statistics (ungrouped sample data):
  Pts plotted = 33, offscale pts = 0
  Mean = 75.67303
  Std dev (sample) = 18.67772
  Kurtosis = 2.14768
  Skewness = -0.64119
  3-sigma limits: 19.63986 to 131.70621
Process capability indices (based on +/- 3 sigma):
  Process capability = 112.06634
  USL = 100, CPU = 0.43415, Z(USL) = 1.30246
  9.64 % will be over the USL value of 100.
Based on standard normal distribution (derived from sample values).

Histogram, Mar-22-2003: "Saso Badovinac" AV-test
Total number of objects: over 100 000
General statistics (ungrouped sample data):
  Pts plotted = 20, offscale pts = 0
  Mean = 76.77129
  Std dev (sample) = 17.41526
  Kurtosis = 2.67307
  Skewness = -0.70072
  3-sigma limits: 24.52551 to 129.01706
Process capability indices (based on +/- 3 sigma):
  Process capability = 104.49155
  USL = 100, CPU = 0.4446, Z(USL) = 1.33381
  9.11 % will be over the USL value of 100.
Based on standard normal distribution (derived from sample values).

Histogram, Nov-1-2001: AV-Test.org AV-test
Total number of objects: 33 617
General statistics (ungrouped sample data):
  Pts plotted = 20, offscale pts = 0
  Mean = 96.30886
  Std dev (sample) = 4.053
  Kurtosis = 3.51577
  Skewness = -1.1743
  3-sigma limits: 84.14987 to 108.46785
Process capability indices (based on +/- 3 sigma):
  Process capability = 24.31798
  USL = 100, CPU = 0.30357, Z(USL) = 0.91072
  18.12 % will be over the USL value of 100.
Based on standard normal distribution (derived from sample values).

Histogram, Jun-1-2002: VirusBulletin WinXP 2002, 22 best AVs
Total number of objects: 20 652
General statistics (ungrouped sample data):
  Pts plotted = 22, offscale pts = 2
  Mean = 98.83018
  Std dev (sample) = 2.14387
  Kurtosis = 9.07663
  Skewness = -2.58717
  3-sigma limits: 92.39856 to 105.2618
Process capability indices (based on +/- 3 sigma):
  Process capability = 12.86323
  USL = 100, CPU = 0.18189, Z(USL) = 0.54566
  29.27 % will be over the USL value of 100.
Based on standard normal distribution (derived from sample values).
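The process-capability figures in the reports above can be reproduced from the mean and the sample standard deviation alone. This is a sketch under the assumption (consistent with the reported numbers) that the 100 % detection goal is treated as an upper specification limit (USL), with CPU = (USL - mean) / (3 * sigma), Z(USL) = (USL - mean) / sigma, and the "% over USL" being the tail area of the fitted normal curve above the USL:

```python
from math import erf, sqrt

def capability(mean, sigma, usl=100.0):
    z = (usl - mean) / sigma          # Z(USL)
    cpu = z / 3.0                     # one-sided capability index
    # share of the fitted normal distribution lying above the USL
    pct_over = 100.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))
    return cpu, z, pct_over

# Figures from the VirusP histogram above:
cpu, z, pct = capability(mean=75.67303, sigma=18.67772)
print(f"CPU={cpu:.5f}  Z(USL)={z:.5f}  {pct:.2f}% over USL")
```

Running this with the VirusP mean and standard deviation gives back CPU = 0.43415, Z(USL) = 1.30246 and 9.64 % over the USL, matching the report.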
We can see from these tables that the first three AV tests are very similar and acceptable, but the fourth one, the VirusBulletin WinXP 2002 On-Demand test, is skewed far too much towards the 100 % line, and there are not many antiviruses on the left side of the curve. When we look at the kurtosis and skewness values the conclusion is the same: VirusBulletin's values are too far from the ideal values!

Finally, I built the largest subset of the VirusBulletin data that passes the histogram analysis; it contains the 18 best AV programs. You can see the results here:

Histogram, Jun-1-2002: VirusBulletin WinXP 2002, best 18 AVs
Total number of objects: 20 652
General statistics (ungrouped sample data):
  Pts plotted = 18, offscale pts = 0
  Mean = 99.65324
  Std dev (sample) = 0.38853
  Kurtosis = 3.14462
  Skewness = -1.02386
  3-sigma limits: 98.48764 to 100.81885
Process capability indices (based on +/- 3 sigma):
  Process capability = 2.33121
  USL = 100, CPU = 0.29749, Z(USL) = 0.89247
  18.61 % will be over the USL value of 100.
Based on standard normal distribution (derived from sample values).

I think that VirusBulletin does not have a real Zoo test, because there are too many AVs that are able to find all or almost all of the objects in their test set. Personally, I am the last one to condemn the other three AV tests: they are under statistical control, and they do not share the same top five, which is what you would expect in a free competition. The second question is why, in those three tests, so many AVs are able to find over 95 % of the objects! I am curious to see the reasons why only the VB WinXP 2002 test is so far away from the other tests. It seems to me that there are people here who cannot stand the truth!

PS. Can you tell me briefly (with pictures, if possible) how I can add those attached GIF pictures to this comment, please?

"The truth is out there, but it hurts!"

Best Regards, Firefighter!