Discussion in 'other anti-virus software' started by InfinityAz, May 23, 2007.
Can I have one of those beers?
Yep, IC didn't look too worried. I thought he would have argued it; that would have been more interesting to read.
Isn't that his usual way? lol
Sure. As we all know, Canadian beer is superior to American brands, so I know you'll enjoy it. I've done the testing, so I can vouch for the results.
I was also expecting the Inspector to expand our knowledge with his input.
A few of the Castle Cops MIRT team stated the same thing on the SAS forums.
Obviously AV-Test.org and AV-Comparatives must be padding Symantec's results because it is impossible for them to be doing so well.
It looks like the flavor of the week is Kaspersky! However, I think my K is a lot sexier.
lol, doesn't beat mine though....
It's kind of hard to say what real-world results are. From my point of view, an AV must be trouble-free. That means very few false alarms and no noticeable system slowdowns.
KAV never disappoints with the numbers, but can sometimes be a pain to use, although less so on today's very fast computers.
I don't like finding half a worm in the apple. One example is with Nod32. It scanned some file on my machine for several weeks and detected nothing until I ran the darn thing. It was probably packed with something Nod32 could not open. Fortunately, I run as a limited user and nothing happened. I also restored a fresh image backup for good measure.
What I look for in tests done by others is consistency. It looks like Nod32 is not being consistent; many others are. By the way, Eset recently had a 3-day update outage for trial users.
Nothing is perfect. The first release of ZASS was a disaster, with the AV crashing all over the place. Symantec, which usually has the lowest count of false alarms in most tests, just had a major mess-up for its Chinese users. I suppose they only sold 246 copies and decided to show those pesky Reds something.
Finally, the real surprise is AVG. While this one sometimes does poorly on tests, it is a favorite of many, especially those involved in repairing computers. Len Silverman, the technology columnist for the Houston Chronicle, recommended using it on Vista, in part because Vista's native anti-spyware program offered a good complement. It costs nothing, updates reliably, and is easy to use. Full system scans are very slow.
Norton has done consistently well for some time, and to my great surprise people are now doubting AV-Test rather than giving Norton its due credit..
Norton has really made some major improvements to their products and their detection rates....
So people can choose to ignore Norton's excellent performance if they don't like it doing better than their favourite AVs...
So who is paying these 15 people at av-test full time?
The first 6 positions are not a big surprise.
Very good news about avast! and AVG Pro.
Bad news about BitDefender though......
A direct comparison vs. the most recent AV-Comparatives is shown below for the overlapping entries. The difference column is a simple arithmetic difference; the normalized difference is centered on the sample mean (average difference) and scaled by the sample standard deviation.
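To make the arithmetic concrete, here is a minimal Python sketch of that computation; the product names and detection rates are placeholders for illustration, not the actual published figures.

Code:
from statistics import mean, stdev

# Hypothetical detection rates (%) for products covered by both tests;
# purely illustrative values, not the published results.
av_comparatives = {"ProductA": 98.5, "ProductB": 96.2, "ProductC": 91.0}
av_test = {"ProductA": 97.1, "ProductB": 95.8, "ProductC": 85.4}

common = sorted(set(av_comparatives) & set(av_test))
diffs = {p: av_comparatives[p] - av_test[p] for p in common}

# Normalized difference: center each difference on the sample mean and
# scale by the sample standard deviation (i.e., a z-score).
mu = mean(diffs.values())
sigma = stdev(diffs.values())
for p in common:
    z = (diffs[p] - mu) / sigma
    print(f"{p}: difference = {diffs[p]:+.1f} pts, normalized = {z:+.2f}")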
As one might expect, the average difference of (AV-Comparatives) - (AV-Test) is positive: the AV-Test testbed was restricted to material less than 12 months old, so its detection rates should, on average, be lower, and they are. A more detailed breakdown of the www.AV-Test.org results by category examined is provided here. The results there do imply that the lower detection rates of some products were not due to major issues in a single category, but reflected performance across the testbed as a whole.
Still, owing to the large size of the testbeds, the discrepancy seen for F-Prot and NOD32 is really not expected if the testbeds were globally equivalent. For some products the results are effectively equivalent, so there's a bit of a disconnect. With a much smaller testbed, these types of deviations would, in fact, be expected.
The simple occurrence of a deviation doesn't mean either result is "wrong", simply that the tests are measuring somewhat different attributes. Circumstances along these lines could include shifts in the geographic origin of the malware, with associated differences in local coverage, non-malicious portions of the samples not being detected, and so on. One would have to examine the comprehensive scan results to assess why the results differ and whether the difference is material.
I'm not an expert at anything related to computers.
But I do find this topic interesting.
Whether you look at the performance numbers or the rankings, it looks like Webwasher and AVK2007 are the top 2, closely followed by Avira.
Webwasher appears to be a corporate product, and I haven't found anything from AVK in English, so I don't see myself going there.
I agree that there is always a difference between a controlled test and actual performance.
But I think that if I use one of the higher-ranked AVs with other products to give me a layered approach, along with limited user accounts, I should be pretty safe.
I see the differences between the AV testers, but I doubt that any of them would be stacking the deck for some reason. That would be a quick way to lose any income they might be making.
And thanks to all of you for my ongoing education.
Thanks, Blue, for doing the comparison. Some of what you said brings me back to my old stat class (which I hated). haha.
Yep. I've noticed the same.
My processor is similar to yours, and I don't experience slowdowns with version 6.0.621.
I wonder if the new NOD Beta 3 would do any better?
The only way to resolve this issue is to do a comprehensive analysis of the results. Unlikely to happen.
Apparently AV-vendors who attended the Frisk conference were given the malware before the test date.
That's totally not fair if this is the case........
Any way to find out which participants received the malware? Did all of them, or just a select few?
If this is true, the test is not worth a cent. It's a very unfair and significant advantage given to some product companies!
If I read the forum messages correctly, only some participants received the samples.
Can anyone prove this statement?
This would damage the credibility of av-test.org very seriously!
Add ~15GB worth of malware in 7 days? Next to impossible. It is more likely Andreas Marx/AV-Test had already finished testing by the time the results were announced, because in most cases the tests are done beforehand and the results announced later. Besides, I doubt Andreas openly announced "Hey, I'm releasing a test next week", so the question of whether vendors "optimized" their sample set is MOOT. I'm pretty darn sure Andreas is aware of this possibility and would have kept it in mind while doing the testing.
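As a rough back-of-envelope check (the ~100 KB average sample size is purely an assumption for illustration):

Code:
# Rough estimate: how many samples is ~15 GB, and how many per day would
# a vendor need to add detection for within a 7-day window?
testbed_gb = 15
avg_sample_kb = 100  # assumed average sample size, for illustration only

samples = testbed_gb * 1024 * 1024 / avg_sample_kb  # ~157,000 samples
per_day = samples / 7                               # ~22,500 samples/day

print(f"~{samples:,.0f} samples total, ~{per_day:,.0f} per day over 7 days")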
Besides, if this had been the case, many AV products that are not at the top would be, for example NOD32 and F-Prot. Even AVG scored well in this test despite there being no Grisoft representative at the conference.
What Andrey explained on the VBA32 forum is just his theory; it doesn't mean anything about whether this is true and amounts to cheating or not.
I do not know Andreas Marx personally, but I have seen a few of his writings, and I do know he's far from stupid and quite well oriented technically....
The first day of the conference was 15th May; the AVs were last updated on the 18th.
Even if vendors were given the samples on the first day, would they have had time to add them all?
Credit to Andreas for passing on the samples to vendors who did not attend.
I do agree that the tests were already done, however.....
It's still an unfair advantage to the AV companies that didn't go, as 15GB of new samples arrives at their doors for nothing, with no effort.
Just a side note: there were at least 3 people from Grisoft there.
I agree with firecat. And now that other companies get a link, nothing seems unfair. It's just, perhaps, malware that AV-Test no longer uses: malware older than 12 months, DOS and 16-bit. Who knows.