How Can We Know?

Discussion in 'other anti-virus software' started by JerryM, Apr 21, 2013.

Thread Status:
Not open for further replies.
  1. JerryM

    JerryM Registered Member

    Joined:
    Aug 31, 2003
    Posts:
    4,306
    In looking at Norton and Webroot, I wonder whether anyone except the developer can design a test that will do them justice. They claim that the various test organizations do not know how to test them properly to bring out their real capabilities.

    So if no one knows how to test the various applications which make such claims, how can we know what is effective and what is not?

    Don't waste your time with advice such as, "Test it yourself." That is a non-starter for the vast majority.

    So, back to the original question: if no one knows how to test the various AV applications, how can we know whether particular ones are any good, or whether it's just developer hype?

    Jerry
     
  2. Triple Helix

    Triple Helix Specialist

    Joined:
    Nov 20, 2004
    Posts:
    13,269
    Location:
    Ontario, Canada
  3. JerryM

    JerryM Registered Member

    Joined:
    Aug 31, 2003
    Posts:
    4,306
  4. Firecat

    Firecat Registered Member

    Joined:
    Jan 2, 2005
    Posts:
    8,251
    Location:
    The land of no identity :D
    The difference between Webroot and Symantec is that Webroot has never outright refused to participate in tests like AV-C and AV-Test. The AV-C debacle did come about, but Webroot continued until they suspended the testing and released the joint statement. They have not called AV-C's test bogus either; instead they emphasized their other technologies.

    There is a difference in the attitude of the two companies if you look carefully. That being said, they are probably more effective in the real world than AV-C's tests show.

    Real-world tests are probably a better indicator; however, to be perfectly honest, there is no single accurate way of testing. Each test is a different but perfectly legitimate scenario, and it is a fallacy to discount any of them, except when features have been intentionally disabled.
     
  5. TheWindBringeth

    TheWindBringeth Registered Member

    Joined:
    Feb 29, 2012
    Posts:
    2,171
    Regarding this:

    "AV-Comparatives recognizes that Webroot’s approach to protecting the user (preventing unauthorized data transmission, combined with protocolling changes and reversing them where possible) would require a different test procedure which is not applied by any testing lab so far."

    Given that it comes right after "preventing unauthorized data transmission", the "protocolling changes and reversing them where possible" line may be interpreted by some as a means to "undo" unauthorized data transmission. Is that what was meant? Does Webroot advertise the ability to "undo" information leaks?
     
  6. JerryM

    JerryM Registered Member

    Joined:
    Aug 31, 2003
    Posts:
    4,306
    Thanks for the replies. Of course, any vendor wants a fair trial for his product, and I respect that. A guy like me has no idea whether or not the criticisms are well founded.
    Jerry
     
  7. guest

    guest Guest

    The rollback feature of Webroot does not include "information leaks" (AFAIK); I can hardly see any product tracing a leak back to its source, penetrating that system, and erasing the stolen data. ^^

    To keep it simple:

    1. A suspicious process is detected. WSA "monitors" it (journaling all system areas it affects) but allows it to run, and a query is sent to the cloud for a verdict:

    2. Positive response: the process is considered safe, and all limitations are removed.

    3. Negative response: the process is considered malicious and is quarantined/removed. All actions it took are reversed (thanks to the journaling system).

    Now, what is important to know is that this rollback is not immediate; the verdict may take anywhere from seconds to days while the process is checked.
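
    To make the idea concrete, here is a rough sketch in Python of how such journaling and rollback of file changes could work. This is purely illustrative; it is not Webroot's actual code, and all the names in it are made up:

    # Illustrative sketch of journal-and-rollback for an unknown process.
    # Not Webroot's actual implementation; the names here are invented.

    import os
    import shutil

    class ChangeJournal:
        """Records file changes made by a monitored process so they can be undone."""

        def __init__(self, backup_dir="journal_backups"):
            self.backup_dir = backup_dir
            self.entries = []                       # list of (action, path, backup_copy)
            os.makedirs(backup_dir, exist_ok=True)

        def record_write(self, path):
            """Call this just before the monitored process writes to `path`."""
            if os.path.exists(path):
                backup = os.path.join(self.backup_dir, os.path.basename(path) + ".bak")
                shutil.copy2(path, backup)          # keep the pre-change content
                self.entries.append(("modified", path, backup))
            else:
                self.entries.append(("created", path, None))

        def rollback(self):
            """Undo recorded changes newest-first, so later edits don't mask earlier ones."""
            for action, path, backup in reversed(self.entries):
                if action == "created" and os.path.exists(path):
                    os.remove(path)                 # the file did not exist before: delete it
                elif action == "modified" and backup is not None:
                    shutil.copy2(backup, path)      # restore the original content
            self.entries.clear()

    def apply_cloud_verdict(journal, verdict):
        """verdict: 'good', 'bad', or None (still pending)."""
        if verdict == "good":
            journal.entries.clear()     # trusted: keep its changes, stop tracking
        elif verdict == "bad":
            journal.rollback()          # malicious: reverse everything it did
        # None: keep monitoring and journaling until a verdict arrives

    A real product would of course journal far more than file writes (registry keys, services, and so on), which is exactly why the delay before the verdict matters.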


    That is why standard test-lab procedures can't readily be applied to WSA.

    Correct me if I'm wrong.
     
  8. JerryM

    JerryM Registered Member

    Joined:
    Aug 31, 2003
    Posts:
    4,306
    I am not able to correct you, but assuming you are correct, how can we know that Webroot is able to do as they claim?
    Thanks,
    Jerry
     
  9. er34

    er34 Guest

    Jerry, there are many security specialists outside AV-C (which you truly love and believe in). So you can find one, ask him/her, and trust that specialist's opinion about the product in question.
     
  10. RejZoR

    RejZoR Lurker

    Joined:
    May 31, 2004
    Posts:
    6,426
    The real-world test from AV-C is the right thing. I just wonder what the methodology is for deciding what was compromised and what wasn't, especially for auto-sandboxing systems like the ones in avast! or Comodo (especially Comodo, since it only restricts on the host level).
     
  11. JerryM

    JerryM Registered Member

    Joined:
    Aug 31, 2003
    Posts:
    4,306
    Hi er34,
    I don't know any security specialists. If I did, I would do as I do with tests: consider the ones that seem to have a good reputation. In truth, I think AV-C is the best and least biased.

    I do have a lot of confidence in Firecat if he expresses an opinion.

    Thanks,
    Jerry
     
  12. sm1

    sm1 Registered Member

    Joined:
    Jan 1, 2011
    Posts:
    570
    AV-C clearly states in its file detection test reports that only one aspect (component) of the products is tested, and it recommends using a product personally to see whether it suits the user's needs. It is the user who is at fault if he skips the introduction in the report, goes straight to the results page, and jumps to a conclusion.

    The file detection test is carried out with the internet connection enabled. I give more weight to this test: if malware is detected in the scan, there is no chance for it to be executed in the first place, whereas other components only chip in after the malware has executed. I am not sure whether all changes made by malware will be completely rolled back by the antivirus software, and not all users are comfortable with sandboxing techniques.

    Hence there is no point in calling a test bogus if the testing methodology, and which component of the security software is tested, are explained in the test report.
     
  13. silverfox99

    silverfox99 Registered Member

    Joined:
    Jul 14, 2006
    Posts:
    204
    The way I see it, Webroot's 'rollback' function is a repair feature, not a detection feature. Often we seem to talk about 'rollback' as if it somehow mitigates a poor performance in a detection test, but it doesn't; it just supports the repair process post-infection. All AVs have 'repair' features - I'm not sure why Webroot needs a 'special' test to evaluate its 'repair' module(s)?

    Test-lab procedures can be applied to WSA the same as to any other AV. AVs tend to be better or worse at any given feature when compared to another AV (see the recent Symantec file-detection test).

    Some time ago, when Webroot was performing very poorly in detection tests, i.e. lower than a 95% detection score (outside the top 10 on AV-C if you include 'user dependent'), some users at Wilders defended the product on the basis of the journal system, which would 'roll back' changes at some unspecified future time post-infection, and this rollback function somehow meant a poor detection score in a test scenario was 'OK' - or at least that's what these posters implied. I never really got that. But some here think that's a valid argument, and that's fine. There was also an implication that Webroot had some mystic powers so amazing that no mortal test company would be able to test it properly. I never really got that either.

    My argument is this: even if some AV implemented a new 'mega clean' that repairs 100% of any infection, guaranteed, I would say fine, that seems like a useful feature. However, if its detection is below the top quartile, the AV is out for me, as there is no guarantee of detection response time post-infection (a critical issue, no?). A banking trojan could capture and send out data well before the AV detects it, so even if the machine is 'cleaned' post-infection at some unspecified time, there is no way of getting the lost data back. Correct me if I am wrong.

    I'm pleased Webroot has a 'roll-back' repair feature, which should be really useful if the user gets infected. An infection which, unfortunately, was more likely when using a Webroot product as opposed to some others in the top 10/quartile (see AV-C, Dec 2012).

    I have nothing against Webroot; I have tried the product and like its lightness and ease of use. I just thought its detection wasn't quite up to scratch.

    I will likely now be told that I do not understand how Webroot works, or that I should go away. Fair enough. But I think I understand quite well how Webroot works, and its functions can be tested no problem. (I do realise that Webroot have recently upped their game detection-wise, which is great.)
     
    Last edited: Apr 22, 2013
  14. TheWindBringeth

    TheWindBringeth Registered Member

    Joined:
    Feb 29, 2012
    Posts:
    2,171
    Yes, I would say that once the effects become external to the protected device the protection software can't reliably reverse them. So FAIL scenarios would include things like:

    - Undesired uploads to a server or desired uploads that didn't happen on a timely basis
    - Undesired manipulation of a peripheral device or desired manipulation that didn't happen on a timely basis
    - Undesired display of information to a human or desired display of information that didn't happen on a timely basis

    I imagine even some "internal" effects can be difficult to reverse. Can you reliably perform rollback type operations *without* losing any work? How many firmware systems are susceptible to bricking these days?

    I think it's reasonable to give a product partial credit for discovering and alerting the user to something after the fact, and also some partial credit for being able to roll back some of the effects. Theoretically, a rollback could attempt to undo an upload of data to an (open) remote FTP server by connecting to it and attempting to delete the associated file. However, since there is no way to be sure that the remote file hasn't been looked at and that any and all copies have been destroyed, I personally wouldn't give any partial credit for that.
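
    Just to illustrate what that kind of partial credit could look like, here is a small hypothetical scoring sketch in Python. The weights and the function name are entirely my own invention, not any lab's actual rubric:

    # Hypothetical partial-credit scoring for one malware test case.
    # The weights are arbitrary and illustrative, not any lab's real methodology.

    def score_case(blocked, detected_later, rolled_back, data_left_device):
        """Return a score between 0.0 and 1.0 for a single test case."""
        if blocked:
            return 1.0                      # stopped before execution: full credit
        score = 0.0
        if detected_later:
            score += 0.3                    # some credit for after-the-fact detection
        if rolled_back:
            score += 0.3                    # some credit for reversing local changes
        if data_left_device:
            score = min(score, 0.3)         # external effects can't be undone: cap the score
        return score

    # Example: missed on execution, detected and rolled back later, but data was uploaded.
    print(score_case(blocked=False, detected_later=True,
                     rolled_back=True, data_left_device=True))   # prints 0.3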
     
  15. Antiviruser

    Antiviruser Registered Member

    Joined:
    Jan 12, 2009
    Posts:
    5
    No, it should make it impossible for keyloggers to capture your keys.

    youtube.com/watch?v=uKMZ1Ukw_7I
     
  16. Techwiz

    Techwiz Registered Member

    Joined:
    Jan 5, 2012
    Posts:
    541
    Location:
    United States
    Not every company refuses to be evaluated by these online comparative (product test) sites because it's good publicity. I know that Norton/Symantec refuses on the grounds that the tests do not adequately evaluate their applications. There are a couple of different dynamics, though, that can make third-party tests and even self-tests unreliable, so I wouldn't place too much faith in these services to steer you to the so-called "best" possible product on the market.

    "Best" is very subjective, considering that the environment you'll be installing into will not match the test machine. Not to mention that these applications are being developed at different rates, by developers with different focuses in regard to core components and services. Should it have a HIPS? Should it have a sandbox? Should it scan proactively or reactively for/against threats? What samples are the signatures protecting you against? To what degree is the product being supported?

    When you consider the diversity within the market and ignore the sales pitches, it still comes down to what suits your needs. I'd take anything you read online from the developer and from these sites with a grain of salt. It's going to come down to what criteria you think the applications need to be weighed on. If you can find a site that shares your perspective, then it might be a good fit. Otherwise, you might need to draw on many sources of information to make a decision.

    My personal criteria (ranked in order of importance):
    (1) Compatibility with my system and other applications
    (2) Effectiveness (prevention, detection, disinfection)
    (3) Performance (supported, persistence, etc.)
    (4) Resilience (is this easily disabled by malicious agents?)
    (5) Appearance
     
    Last edited: Apr 22, 2013
  17. guest

    guest Guest

    Detection will become more and more obsolete as a technology, while prevention and virtualization features will grow.

    In a world with so much sophisticated 0-day malware (a recent one is able to delete any of its downloaded files, so isolating and analyzing it is impossible), people and businesses can't wait for a signature release that may take hours or days (not to mention false positives, as we saw with MBAM).

    Not so long ago, most products focused only on their AV engine for high detection/heuristics; then some vendors started using HIPS/BB, virtualization, and sandboxing, with remarkable results in prevention.

    Now a solution without this kind of protection is almost discarded right away as weak.

    About WSA: yes, its detection used to be very average (not to say weak ^^), but alongside that it has prevention features that compensated for the low detection rate and secured the user well enough.

    Comodo, Norton, and others now focus on prevention rather than detection.
     
  18. JerryM

    JerryM Registered Member

    Joined:
    Aug 31, 2003
    Posts:
    4,306
    So is it true that some of the AVs that refuse to be tested rely on voodoo? If they can't be tested, then what exactly are they claiming?

    If an AV cannot detect, how can it prevent? Or is this doublespeak too?

    Jerry
     
  19. No, WSA can be easily tweaked to use its HIPS/behavioral monitor on threat gates and to use the community check as a whitelist.

    I decided to put it on my wife's laptop after having tested Prevx4 with fresh malware samples during its private pre-beta, closed pre-beta, and public beta phases. It protected 100% against the many 0-days and malware samples I threw at it during those months.

    A few tweaks (summarized in the sketch at the end of this post):

    1. Monitored apps can't touch system objects, and all their file/registry activity is journaled (so in case of infection it can be cleaned up), so set all internet-facing apps (browser, media player, mail) to monitored instead of trusted.

    2. Heuristics limit the running of programs from external sources (Internet, USB) to executables which have been seen by a large part of the community (this basically prevents 0-day infections; see https://www.wilderssecurity.com/showpost.php?p=2218399&postcount=16 ).

    3. Increase identity protection so that browser changes are also prevented for HTTP traffic.


    Another benefit is that with an SSD or hybrid hard disk WSA really feels very light (I do not keep a small local copy of the AV blacklist database on the HD; instead I increased/maxed out the heuristics for offline usage).

    The only wish-list item I have is an option to automatically run untrusted/unsigned executables in the sandbox (when there is no connection to the Webroot servers).
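
    For readers who prefer a summary, here is the gist of those three tweaks expressed as a small illustrative Python snippet. The key names are invented for clarity; WSA is configured through its GUI, not through anything like this:

    # Illustrative summary of the tweaks above as a plain settings dictionary.
    # The key names are made up; this is not WSA's real configuration format.

    wsa_tweaks = {
        # 1. Journal internet-facing apps instead of trusting them outright.
        "monitored_apps": ["browser.exe", "mediaplayer.exe", "mailclient.exe"],

        # 2. Only run new executables from Internet/USB once a large part of
        #    the community has already seen them.
        "block_unseen_executables_from_internet_usb": True,

        # 3. Extend identity protection to plain HTTP traffic as well.
        "identity_protection_covers_http": True,
    }

    for setting, value in wsa_tweaks.items():
        print(f"{setting}: {value}")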
     
    Last edited by a moderator: Apr 23, 2013
  20. guest

    guest Guest


    Prevention acts before detection.

    Just a simple analogy:

    You are in charge of a military camp (the system). You set mines (HIPS/BB/web filters/sandbox) to prevent intruders (malware) from penetrating and sabotaging the camp, while sentinels and guards (AV signatures/heuristics) look for them inside the camp.

    If the guards catch them inside, they will arrest (quarantine) or kill (delete) them.

    If the intruders sabotaged (modified) some areas (files/services), camp maintenance will fix (clean/restore) the damage.
     
  21. silverfox99

    silverfox99 Registered Member

    Joined:
    Jul 14, 2006
    Posts:
    204
    Not sure that helps much. In a test environment, what we're asking is: with all that stuff you mention (sentinels, tanks, guards, cannons, etc.), how many times did the machine get compromised? Marketeers will always be keen to tell you how many tanks and soldiers they have, with the latest snipers and knights on horseback, but the relevant issue test-wise is how effective this defence is. Of course not all tests test this (a detection test tests detection), but if you look at, say, the MRG banking tests, they are testing whether the machine was compromised, i.e. did the trojan manage to record and send personal data out?

    Antiviruser: "No, it should make it impossible for keyloggers to capture your keys."


    Not sure I would use the word 'impossible' with any AV; the only way you can make infection impossible is to disconnect, turn your machine off, and leave it that way. I'd be wary of any AV vendor who made an 'impossible' claim.
     
  22. JerryM

    JerryM Registered Member

    Joined:
    Aug 31, 2003
    Posts:
    4,306
    I really do appreciate the comments and explanations, BUT they are not convincing.
    If malware gets through in a test, it doesn't matter whether the test organization fully understands how the AV works. If the "computers" were set up and attacks penetrated them, then all the excuses in the world don't make any difference.

    I am not very interested in how an application works; I want it to protect me. Why would a real-world test by AV-C or any other test organization not duplicate my world on the internet?

    Now I recognize that my knowledge of all this is minuscule, but I do understand that if an AV does not protect me I don't want to use it.

    So I am back to my original question: "How can we know an AV can protect us if it does not show up well in tests by professional test organizations?" I just don't buy the argument that it will protect, but that no one knows how to test it for proof.

    Regards,
    Jerry
     
  23. silverfox99

    silverfox99 Registered Member

    Joined:
    Jul 14, 2006
    Posts:
    204
    I get where you're coming from, Jerry, and broadly agree. To continue the military analogy: if you and your fellow soldier come across an enemy lying on the ground who has been killed in action, you say "He's dead" (infected/compromised). If your fellow soldier says "Yeah, but he's got the latest night-sight goggles, which give a massive performance enhancement in the dark", you could say, "Hmm, OK, but... he's dead". Your colleague could again point out another feature: "But he's got the latest Kevlar bulletproof all-in-one torso suit, which should assist in preventing bullet penetration (virus compromise), meaning he has a higher chance of survival in a dangerous environment." You might again say, "Yeah buddy, but look at him... HE'S DEAD!!!"
    The point being that whatever gazillion layers of defence an AV has or claims to have, the test is compromise/infection. Excuses after the fact are just that, or you could call it obfuscation.

    Read carefully the following MRG Comparative Efficacy Assessment from Feb 2013, in particular pages 4 and 5 covering methodology and test failure criteria.

    http://www.mrg-effitas.com/wp-content/uploads/2013/04/Comparative-Efficacy-Assessment-of-Wontok-SafeCentral.pdf

    If we are talking about WSA, MRG are saying that in respect of data capture prevention, WSA failed to prevent data capture 50% of the time (3 out of 6 tests), i.e. the PayPal account login data that was entered was captured by the trojan and could have been sent on.

    Going back to what Antiviruser said about WSA, apparently data capture should be 'impossible'. But it doesn't appear to have been so in this case?

    Not that WSA is any better or worse than the others, but I wonder if some have been so amazed by 'roll-back' or whatever other feature in the AV that they defend actual prevention/detection performance (tested as compromised/infected) that is not top quartile?
     
  24. guest

    guest Guest

    No product will totally protect you against malware, especially sophisticated rootkits/trojans (created by governments and multinational companies); a product will just reduce the risk of infection to a certain degree.

    Another analogy: an armored door is not made to stop a robber from entering your house, but just to hamper/delay the intrusion so someone may alert the authorities.

    What I want to say is that many people seem to think that anti-malware will protect them from everything, even from their own bad behavior; that is not possible. Even the best product with absolute protection will not protect someone with bad practices (running infected cracks/keygens/executables, visiting suspicious websites, etc.).

    A test commissioned and won by a product against other products... yes, if they say so...
    Tests done in VMs!!! (as if they don't have spare computers with a real system installed)...


    I don't say that WSA is bullet-proof and was cheated, but a commissioned test... to show "we are better than famous vendor X"... please...


    Of course, no one denies it.

    I don't believe in test labs; who knows what happens behind the curtain (aka $$$). Not to mention that no one will encounter more than 100 pieces of malware in his life (unless he looks for them), so test labs launching thousands of samples (some not even widespread) is quite ridiculous.

    I will answer you with this:

    Do you own a car?

    If yes, you have airbags, seat belts, reinforced structures, etc...
    The only proof you have that your model is safe is the crash tests done by the manufacturer itself... now, do you stop using your car?

    You can't have any real proof, since every system is different and used differently; at best you will get input from other users who claim they no longer get infected like before since they started using your product.

    Personally, I never trust test labs; I take their results as an "opinion" and "info" about a product. I just trust my own experience with it.

    I used to test some products with fresh samples; if they can stop at least 95% of them, I am satisfied, and I will then try to reduce the remaining percentage with additional products (check my sig).
     
    Last edited by a moderator: Apr 23, 2013
  25. JerryM

    JerryM Registered Member

    Joined:
    Aug 31, 2003
    Posts:
    4,306
    Hi guest,

    I am afraid that your analogy with safety devices in automobiles is not sufficient to prove your point. Those devices have been tested by reputable organizations and determined to work as promised, and experience in thousands of accidents nails it down.

    I disagree that one cannot have any real proof.
    The best test labs do in fact conduct tests that approximate real world operation in protecting from attempted penetrations. The fact that none is 100% perfect does not negate their tests and findings.

    I would agree that no AV is 100%, although it might be against the threats presented on a particular day. Again, that does not negate a ranking of effectiveness.
    I don't think there is such thing as 100% protection against any and all threats, but I want the best I can get within reason.

    If you only trust yourself, have you performed tests with as many samples as the testing labs?
    I know people who for years used the lowest ranked AV, but never got infected. They did not do risky things, but that does not establish that AV as one I would want to use.

    EDIT
    I just looked at the MRG test and I have confidence that the test results are correct.
    http://www.mrg-effitas.com/wp-conte...anking-and-Endpoint-Security-Report-20121.pdf

    I wonder what the arguments are from those who failed. Is it that MRG did not understand their products?

    Anyway, I have stated my case for whatever it is worth. If tests, such as those done by AV-C, show an AV to be ineffective, that is enough for me not to use it. To each his own. Thanks for the inputs.

    Thanks, again for the reply,
    Jerry
     
    Last edited: Apr 23, 2013