Why is my cloud better than your cloud?

Discussion in 'other anti-virus software' started by Pleonasm, Jul 6, 2009.

Thread Status:
Not open for further replies.
  1. Pleonasm

    Pleonasm Registered Member

    Joined:
    Apr 9, 2007
    Posts:
    1,201
    PrevxHelp, yes -- you are correct. I am making the assumption that “Symantec has exactly the same implementation and protection as Prevx” as you stated in post #70.

    Thus, the question still remains: does the larger Symantec userbase result in less risk exposure for the average Norton Internet Security 2010 user -- all else being equal? If not, why not?

    Raven211, you’re not “busting in.” Your comments (and those of all other forum members) are, of course, welcome and encouraged!
     
  2. PrevxHelp

    PrevxHelp Former Prevx Moderator

    Joined:
    Sep 14, 2008
    Posts:
    8,242
    Location:
    USA/UK
    I should have made that clearer :D I was referring to a hypothetical situation where Symantec would have the same implementation as us - they don't.

    If everything were equal - which it isn't - larger would be better, assuming the implementation is scalable and cost-effective. A word of caution: I'm unsure what Symantec has in terms of a free product (I believe none?), but offering a product free to users and having it adopted in massive quantities puts major strain on a company if it isn't fully scalable and its cost per user isn't low enough. We ran into this a while back with Prevx1 and have since redesigned everything to have a very low operating cost (despite the significant up-front cost of our servers).
     
  3. TonyW

    TonyW Registered Member

    Joined:
    Oct 12, 2005
    Posts:
    2,741
    Location:
    UK
    Here's another interesting article I found over at Viruslist.
     
  4. Pleonasm

    Pleonasm Registered Member

    Joined:
    Apr 9, 2007
    Posts:
    1,201
    PrevxHelp, now that we’ve established the fact that a larger userbase contributing to reputation ratings results in superior protection (all else being equal), the next question is whether Prevx can overcome the 5+ fold advantage that Symantec possesses in this respect to actually deliver superior detection of (and protection against) malware.

    We’ll need to await independent empirical reviews of Norton Internet Security 2010 and of Prevx in order to make that assessment -- especially since “all else” isn’t “equal.” Some aspects of the Prevx solution may be superior to those of Symantec, while facets of the Symantec approach may be superior to those of Prevx. (Competition is good for the industry and for users!)
     
  5. PrevxHelp

    PrevxHelp Former Prevx Moderator

    Joined:
    Sep 14, 2008
    Posts:
    8,242
    Location:
    USA/UK
    Comparing Symantec's reputation ratings and Prevx or Panda's centralized analysis is like comparing bicycles to supercars - the technologies have entirely different intentions, implementations, and results.

    This direct review of the core technology has already been done by PC Magazine, where we received (and still hold) the Editors' Choice award against Norton 360 v3: "Prevx detected 94 percent of the threats, more than any other product tested with this same collection. Norton 360 version 3 was next, with 92 percent, followed by Cloud Antivirus with 89 percent." - http://www.pcmag.com/article2/0,2817,2346861,00.asp

    From what I've seen, the Norton reputation warnings are not malware warnings, so users should treat them differently. Blocking a file via a "reputation" warning in Norton should therefore be tested differently - I don't think the technologies can be directly compared (correct me if I'm wrong, however), and I'd suspect Norton 360 v3 would have similar detection rates to NIS2010 if this is the only major technology change.

    The warnings produced by Norton sound like they require user education to answer, and more often than not users choose the wrong answer - especially if they've just downloaded a shiny new program they want to use.
     
  6. PrevxHelp

    PrevxHelp Former Prevx Moderator

    Joined:
    Sep 14, 2008
    Posts:
    8,242
    Location:
    USA/UK
    Another point on this - it does not contribute that heavily to additional protection. In the vast majority of cases, we detect malware either the instant it is seen by the first user or on the second user, and in nearly all cases within the first 5 minutes of an infection's life. Infections tend to be static in function (irrespective of any polymorphic layer, the function is still the same), so regardless of whether we get 2 reports or 2,000 reports, we will see the same thing.

    The only benefit Symantec would gain from a larger userbase, if we were using identical technology, is protecting against a threat a negligible amount of time sooner - not x users sooner. If a threat spreads to one new user per minute in the Prevx community, and they have 5x the identically-spread userbase, they would see the sometimes-"crucial" second user 48 seconds sooner.

    Not exactly the end of the world: we would find the file on the second instance as well, just 48 seconds later - but zero infected users later.
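
    A minimal sketch of that arithmetic, assuming a linear spread model in which infection reports arrive at a rate proportional to userbase size (the one-per-minute rate and the 5x ratio are the illustrative figures from above, not measured data):

        # Minimal sketch: linear spread model, illustrative figures only.
        PREVX_RATE = 1.0      # hypothetical: one newly infected user per minute
        USERBASE_RATIO = 5.0  # the 5x-larger userbase assumed above

        def seconds_until_second_victim(rate_per_minute):
            # The second sighting arrives after one spread interval.
            return 60.0 / rate_per_minute

        t_small = seconds_until_second_victim(PREVX_RATE)                   # 60 s
        t_large = seconds_until_second_victim(PREVX_RATE * USERBASE_RATIO)  # 12 s
        print(f"Second victim seen {t_small - t_large:.0f} s sooner")       # 48 s
        # Either way the file is flagged at the second sighting, so the
        # number of additional infected users is the same: zero.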

    Similarly, we focus on threats seen by our users, so if a threat has only been seen within the Symantec community, it is reasonable that we would not detect it, as there would be no need to protect against it. However, once the threat does reach our community, we will catch it in the same timeframe.
     
  7. Pleonasm

    Pleonasm Registered Member

    Joined:
    Apr 9, 2007
    Posts:
    1,201
    PrevxHelp, this review is woefully out of date, given the advances integrated into NIS10. I highly doubt it would be accurate to extrapolate testing results from Norton 360 Version 3 to NIS10.

    PrevxHelp, we can argue definitions -- but, from a functional perspective, the user is either prevented from downloading malware or is strongly advised not to do so. Protection is achieved in either case (assuming that the advice of Norton Internet Security 2010 is followed).

    PrevxHelp, I agree that some users will ignore the "unknown software - check back in a few hours" warning of NIS10. However, you can't fault NIS10 for "user error," in my opinion. And the alternatives are even more unattractive: allow the user to blissfully install "unknown software" without any warning (as is done by Prevx, I believe); or prevent the user from installing any "unknown software" whatsoever.

    PrevxHelp, since the dependent measures are malware detection and prevention, I don’t see any reason why the Prevx product and NIS10 could not be compared in a manner similar to the Norton 2010 Beta Benchmarks tests. Please correct me, however, if I am mistaken.

    P.S.: PrevxHelp, whatever Prevx may think about Symantec, please ask Mel Morris (the CEO of Prevx) not to author a blog post about Norton Internet Security 2010 similar to Prevx Suggests Panda's Cloud Antivirus Is Bloatware With A Fancy Name. It’s beyond embarrassing....
     
  8. PrevxHelp

    PrevxHelp Former Prevx Moderator

    Joined:
    Sep 14, 2008
    Posts:
    8,242
    Location:
    USA/UK
    In this case, we should develop a new product which warns on every new program, tells users to check back, receives 100% scores in every test, and blames wrong decisions on user error :)

    In fact, this fantastic technology has already been developed, so feel free to use it on your home system: http://nob.cs.ucdavis.edu/classes/ecs155-2005-04/outlines/2005-11-17.html (see the bottom half)

    Without getting horribly off topic (as we've hashed and rehashed this many times), the concept of an on-demand test like the one outlined in that post is fundamentally flawed: nearly all threats today exist for less than 24 hours, so testing against massive static sample sets says little about the effectiveness of the protection against new threats.

    Detection of the static list of 680,000 samples is trivial - in fact, I'm surprised that vendors do not achieve 100% on every one of these tests because in most cases, they are given the missed samples after the test.

    A test which results in 99+% scores is not indicative of the real world because obviously the thousands of threats bypassing AVs every day are not <1% of the true threat volume.

    Interestingly, if you read the entire article, you'll note:

    "All tests were performed on July 6, which is worth noting since Symantec, like everyone these days, is adding "in the cloud" detection for parts of their service. AV-Test says that the latest available AV updates were from July 1, although that doesn't seem to have mattered much."

    While our CEO worded it bluntly, emotions run high when a competing product loudly claims to be the first and is obviously not (and they knew they weren't, as explained in the post). A number of articles picked up on this after the fact (we were contacted by numerous news agencies) and I think in the end the public was made aware of the truth. Granted, the blog post was referring to the beta version and I'm certain that Panda will be streamlining future releases by removing beta code/optimizing GUI components which should cut down on their size and improve performance.
     
  9. raven211

    raven211 Registered Member

    Joined:
    May 4, 2005
    Posts:
    2,567
    What's new

    Download Insight/Quorum

    SONAR2


    Maybe you should start reading the posts, because I wrote that Quorum is not the only thing that's new - that's just what's documented so far. I also told you not to oversimplify things and then make "attacking assumptions".
     
  10. TonyW

    TonyW Registered Member

    Joined:
    Oct 12, 2005
    Posts:
    2,741
    Location:
    UK
    This is why I don't think you can really compare Symantec's offering against cloud-only products, because of the various components being used in their program. Like I said before, if Symantec only used the cloud, we could. (In a few years' time, most AVs may turn to using the cloud exclusively as signature bases get too large to host locally, as hinted in the Viruslist article I posted at #78.)

    Pleonasm believes we should be able to compare NIS2010 against cloud-only offerings, but isn't it like comparing apples and oranges?

    With reference to the NIS2010 benchmark tests quoted above, the article does say that "AV-Test only tested the classic anti-malware functions described above. Norton Internet Security does much more, but they have not yet tested the newer functions." This suggests to me they haven't yet tested the cloud component extensively; it looks like the local signatures kicked in during their tests.
     
    Last edited: Jul 16, 2009
  11. PrevxHelp

    PrevxHelp Former Prevx Moderator

    Joined:
    Sep 14, 2008
    Posts:
    8,242
    Location:
    USA/UK
    I think our self-admitted fanboy is really reading my posts with the wrong face :doubt: I don't see where I've posted any "attacking assumptions", but please let me know where so I can avoid them in the future.

    You've said that the reputation ratings cause a different prompt, so wouldn't it be logical that if the file wasn't detected as malware then it shouldn't be counted as a malware detection?

    Also, my post said: "I'd suspect Norton 360 v3 would have similar detection rates as NIS2010 if this is the only major technology change."

    Note the "if" - I'm not claiming to be an expert at Symantec's products. I haven't heard anything about SONAR 2 and this thread is about the cloud protection. Unless Symantec has moved SONAR 2 to the cloud, I don't think it is relevant in a cloud discussion - it is self-evident that a multi-component system might block more threats, assuming the threats have different infection vectors (like a file coming in via a spam email that could be caught by a spam filter).
     
  12. Pleonasm

    Pleonasm Registered Member

    Joined:
    Apr 9, 2007
    Posts:
    1,201
    PrevxHelp, looking from the "outside in," I need to question this statement. Even if the assertion were only half true, we should expect to see massive differences in malware detection/prevention performance between Prevx and Symantec. Since the latter is clearly not the case, the former must be questioned.

    PrevxHelp, that may be true for Prevx, but based on Symantec’s description of how they are layering this technology across all components (e.g., firewall, intrusion prevention, heuristic detection, etc.) of Norton Internet Security 2010, reputation ratings may indeed contribute “heavily to additional protection.” If your hypothesis is true, then we should expect to see little difference in the malware detection/prevention performance between Norton Internet Security 2009 versus 2010. To the extent that a difference is delivered, then it would suggest that your hypothesis is false.

    PrevxHelp, you could do so -- but the obvious requirement is balancing the detection/prevention of malware against misses (i.e., failure to detect a true malware instance) and false positives (i.e., incorrectly identifying an instance as malware). A good solution will maximize hits and minimize misses and false positives.

    This is a clear case where the size of the userbase matters. Since Symantec has such a huge size advantage, it is much less likely (using reputation ratings) to warn about “unknown software” because it has visibility to a larger population of the installed software globally. If Prevx were to adopt the same strategy to warn about “unknown software” based upon reputation ratings, the average Prevx user would see 5+ times more of these prompts than the average Symantec user.

    PrevxHelp, your sense of humor is appreciated! :)

    TonyW, I see your point. However, from my perspective, you can compare any two products -- provided that you are using the same outcome measures and testing methodology, even if the technology underlying the products may not be the same. For example, you can compare the acceleration of a hybrid versus a traditional vehicle, since the outcome metric (i.e., distance/time) is the same despite the underlying differences in technology. Correspondingly, in the case of anti-malware utilities, the key outcome metrics are malware detection and prevention.

    TonyW, I too wondered about this. While the statement isn’t clear, it suggests that the reported performance of Norton Internet Security 2010 would have been even better if the “newer functions” were employed. For example, did AV-Test disable SONAR in their testing?

    PrevxHelp, this is an interesting observation -- and a "grey" (not "black/white") area. My own suggestion is a "compromise": when a utility correctly warns about a file as a possible threat and that file is actually malware, it should get one-half of a "point" added to its malware detection test score; and when it warns about a file as a possible threat which is in fact safe, it should get one-half of a "point" subtracted. A reasonable solution?
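
    A minimal sketch of that scoring rule, purely to make the proposal concrete (the half-point weights are the hypothetical values suggested above, and the verdict labels are invented for illustration - this is not any test lab's actual methodology):

        # Hypothetical scoring rule: full credit for blocking actual malware,
        # half credit for merely warning on it, and a half-point penalty for
        # warning on a file that is in fact safe.
        def score_event(verdict, is_malware):
            if verdict == "warned":
                return 0.5 if is_malware else -0.5
            if verdict == "blocked" and is_malware:
                return 1.0
            return 0.0  # a miss, or allowing a safe file, scores nothing

        # Invented sample run: one block, two warnings (one correct), one miss.
        events = [("blocked", True), ("warned", True), ("warned", False), ("allowed", True)]
        print("Score:", sum(score_event(v, m) for v, m in events))  # 1.0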
     
  13. raven211

    raven211 Registered Member

    Joined:
    May 4, 2005
    Posts:
    2,567

    "You've said that the reputation ratings cause a different prompt, so wouldn't it be logical that if the file wasn't detected as malware then it shouldn't be counted as a malware detection?"

    - Then the same thing goes for your software, if that makes it fair.


    I'll quote myself: "SONARv2 (behavior analysis - and not the old one; it's been vastly improved as well. Quorum is not alone in being new, in case you thought so)"


    The point still stands: the components of Norton all work together, one way or another, and most importantly with Quorum, which makes it hypothetically possible to count them as part of Quorum. SONAR works with Quorum, AP works with Quorum, Anti-Spam works with Quorum, Download Intelligence works with Quorum - they all work with Quorum; they're connected.
     
    Last edited: Jul 16, 2009
  14. PrevxHelp

    PrevxHelp Former Prevx Moderator

    Joined:
    Sep 14, 2008
    Posts:
    8,242
    Location:
    USA/UK
    There isn't much margin between 94% and 92%, but Symantec can catch most of the threats in these tests using other methods. What we would really need to do to see the effectiveness side by side is survey users of both products to see what is left over. We guarantee our malware removal, which shows a very high level of confidence in our ability not to miss threats. The fact that this aggressive and potentially catastrophic business model is successful proves that our detection, cleanup, and prevention are extremely effective in the real world.

    I should have clarified what I was referring to. Your post was:

    "PrevxHelp, now that we’ve established the fact that a larger userbase contributing to reputation ratings results in superior protection (all else being equal)"

    to which I responded:

    "Another point on this - it does not contribute that heavily to additional protection." where "it" refers to the userbase, not the reputation ratings. I don't disagree that reputation ratings have a positive impact on protection.

    To clarify (as text can easily be unclear), we would produce the same number of prompts as Symantec (assuming the metrics to prevent the prompt are the same) but the portion of users receiving the prompt would be 5x higher, relative to the entire userbase.
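
    To put numbers on that, a minimal sketch under the same assumption (identical prompt criteria for both products; the userbase sizes and prompt count are invented to illustrate the 5:1 ratio, not real figures):

        # Illustrative only: with identical prompt criteria, both products emit
        # the same absolute number of unknown-file prompts, but the fraction of
        # each userbase seeing one differs by the size ratio.
        PREVX_USERS = 1_000_000           # hypothetical userbase
        SYMANTEC_USERS = 5 * PREVX_USERS  # assumed 5x larger
        PROMPTS = 10_000                  # same unknown files circulating for both

        prevx_share = PROMPTS / PREVX_USERS        # 1.0% of users prompted
        symantec_share = PROMPTS / SYMANTEC_USERS  # 0.2% of users prompted
        print(f"Prevx {prevx_share:.1%} vs Symantec {symantec_share:.1%} "
              f"({prevx_share / symantec_share:.0f}x more likely per user)")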

    However, our reputation relies on a number of other measures on top of what Symantec uses, including logic to analyze the behavior of the program, so we reduce prompting algorithmically rather than relying on pure userbase size.

    I think this would be reasonable, assuming there are an equivalent number of good programs just as likely to produce FPs as there are malicious programs likely to produce legitimate detections - not an easy balance to strike fairly.
     
  15. PrevxHelp

    PrevxHelp Former Prevx Moderator

    Joined:
    Sep 14, 2008
    Posts:
    8,242
    Location:
    USA/UK
    I agree - it does and it should.

    Yes, I'm not disputing this point at all, and I'm unsure where I made it seem I was unaware of it. Similarly, we use our reputation ratings in all aspects of malware detection and prevention.
     
  16. raven211

    raven211 Registered Member

    Joined:
    May 4, 2005
    Posts:
    2,567
    Okay, great that we came to an agreement on these points among others.
     
  17. raven211

    raven211 Registered Member

    Joined:
    May 4, 2005
    Posts:
    2,567
    Again, SONAR - which works with Quorum, among others - analyzes that behavior to hopefully determine an automatic action straight away.

    Contrary to your Age/Spread criteria, a legitimate program with a small user base can, as you mention, simply pass through (since it's not deemed malicious), whereas your Outer Community detection can trigger regardless of whether a file is deemed safe, unless that protection measure is turned off completely. Whether that community check runs before or (by default) after the heuristic analysis, it will still lead to prompting the user because the file has been seen on too few systems. (I won't bring up the prompts looking identical apart from the text, since you've already told me personally that this will change in a future release.)
     
  18. PrevxHelp

    PrevxHelp Former Prevx Moderator

    Joined:
    Sep 14, 2008
    Posts:
    8,242
    Location:
    USA/UK
    I believe you have a few points incorrect - if a file is known good, we will never produce a warning, even if it has only been seen by a couple of users. Age/Spread is identical to Community.OuterEdge - the warning changes depending on where you receive it (during a scan or in realtime), but it is the same engine.
     
  19. Page42

    Page42 Registered Member

    Joined:
    Jun 18, 2007
    Posts:
    6,941
    Location:
    USA
    I can see you have a lot to learn... quite likely things they will never teach you at university. First off, it doesn't matter whether you want people to take advantage of something you said. The same rules apply to everyone. Post less, or think more carefully about what you say first, if you don't like hearing from others about what you write. Secondly, it also does not matter whether you decide to admit to being a fanboy or not... in your case, I'd venture to guess that it is evident to a great many members here that you are one. I find it amusing that it is okay for you to say it but you don't like hearing it from others. ;)
     
  20. raven211

    raven211 Registered Member

    Joined:
    May 4, 2005
    Posts:
    2,567
    *Sigh* ...
     
  21. raven211

    raven211 Registered Member

    Joined:
    May 4, 2005
    Posts:
    2,567
    Okay, great - point taken. Then my point from the previous post would be that Symantec acts in the same way.
     
  22. PrevxHelp

    PrevxHelp Former Prevx Moderator

    Joined:
    Sep 14, 2008
    Posts:
    8,242
    Location:
    USA/UK
    Indeed, I was not trying to be condescending or manipulative in my use of his description; I was merely requesting clarification of the reasoning behind the attack-sounding posts.

    I'm sorry if I was offensive but I'm also not psychic - if someone says "Call me George" and then gets mad if I call them "George" I may as well give up holding conversations and should just revert to using pronouns :D

    But enough OT :)
     
  23. lu_chin

    lu_chin Registered Member

    Joined:
    Oct 27, 2005
    Posts:
    295
    As I said before in this thread, information on how to determine whether one cloud is better or worse than another is "clouded" by pulling in other components. It reminds me of debates about so-called firewall leaktests, where product A would do better than B because product A had other components like a HIPS. Some might argue that only the end result matters, be it detecting/blocking malware or passing every single leaktest, but then the comparison would be much skewed. When it is already assumed that one cloud is equal to another, and then other components or a much larger user base are tacked on before doing further comparisons, wouldn't the result be predictable from the start? Maybe in a few years, when vendors move most if not all of their malware detection technologies to the cloud, more commensurate comparisons can be made.
     
  24. Pleonasm

    Pleonasm Registered Member

    Joined:
    Apr 9, 2007
    Posts:
    1,201
    PrevxHelp, yes -- that’s the point. Given the similarity of performance, it’s difficult to argue that Prevx’s technology is a “supercar” and Symantec’s is a “bicycle” (post #80) when the observed “speed” of each is essentially equivalent.

    Of course, with the introduction of Norton Internet Security 2010, the "speeds" will need to be measured anew....

    PrevxHelp, yes, I agree. A side-by-side comparison to identify (1) threats missed by Prevx and detected by Norton Internet Security 2010 versus (2) threats missed by NIS10 and detected by Prevx would be quite insightful. I suspect that someone inside of Prevx has already performed this test?

    PrevxHelp, ah, my apologies. “It” was ambiguous, and I misinterpreted your intention.

    So, to summarize, your perspective is that a larger user base contributes to improved reputation ratings; but reputation ratings do not “heavily” contribute to improved protection?

    PrevxHelp, yes, I agree. Stated differently, the average Norton Internet Security 2010 user is 1/5th as likely to experience an “unknown software” prompt based upon a reputation rating as compared to the average Prevx user. Across the entire userbase, the number of prompts will be the same for both NIS10 and Prevx; but, the experience will be superior for an average user of NIS10 since the likelihood of receiving the prompt will be considerably less than that of the average Prevx user.

    Of course, as you noted, in practice a decision to flag a file as "safe" isn't based solely upon the reputation rating. But, to the extent that a reputation rating is important in classifying a particular sample, the company with the larger userbase will have an important advantage, as measured by how quickly the reputation rating matures and by the user experience.

    * * * * * * * * * * * * * * *

    P.S.: By the way, I just wanted to again express my appreciation of your willingness to engage in these conversations. :thumb: I can’t speak for others, but I certainly learn from the interactions. (And, I wish that a representative from Symantec was involved in this thread to the same extent.)
     
  25. PrevxHelp

    PrevxHelp Former Prevx Moderator

    Joined:
    Sep 14, 2008
    Posts:
    8,242
    Location:
    USA/UK
    The similarity of performance is dependent on the threat volumes. Prevx's and Panda's solutions are more future-looking and cover much more depth than Symantec's; so while both the supercar and the bicycle get you from point A to point B, and the bicycle may be effective now, it won't be in the future unless major changes are made.

    NIS10 is still changing and we have not performed extensive tests against it - our current focus is testing and maintaining compatibility alongside it.

    Reputation ratings may contribute to improved protection but the size of the userbase does not affect them that dramatically. To quote key points of my original post:

    "We detect malware either on the instant it was seen by the first user or on the second user, and in nearly all cases within the first 5 minutes of an infection's life. Infections tend to be static in function (irrespective of any polymorphic layer the function is still the same) so regardless of if we get 2 reports or 2,000 reports, we will see the same thing.

    The only benefit which Symantec may have with a larger userbase if we were to be using identical technology is to protect against a threat a negligible amount of time sooner - not x users sooner. If a threat spreads to one person per minute in the Prevx community, and they have 5x the identically-spread userbase, they would see the sometimes-"crucial" second user 45 seconds sooner.

    Not exactly the end of the world, being that we would find the file on the second instance as well, it would just be 45 seconds later but 0 infected users later."

    However, for whitelisting/identifying good samples, you are correct in saying that the company with the larger userbase has the benefit, simply because it has a wider view of which programs are widely used, assuming the underlying technology is identical.

    That said, this benefit only applies to programs very early in their life cycle, and our centralized analysis, which is not based on file hashes, is able to bridge the gap created by their larger userbase. Frankly (and logically), very few users run software that is not used by large numbers of other users, so the reputation warnings become irrelevant for any semi-popular software.

    The only cases where Symantec's or our implementation is likely to cause FPs are beta software, software written locally by a developer who uses the security solution, and software which is extremely unpopular (e.g., an obscure program found while browsing through Download.com).

    The average, everyday home user is highly unlikely to go and install a new piece of software that hasn't been seen by hundreds of people already.

    My pleasure :) I think there are a lot of misconceptions about the ominous entity aptly named "the cloud", and it's worth any effort possible to clarify them.
     