Do statistics count ?

Discussion in 'other software & services' started by eyes-open, Jan 17, 2007.

Thread Status:
Not open for further replies.
  1. eyes-open

    eyes-open Registered Member

    Joined:
    May 13, 2005
    Posts:
    721
    Have you ever played 'Top Trumps'™ ?

    I think trading cards have a similar sort of layout - a graphic of the subject and then some facts & figures that accompany the layout.

    In 'Top Trumps'™ a category from the card is chosen and then each player lays down their card ...... and the player whose card has the highest score in that category wins the round.

    I mention the above because every now and then, a comparison thread starts and it's as if just such a game has begun. The subject of the deck is chosen - it might be anti-spyware, registry cleaners, HIPS programs - whatever... and pretty soon more players appear, fresh posts follow and the cards begin to be dealt face up... Just like with the real game, you know everyone is looking on eagerly to see how the value of their card stacks up against other players' offerings. Each of us is hoping our card is a high scorer. That way we get to keep enjoying our hold on the card that represents our software choice, happy that for now at least, it's a good one... becoming slightly unsettled if the card representing our software isn't played or, worse still, is played... and scores badly.

    Where the simile fails is that often the cards are simply played, the players stepping back to allow the others to look, and then everyone moves on. It takes a moment or two to realise what's missing - actual, definable values with which to judge which card wins which hand ? Why is your software definitively better than the others ?

    If for instance, you were going to create a set of statistics for product comparison - you might think the following categories worthy of inclusion:-

    Size of install, CPU/RAM usage, GUI, cost, support and of course, last but not least, efficacy (e.g. detection/removal).

    How is efficacy even measured in a way that each software type is compared on a like-for-like basis ? Where do you go for your comparison statistics ?

    In some threads, despite them being contributed to by experienced folks, the best information you might get is little more than a hint, "light on resources", "great support", "good heuristics" etc ......... but how often, software for software, do you see this defined in a way that can be compared and tested ? Do you really get enough information to build the sort of stats that would make it possible for you to create a trading card that's worthwhile ? If not how are you selecting your software ?

    Do you even have an expectation or standard to hold products to - or do you simply accept that when someone says something's good, it's good - and that as long as a product is benefiting from a momentum of approval, the statistics don't matter ?

    How sure are you that the cards you've got still stack up well .......... and that your statistics are up to date ?
     
  2. Mrkvonic

    Mrkvonic Linux Systems Expert

    Joined:
    May 9, 2005
    Posts:
    8,697
    Hello,

    Major factor to take into consideration = fun.
    If you're having fun, you have chosen wisely. After all, that's the purpose of life of intelligent creatures like cockroaches and barnacles and security geeks.

    For me, the choices are:

    Never detection/removal, because if the damn machine gets to the point where it thinks it's smarter than me, I'll blowtorch it. See who's smarter then.

    Advice and comparisons are all nice and well, but they should be taken into consideration with a certain amount of common sense, which varies from one person to another, be it the person giving advice or the one taking it.

    It comes down to personal experience. And rather than trying to pin it down in numbers, it can easily be summed up as an overall feeling - when the machine purrs nicely and quietly and all the pieces fall into place.

    Great support, heuristics etc... really differ from one person to another. Take any 'I need a firewall plz' thread. You'll get 43 answers from 32 different people, each one convinced his choice is the best. And they all really are.

    Statistics don't matter for me. It's about experience. Fun.

    Mrk
     
  3. NGRhodes

    NGRhodes Registered Member

    Joined:
    Jun 23, 2003
    Posts:
    2,331
    Location:
    West Yorkshire, UK
    Stats - can't rely on them for anything; even my own data becomes outdated and/or erroneous as circumstances change.

    There is no best.
    There is only best for your needs.

    Also, no matter how good a product someone claims it is, I would always test it.
    What runs well on someone else's machine might run terribly on your own.
     
  4. Pedro

    Pedro Registered Member

    Joined:
    Nov 2, 2006
    Posts:
    3,502
    No one can digest it all. That's because we're not perfectly rational beings. So it's only natural that so many opinions arise, since so many products are out there, with different features/approaches. One cannot read and test it all and claim to know what the best product is. The concept of bounded rationality (economics) describes this.

    Also check the wiki link about Herbert Simon, the author of the concept. For further reading, check transaction costs, from Williamson and Coase (related). This is only if you really want to delve into this part of economics.
     
  5. eyes-open

    eyes-open Registered Member

    Joined:
    May 13, 2005
    Posts:
    721
    Cheers all,
    That's where it kinda tickles me and I agree, it does seem to be how it works for a huge swathe of people, myself included, to a degree, but isn't it just a little weird ?

    I mean, beyond whether or not you believe a sort of program is necessary, if you do opt for choosing one, bringing it down to how you feel about it... how does that work ? These are just computer programs, aren't they, based on processing logic... just 1's & 0's, programmed to switch on and off according to instructions. How then do feelings become the yardstick for selection ?

    I get that we say mileage may vary due to differing environments. Okay, that results in one individual experience not necessarily being a model for the next person's - I can see that creates the rationale for then producing a scale based on the sum of users' experiences.

    It's all an understandable progression - and it makes sense too, that as assessments move away from the individual experience to an average experience, language is softened and made more accessible to accommodate a potentially larger user base. It all aids the warm & fuzzy feelings that can then create a sense of familiarity. That in turn can become brand loyalty and make it less likely a user will look around at other options.

    On the other hand, lots of us have seen what happens when the user interface is suddenly changed and some users become unsettled - even though the actual engine may have improved - it is no longer what is familiar and once loyal users start to look around again.

    It all appears to flow... mostly I suppose, it remains reasonably safe because the software we're choosing from has already attained a certain degree of credibility through testing & validation elsewhere. Still, if a computer developed its AI to a point where it could make & express such decisions - "I chose your security cos it made me purr" would be really cool - but not necessarily all I had hoped for ..... :)
     
  6. Ice_Czar

    Ice_Czar Registered Member

    Joined:
    May 21, 2002
    Posts:
    696
    Location:
    Boulder Colorado
    I don't focus on the minutiae of direct software comparisons within a given class,
    but rather the synergistic effect of the different classes\technologies combined, in comparison to the emerging threats.

    Any individual test is, in that light, invalid - at best describing a current state and the past.
    Its value is simply to point out known flawed products\strategies, not to guarantee against emerging threats.

    Most of us are only vaguely aware of the mechanics of programming with just empirical knowledge and interpreted analogies to work from. We get our "knowledge" of emerging threats the same way and judge the analogies against each other.

    That's why I'm more interested in "best practices" and gaining a deeper understanding of actual programming \ exploitation.

    I think of it as avoiding single points of failure.


    interesting stuff ;)
    bears more research
     
    Last edited: Jan 18, 2007
  7. eyes-open

    eyes-open Registered Member

    Joined:
    May 13, 2005
    Posts:
    721
    Hi Ice_Czar :)

    I'm not 100% clear on where you stand. Are you saying that through the medium of 'best practices', as you put it, you have chosen to work in a different way ? Maybe preferring emerging practices such as maintaining a clean install through the use of virtual machines, online file checking, aligned with what's termed safe hex, partitioned data storage etc., to a degree where the value of additional installed security software is, as a whole, negligible to you... Therefore any assessment of onboard security software becomes redundant ?

    If not, and you are still maintaining a fairly traditional response, then not to miss the point entirely I hope, but if you say you have no interest in, as you put it, the 'minutia of direct software comparisons within a given class' - how do you ensure the integrity of the synergistic effect if you haven't first given any thought at all to the relative value of the components ? Synergism might sometimes offer unexpected results, but in this context it's unlikely to be created with the aim of producing random results.

    If that's true, then it seems equally unlikely that you wouldn't at least have a minimum standard for something you deliberately install. Where that is the case, I would expect you to have a way of measuring the relative worth of any component.
     
  8. Ice_Czar

    Ice_Czar Registered Member

    Joined:
    May 21, 2002
    Posts:
    696
    Location:
    Boulder Colorado
    I think you summed up my position in the first part rather well.

    I feel comparative testing is of limited utility -
    it weeds out the truly old and weak, like a pack of wolves keeps a herd of caribou healthy;
    of course this pack of wolves wears both white & black hats :p

    But using it as a justification for purchasing one similar product over another, because it detected a few more bits of nasty code, presupposes it's the sole defense for that vector.

    As you point out, I think redundancy is exactly the point, but not redundancy of technologies (multiple apps employing the same signatures and similar heuristics or behavior parameters).

    Rather, as you so eloquently put it: cutting down the attack vectors up front, tricking or breaking the malware that does make it past the "traditional" detection phase, and above all indirect detection, where the actual subversion and what it is isn't as important as knowing something has gone wrong and a recovery strategy needs to be quickly implemented.

    I think it's better to assume you're going to have failures and subversions.
    The components that fill the various niches in your strategy need to be robust, growing and healthy, but they don't necessarily need to be the alpha leader.
     
  9. eyes-open

    eyes-open Registered Member

    Joined:
    May 13, 2005
    Posts:
    721
    Cheers Ice_Czar - that's clear :thumb:
     
  10. Rmus

    Rmus Exploit Analyst

    Joined:
    Mar 16, 2005
    Posts:
    3,943
    Location:
    California
    Security, first and foremost, is a state of mind. If you are convinced that you need a certain type of protection, then you will get it.

    The important thing is to set up a strategy based on your perceived risks, then implement the tools you need. That puts your mind at peace. That is the important factor.

    In seeking advice, for every person who has had a bad experience with a product, others will claim the opposite. Every system is different and reacts differently to any given product.

    Careful users run their own tests and draw their own conclusions to compare with those of others.

    By and large, most products in any given category provide good protection. It becomes a matter of choice based on factors that others have already mentioned.

    EDIT: I meant to add that you bring up a good point about statistics. Not being good in math, I get headaches trying to sort out statistics, so I usually don't pay attention to them!

    regards,

    -rich

    ________________________________________________________________
    "Talking About Security Can Lead To Anxiety, Panic, And Dread...
    Or Cool Assessments, Common Sense And Practical Planning..."
    --Bruce Schneier
     
    Last edited: Jan 19, 2007