AV-Comparatives has added the ValidEdge Malware Intelligence System

Discussion in 'other anti-virus software' started by trjam, Nov 8, 2010.

Thread Status:
Not open for further replies.
  1. trjam

    trjam Registered Member

    Joined:
    Aug 18, 2006
    Posts:
    9,057
    Location:
    North Carolina
I just found this; very cool, and just another step in how IBK continues to ensure the most accurate testing around. :thumb:
     
  2. Cudni

    Cudni Global Moderator

    Joined:
    May 24, 2009
    Posts:
    6,956
    Location:
    Somethingshire
    This should be fun
     
  3. funkydude

    funkydude Registered Member

    Joined:
    Apr 5, 2004
    Posts:
    6,855
So will this mainly be used to ensure the files in the test set are legitimate malware, or something else?
     
  4. Noob

    Noob Registered Member

    Joined:
    Nov 6, 2009
    Posts:
    6,468
Probably, but I think they will use it on their small-scale tests, because I can't believe they will analyze their HUNDREDS OF THOUSANDS of malware samples one by one :blink:
     
  5. ronjor

    ronjor Global Moderator

    Joined:
    Jul 21, 2003
    Posts:
    57,794
    Location:
    Texas
    We'll see. :)
     
  6. Noob

    Noob Registered Member

    Joined:
    Nov 6, 2009
    Posts:
    6,468
Hahahaha, but according to that article it gives a thorough analysis of the malware, so a person still has to decide whether it's valid malware or not? :rolleyes:
     
  7. dr pan k

    dr pan k Registered Member

    Joined:
    Nov 22, 2007
    Posts:
    204
They make it sound so simple, as if a new cloud-based AV could be set up in days with this technology.
     
  8. scott1256ca

    scott1256ca Registered Member

    Joined:
    Aug 18, 2009
    Posts:
    144
Almost pure PR. There is nothing in there that tells you how it distinguishes malware from any other piece of software that might need high-level access. I'm not saying it doesn't work, just that the PR piece doesn't provide any answers on how it separates the wheat from the chaff.

    So given that, what criteria will AV-Comparatives apply to determine new malware? Does this become the de facto standard? If this device says it is malware, then it is malware? If this device says it is safe, then it is safe? How do we know it is accurate?

    Maybe some testing organization will test it and tell us the detection rate, and number of false positives :)
     
  9. RejZoR

    RejZoR Registered Member

    Joined:
    May 31, 2004
    Posts:
    6,426
Basically it's something most AV companies have already used for ages: a dedicated malware analysis unit which analyzes every sample in depth, runs it through a massive criteria engine, and sorts out the junk. Most samples can be analyzed by machine; anything inconclusive is then sent to human analysts for further inspection. That's how all companies work these days if they want to deal with the massive numbers of samples gathered from all over the web. Or do you people really expect human analysts to check 10,000+ new samples a day by hand?
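    The triage flow described above (machine verdict first, inconclusive samples escalated to humans) can be sketched roughly like this. Everything here is hypothetical: the indicators, the thresholds, and the scoring are toy stand-ins, not how ValidEdge or any real vendor actually scores samples.

    ```python
    def automated_verdict(sample: bytes) -> float:
        """Toy stand-in for sandbox/static analysis.

        Returns a maliciousness score in [0.0, 1.0]."""
        # Pretend a few crude indicators each add to the score.
        indicators = [b"CreateRemoteThread", b"keylog", b"botnet"]
        score = sum(0.4 for i in indicators if i in sample)
        return min(score, 1.0)

    def triage(samples):
        """Sort samples into malware / clean / needs-human-review."""
        malware, clean, for_analysts = [], [], []
        for s in samples:
            score = automated_verdict(s)
            if score >= 0.8:        # confident: malicious
                malware.append(s)
            elif score <= 0.2:      # confident: clean
                clean.append(s)
            else:                   # inconclusive: escalate to a human
                for_analysts.append(s)
        return malware, clean, for_analysts

    malware, clean, review = triage([
        b"hello world",
        b"keylog + botnet + CreateRemoteThread",
        b"calls keylog routine",
    ])
    ```

    The point of the two thresholds is exactly the split RejZoR describes: only the middle band, where the machine isn't sure, ever reaches an analyst.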
     
  10. dr pan k

    dr pan k Registered Member

    Joined:
    Nov 22, 2007
    Posts:
    204
Usually this technology costs a lot in resources and investment, and only the major players have this kind of expertise.

    The point is that if a specific company puts well-conceived software on the market, with a high detection rate and few false positives, then you have yourself a new cloud AV engine. If the price is fair, lots of smaller companies could flood the market with cloud-based AVs.
     
  11. PJC

    PJC Very Frequent Poster

    Joined:
    Feb 17, 2010
    Posts:
    2,959
    Location:
    Internet
    :D :thumb:
    No way!
     
  12. Baserk

    Baserk Registered Member

    Joined:
    Apr 14, 2008
    Posts:
    1,317
    Location:
    AmstelodamUM
    So it's the (seemingly exponential?) growth in malware samples that requires AV-C to have this ValidEdge MIS 1100, in order to be able to continue tests based on large enough and especially relevant sample sets?
     
  13. Pedro

    Pedro Registered Member

    Joined:
    Nov 2, 2006
    Posts:
    3,502
Even if they need to re-check each detected sample to validate it for the test, they still save time and end up with a bigger (and more relevant) test as a result.

    Say they have time to check 100 samples by hand for each test.
    Before, they would find 30 invalid samples, leaving 70 malware samples to test AVs against.
    Now with this system (patent pending :gack: ), say 20 invalid samples are detected automatically. They only need to hand-check 100 - 20 = 80 samples. That leaves them with the same 70 malware samples, plus time to check 20 more.

    Assuming, of course, the system is actually good :D
     
  14. scott1256ca

    scott1256ca Registered Member

    Joined:
    Aug 18, 2009
    Posts:
    144
In the existing system, a user or employee finds a previously unknown questionable file and uploads it to the vendor, where it is analyzed (maybe by machine) and a new sig gets generated, which the end user receives a few hours later. Explain to me please how your suggestion makes it easier for a new company? When the user or employee finds the questionable file, it still needs to get back to the vendor, yes? It still needs to get analyzed, and the end user then needs to get alerted to it.

    So if the user plugs in a USB stick and tries to execute some malware, traditional anti-malware checks a sig and quickly says yes or no. Since the file is unknown, it probably says yes and you get infected. But even with a cloud AV, the file is STILL unknown. So the malware must still be uploaded to the vendor and analyzed, and then you get an answer and can execute or deny the new questionable file. Since the file has to get to the vendor either way, where does the cloud help? You could upload directly to the vendor with no cloud and wait for an answer. I don't see the improvement, except perhaps in the time it takes to roll out a new signature to users via the cloud instead of sig database updates.
     
  15. scott1256ca

    scott1256ca Registered Member

    Joined:
    Aug 18, 2009
    Posts:
    144
    I guess this was directed at me.

Recently someone here referenced a paper by Peter Košinár, Juraj Malcho, Richard Marko, and David Harley named "AV TESTING EXPOSED". I still have the paper, but can't remember where on this forum it was referenced. In this paper they raise this question, and one of the methods they suggest might be in use (I'm paraphrasing here) is to run the files past all the AV tools and then deal only with the ones where a valid consensus is not reached (i.e. some AVs say it is malware, some not). They suggest the test org would frequently send these back to the AV vendors for more in-depth analysis. I have no idea whether this approach is actually used much.

    But to answer your question, no I didn't expect them to analyze all samples by hand. I would think that a lot of malware would lend itself to automated identification, even if there was no signature. As in, run it and see what bad things it tries to do. Clever malware would be more difficult to detect so I did expect them to analyze some by hand, but I honestly couldn't begin to tell you what criteria the test org or AV vendor would use to determine if an unknown executable was malware or safe.
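    The consensus method paraphrased from the Košinár et al. paper amounts to a simple voting split. A minimal sketch, where the per-engine verdicts are hypothetical stand-ins rather than output from any real scanner:

    ```python
    def split_by_consensus(verdicts):
        """verdicts: {sample_name: [bool, ...]}, one vote per engine.

        Returns (agreed_malware, agreed_clean, disputed)."""
        malware, clean, disputed = [], [], []
        for name, votes in verdicts.items():
            if all(votes):
                malware.append(name)        # unanimous: malware
            elif not any(votes):
                clean.append(name)          # unanimous: clean
            else:
                disputed.append(name)       # disagreement: send back to vendors
        return malware, clean, disputed

    malware, clean, disputed = split_by_consensus({
        "sample_a.exe": [True, True, True],
        "sample_b.exe": [False, False, False],
        "sample_c.exe": [True, False, True],   # engines disagree
    })
    ```

    Only the disputed bucket needs the expensive in-depth analysis, which is exactly the labor-saving the paper describes.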

    Edit:
    Here it is. I should have searched by paper name, not author
    http://www.eset.com/resources/white-papers/Kosinar-etal-VB2010.pdf
     
    Last edited: Nov 9, 2010
  16. dr pan k

    dr pan k Registered Member

    Joined:
    Nov 22, 2007
    Posts:
    204
Creating software that uploads unknown files to a central server, where the "machine" then automatically tells you whether they are viruses or not, isn't that difficult. In fact, this kind of client is usually tiny compared to classic AVs. The hard part is having an automated process that can effectively make the cut between real malware and false positives. Most of this is done through software; analysts write very few definitions by hand.
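    The thin-client model described above can be sketched as little more than a hash lookup against a server-side verdict table. The in-memory dict standing in for the central server is hypothetical, not any vendor's real service:

    ```python
    import hashlib

    # Stand-in for the central server's verdict database: sha256 -> verdict.
    KNOWN_VERDICTS = {}

    def cloud_lookup(file_bytes: bytes) -> str:
        """Hash the file locally and ask the 'server' for a verdict."""
        digest = hashlib.sha256(file_bytes).hexdigest()
        # A real client would upload unknown files for automated analysis;
        # here we just report them as unknown.
        return KNOWN_VERDICTS.get(digest, "unknown")

    # Simulate the server having already analyzed one sample.
    payload = b"some program"
    KNOWN_VERDICTS[hashlib.sha256(payload).hexdigest()] = "malware"

    verdict_known = cloud_lookup(payload)     # server has seen this hash
    verdict_new = cloud_lookup(b"new file")   # never seen before
    ```

    All the hard work (the automated cut between real malware and false positives) lives behind that verdict table, which is exactly dr pan k's point.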

Sorry if we're going off topic here.
     