IBM researchers easily trick ChatGPT into hacking

Discussion in 'Security in a World with AI' started by hawki, Aug 9, 2023.

  1. hawki

    hawki Registered Member

    Joined:
    Dec 17, 2008
    Posts:
    6,130
    Location:
    DC Metro Area
    "Tricking generative AI to help conduct scams and cyberattacks doesn't require much coding expertise...

    Researchers at IBM released a report Tuesday detailing easy workarounds they've uncovered to get large language models (LLMs) — including ChatGPT — to write malicious code and give poor security advice...

    All it takes is knowledge of the English language and a bit of background knowledge on how these models were trained to get them to help with malicious acts...

    The research comes as thousands of hackers head to Las Vegas this week to test the security of these same LLMs at the DEF CON conference's AI Village...."

    https://www.axios.com/2023/08/08/ibm-researchers-trick-chatgpt-hacking
     