6 reasons to pump the brakes on AI

Discussion in 'other software & services' started by ronjor, Apr 9, 2018.

  1. ronjor

    ronjor Global Moderator

    Joined:
    Jul 21, 2003
    Posts:
    162,658
    Location:
    Texas
  2. hawki

    hawki Registered Member

    Joined:
    Dec 17, 2008
    Posts:
    6,061
    Location:
    DC Metro Area
    A 7th reason to pump the brakes on AI:

    "MIT Creates An AI Psychopath Because Someone Had To Eventually...

    Like so many of these projects do, the MIT researchers started out by training Norman on freely available data found on the Web. Instead of looking at the usual family-friendly Google Images fare, however, they pointed Norman toward darker imagery. Specifically, the MIT crew stuck Norman in a creepy subreddit to do his initial training...

    The MIT team hit the nail on the head when it said: 'Norman suffered from extended exposure to the darkest corners of Reddit.' Fortunately, there’s still hope for this disturbed AI...

    Either way, it reinforces one very important fact about AI: that the worldview of an AI is very much determined by the information it gathers while learning..."

    https://www.geek.com/tech/mit-creates-an-ai-psychopath-because-someone-had-to-eventually-1741948/
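     For anyone curious, here is a minimal sketch of that last point, assuming scikit-learn is available (purely illustrative, nothing like MIT's actual Norman pipeline, and all captions/labels are invented): two copies of the same tiny text classifier, fed differently toned captions, can only describe an input with whatever vocabulary their training data gave them.

     # Toy illustration (hypothetical, not MIT's Norman): the same classifier
     # architecture trained on differently toned captions learns a different
     # "worldview". Requires scikit-learn; captions and labels are made up.
     from sklearn.feature_extraction.text import CountVectorizer
     from sklearn.naive_bayes import MultinomialNB
     from sklearn.pipeline import make_pipeline

     def train(captions, labels):
         """Fit one small bag-of-words Naive Bayes classifier on a corpus."""
         model = make_pipeline(CountVectorizer(), MultinomialNB())
         model.fit(captions, labels)
         return model

     # Two corpora describing similar scenes with very different framing.
     neutral = ["a bird sitting on a branch", "a family having a picnic",
                "children playing in a park", "flowers in a vase on a table"]
     dark    = ["a bird caught in a trap", "a man collapsing in front of his family",
                "children fleeing a burning park", "a shattered vase beside broken glass"]

     benign_model = train(neutral, ["calm", "calm", "playful", "calm"])
     norman_model = train(dark,    ["violent", "violent", "violent", "grim"])

     # Identical input, but the learned associations (and even the labels each
     # model is able to output) come entirely from its training data.
     test = ["children near a park"]
     print(benign_model.predict(test), benign_model.classes_)
     print(norman_model.predict(test), norman_model.classes_)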
     
  3. EASTER

    EASTER Registered Member

    Joined:
    Jul 28, 2007
    Posts:
    11,126
    Location:
    U.S.A. (South)
     :gack: Another tool, aka Pandora's Box, about to be unleashed. Oh joy o_O
     
  4. RockLobster

    RockLobster Registered Member

    Joined:
    Nov 8, 2007
    Posts:
    1,812
     That AI should run for office; it would fit right in...
     
  5. guest

    guest Guest

    Bots can say some creepy things, but ‘psychotic’ AI is still fiction
    June 27, 2018
    https://venturebeat.com/2018/06/27/...epy-things-but-psychotic-ai-is-still-fiction/
     
  6. xxJackxx

    xxJackxx Registered Member

    Joined:
    Oct 23, 2008
    Posts:
    8,616
    Location:
    USA
     The same thing is true of people. I wish there were more concern for that. :eek:
     
  7. itman

    itman Registered Member

    Joined:
    Jun 22, 2010
    Posts:
    8,591
    Location:
    U.S.A.
     Just like humans do. The difference is that human decision making is conditioned upon sensory input and other intrinsic factors such as reasoning, emotions, etc., along with a like capability to differentiate and filter raw data streams.

     Bottom line - until science can fully understand the workings of the human brain and its interactions with the rest of the body, AI is more fiction than science. And even if that day arrives, applying that knowledge externally likely lies far in the future.

     AI currently has limited potential to enhance human lives. Misapplied, it likewise has the potential to negatively impact them.
     
  8. guest

    guest Guest

    Pumping the Brakes on Artificial Intelligence
    October 3, 2018
    https://threatpost.com/pumping-the-brakes-on-artificial-intelligence/137838/
     
  9. ronjor

    ronjor Global Moderator

    Joined:
    Jul 21, 2003
    Posts:
    162,658
    Location:
    Texas
  10. guest

    guest Guest

    New AI fake text generator may be too dangerous to release, say creators
    The Elon Musk-backed nonprofit company OpenAI declines to release research publicly for fear of misuse
    February 14, 2019

    https://www.theguardian.com/technol...musk-backed-ai-writes-convincing-news-fiction
     
  11. guest

    guest Guest

    AI researchers debate the ethics of sharing potentially harmful programs
    Nonprofit lab OpenAI withheld its latest research, but was criticized by others in the field
    February 21, 2019

    https://www.theverge.com/2019/2/21/18234500/ai-ethics-debate-researchers-harmful-programs-openai
     
  12. guest

    guest Guest

    When Is Technology Too Dangerous to Release to the Public?
    February 22, 2019
    https://slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html
     
  13. ExtremeGamerBR

    ExtremeGamerBR Registered Member

    Joined:
    Aug 3, 2010
    Posts:
    1,351
    Great! I totally agree. There is nothing new about that.
     
  14. guest

    guest Guest

    An AI for generating fake news could also help detect it
    March 12, 2019
    https://www.technologyreview.com/s/613111/an-ai-for-generating-fake-news-could-also-help-detect-it/
     
  15. guest

    guest Guest

    OpenAI Said Its Code Was Risky. Two Grads Recreated It Anyway
    August 26, 2019
    https://www.wired.com/story/dangerous-ai-open-source/
     
  16. guest

    guest Guest

    Taylor Swift threatened to sue Microsoft over its racist chatbot Tay
    September 10, 2019
    https://www.theguardian.com/music/s...-to-sue-microsoft-over-its-racist-chatbot-tay
     
  17. guest

    guest Guest

    OpenAI Releases Text Generator AI That Was Too “Dangerous” To Share
    November 8, 2019
    https://fossbytes.com/openai-releases-text-generator-ai-too-dangerous-to-share/
     
  18. guest

    guest Guest

    Patent Office Rejects Two Patent Applications In Which An AI Was Designated As The Inventor
    January 3, 2020
    https://www.techdirt.com/articles/2...ons-which-ai-was-designated-as-inventor.shtml
     
  19. guest

    guest Guest

    AI cannot be recognised as an inventor, US rules
    April 29, 2020
    https://www.bbc.co.uk/news/technology-52474250
     
  20. guest

    guest Guest

    Australian court rules an AI can be considered an inventor on patent filings
    August 2, 2021
    https://www.theregister.com/2021/08/02/ai_inventor_allowed_in_australia/
     
  21. guest

    guest Guest

    AI cannot be the inventor of a patent, appeals court rules
    September 23, 2021
    https://www.bbc.co.uk/news/technology-58668534
     