A 7th reason to pump the brakes on AI: "MIT Creates An AI Psychopath Because Someone Had To Eventually... Like so many of these projects do, the MIT researchers started out by training Norman on freely available data found on the Web. Instead of looking at the usual family-friendly Google Images fare, however, they pointed Norman toward darker imagery. Specifically, the MIT crew stuck Norman in a creepy subreddit to do his initial training... The MIT team hit the nail on the head when it said: 'Norman suffered from extended exposure to the darkest corners of Reddit.' Fortunately, there’s still hope for this disturbed AI... Either way, it reinforces one very important fact about AI: that the worldview of an AI is very much determined by the information it gathers while learning..." https://www.geek.com/tech/mit-creates-an-ai-psychopath-because-someone-had-to-eventually-1741948/
Bots can say some creepy things, but ‘psychotic’ AI is still fiction June 27, 2018 https://venturebeat.com/2018/06/27/...epy-things-but-psychotic-ai-is-still-fiction/
Just like humans do. The difference is that human decision making is conditioned on sensory input and other intrinsic factors such as reasoning and emotion, along with a comparable capability to differentiate and filter raw data streams. Bottom line - until science can fully understand the workings of the human brain and its interactions with the rest of the body, AI is more fiction than science. And even if that day arrives, the likelihood of applying that knowledge externally is far in the future. AI has limited current potential to enhance human lives; likewise, misapplied, it has the potential to negatively impact them.
Pumping the Brakes on Artificial Intelligence October 3, 2018 https://threatpost.com/pumping-the-brakes-on-artificial-intelligence/137838/
New AI fake text generator may be too dangerous to release, say creators The Elon Musk-backed nonprofit company OpenAI declines to release research publicly for fear of misuse February 14, 2019 https://www.theguardian.com/technol...musk-backed-ai-writes-convincing-news-fiction
AI researchers debate the ethics of sharing potentially harmful programs Nonprofit lab OpenAI withheld its latest research, but was criticized by others in the field February 21, 2019 https://www.theverge.com/2019/2/21/18234500/ai-ethics-debate-researchers-harmful-programs-openai
When Is Technology Too Dangerous to Release to the Public? February 22, 2019 https://slate.com/technology/2019/02/openai-gpt2-text-generating-algorithm-ai-dangerous.html
An AI for generating fake news could also help detect it March 12, 2019 https://www.technologyreview.com/s/613111/an-ai-for-generating-fake-news-could-also-help-detect-it/
OpenAI Said Its Code Was Risky. Two Grads Recreated It Anyway August 26, 2019 https://www.wired.com/story/dangerous-ai-open-source/
Taylor Swift threatened to sue Microsoft over its racist chatbot Tay September 10, 2019 https://www.theguardian.com/music/s...-to-sue-microsoft-over-its-racist-chatbot-tay
OpenAI Releases Text Generator AI That Was Too “Dangerous” To Share November 8, 2019 https://fossbytes.com/openai-releases-text-generator-ai-too-dangerous-to-share/
Patent Office Rejects Two Patent Applications In Which An AI Was Designated As The Inventor January 3, 2020 https://www.techdirt.com/articles/2...ons-which-ai-was-designated-as-inventor.shtml
AI cannot be recognised as an inventor, US rules April 29, 2020 https://www.bbc.co.uk/news/technology-52474250
Australian court rules an AI can be considered an inventor on patent filings August 2, 2021 https://www.theregister.com/2021/08/02/ai_inventor_allowed_in_australia/
AI cannot be the inventor of a patent, appeals court rules September 23, 2021 https://www.bbc.co.uk/news/technology-58668534