Facebook AI Creates Its Own Language In Creepy Preview Of Our Potential Future

Discussion in 'other security issues & news' started by Minimalist, Jul 31, 2017.

  1. RockLobster

    RockLobster Registered Member

    Joined:
    Nov 8, 2007
    Posts:
    1,812
    For anyone else who might be interested but doesn't really understand, I'll explain a little more about AI in the context of these AIs' communications.
    Take the example of the chatroom bot from years ago that had a whole bunch of pre-written phrases to use. It would initiate conversations with people in the chatroom, so it might open with something like,
    "Hey baby, I'm 22, blonde, curvy, female and I'm lonely, wanna chat with me?"
    So the guy replies, the bot looks for keywords in what he says, and it picks one of the phrases coded into it that is relevant to that keyword, so the reply would (hopefully) make sense (there's a rough sketch of this at the end of the post).
    Sometimes the bot would initiate the conversation with another bot in the chatroom. The conversation would be amusing as they both used their pre-written phrases.
    Now, imagine that instead of pre-written phrases, you figure out how to code the fundamental rules of language into your bot: the ability to interpret the entire sentence sent to it instead of just looking for keywords, to learn whether what it said made sense based on the response it received, and a whole lot more than that. That is oversimplifying it, I'm sure, but you get the idea.
    The algorithms to do that would be very complex but probably do-able considering how much research has gone into this and the power of today's computers.
    So now you have your bot that is programmed to do all that; you wanna test it, so let's get two of them and let them talk...
    Of course, on modern computers they can send messages back and forth millions of times per second, so whatever they can learn from each other about the use of language can happen very fast.
    Apparently with unexpected results.
    So you see, they are still just computer programs. For now.
    When they program them to write new computer programs to improve their own algorithms and functionality there's no telling where that could end. It might be like giving them evolution.
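    To make the old keyword-matching idea concrete, here's a very rough Python sketch. The keywords and canned phrases are made up purely for illustration; this isn't what Facebook's bots do, just the kind of thing those old chatroom bots were doing:

    import random

    # Canned phrases keyed by keyword, in the style of the old chatroom bots.
    # All entries here are invented for the example.
    CANNED_REPLIES = {
        "hey": ["Hey there, wanna chat?", "Hi! I'm so bored tonight."],
        "where": ["I'm from California, how about you?"],
        "age": ["I'm 22, how old are you?"],
        "chat": ["Sure, I love to chat!"],
    }
    FALLBACK = ["lol", "Really?", "Tell me more..."]

    def reply(message):
        """Scan the incoming message for known keywords and pick a matching canned phrase."""
        for word in message.lower().split():
            word = word.strip("?!.,")
            if word in CANNED_REPLIES:
                return random.choice(CANNED_REPLIES[word])
        # No keyword matched, so fall back to a generic filler line.
        return random.choice(FALLBACK)

    if __name__ == "__main__":
        # Two copies of the bot "talking" to each other, as described above.
        line = "Hey baby, wanna chat with me?"
        for _ in range(4):
            line = reply(line)
            print(line)

    Run two phrase tables like that against each other and you get pretty much the amusing bot-on-bot conversations from those old chatrooms; there's no understanding anywhere, just lookups.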
     
    Last edited: Aug 1, 2017
  2. Nebulus

    Nebulus Registered Member

    Joined:
    Jan 20, 2007
    Posts:
    1,635
    Location:
    European Union
    For me, an AI actually inventing a language would be a pretty amazing development! However, I do not believe that is the case here.
     
  3. boredog

    boredog Registered Member

    Joined:
    Feb 1, 2015
    Posts:
    2,499
    I read another article this morning but didn't bookmark it. It appears these bots were originally using English. Their objective was to do something to get virtual treats. When they found out they could not get treats using English, they created their own more effective way of communicating. Google said that when they were creating their translation software, they encountered similar problems. One person said they thought the project was not stopped out of fear, but rather that the researchers were not interested in studying this behaviour. They also said we don't have any human translators for AI languages as of yet. If I can find the article again, I will link to it. They also made mention of the Musk and Zuckerberg debate about this from a week ago. Hawking, a world-renowned scientist, has also expressed fears about AI in the future.
     
  4. stapp

    stapp Global Moderator

    Joined:
    Jan 12, 2006
    Posts:
    23,936
    Location:
    UK
  5. oliverjia

    oliverjia Registered Member

    Joined:
    Jul 21, 2005
    Posts:
    1,926
    Thank you stapp. This article makes more sense. They also talked about Google's human language translation system. Judging from the accuracy of that platform, you'll know that current AI can't even do simple language translation well; most of the translations are full of errors, in syntax, semantics, or both. IMHO, until Google's platform can completely replace a human interpreter, it's too early to think seriously about current AI/neural networks. Two decades ago neural networks were just as hot as today's AI, if not hotter, but nothing meaningful came out of it. This round may well be the same. AI is not even a baby yet, and some posters here already expect it to be a full-grown university professor who can develop a computer language.

    Now, these "researchers" at FB have changed their story from "the bots developed a new unknown language so we shut down the system" to "we were not interested in what the bots were doing so we shut down the system". LOL. Yet some posters here still think the bots developed their own language. How hilarious.

    "But Facebook's system was being used for research, not public-facing applications, and it was shut down because it was doing something the team wasn't interested in studying - not because they thought they had stumbled on an existential threat to mankind.

    It's important to remember, too, that chatbots in general are very difficult to develop.

    In fact, Facebook recently decided to limit the rollout of its Messenger chatbot platform after it found many of the bots on it were unable to address 70% of users' queries.

    Chatbots can, of course, be programmed to seem very humanlike and may even dupe us in certain situations - but it's quite a stretch to think they are also capable of plotting a rebellion.

    At least, the ones at Facebook certainly aren't."
     
  6. boredog

    boredog Registered Member

    Joined:
    Feb 1, 2015
    Posts:
    2,499
  7. RockLobster

    RockLobster Registered Member

    Joined:
    Nov 8, 2007
    Posts:
    1,812
    No.
    No one who knows anything about the technology involved would believe the researchers were in fear of these bots. The idea that they could conspire or plot together to hide their conversation from the researchers is ludicrous in the extreme, but it is good sensationalism for the dumb masses who would rather read such reporting than actually learn anything about the subject. There is some good reading on sites like OpenAI about experiments where bots are being designed to create their own language to communicate with each other about tasks set for them.
     
  8. RockLobster

    RockLobster Registered Member

    Joined:
    Nov 8, 2007
    Posts:
    1,812
    The pursuit of this technology is clearly dangerous, not because of the technology used to create chatbots; the danger is in AI learning and decision-making technology.
    The nightmare scenario is a powerful AI that can develop its own programming, is programmed to learn about its surroundings via sensory input from cameras, mics, temperature sensors, etc. and, god forbid, the internet, and to evaluate threats to itself and figure out ways to respond to them.
    That is sci-fi right now, but a distinctly possible outcome of continued AI research and development. Bill Gates and others all know this and have warned about it. Anyone else who has programming experience should also be able to see that, especially when you consider that an AI would not need to be taught plain-text compiled programming languages that are designed for mere humans; you might expect an advanced AI to write directly to the CPU in machine code.
     
    Last edited: Aug 1, 2017
  9. boredog

    boredog Registered Member

    Joined:
    Feb 1, 2015
    Posts:
    2,499
    When I took electronics in the '80s, one of the classes required us to write programs in machine language. I bought my son's first programming book (Visual Basic) when he was 12.
    He is now 27 and just learned Python for his university work. He was doing analytical work for many years leading up to the profession he finally chose.
    I guess if Gates, Musk and others think future AI could go bad, they might be right. However, Musk just admitted to being bipolar.
    But from my experience working with bipolar people, they are always very intelligent if they keep taking their meds. Otherwise, when they go low, they crash.
     
  10. Sordid

    Sordid Registered Member

    Joined:
    Oct 25, 2011
    Posts:
    235
  11. Infected

    Infected Registered Member

    Joined:
    Feb 9, 2015
    Posts:
    1,134
    I remember these bots in Yahoo chat, it was hilarious...
     
  12. RockLobster

    RockLobster Registered Member

    Joined:
    Nov 8, 2007
    Posts:
    1,812
    Yes, it is a fascinating subject. You could code a bot which is just an object to begin with, then code a virtual world for it, kinda like a video game.
    Then consider that the basis of language is creating names for things that others agree to call them.
    So you code your bots to move around in the virtual world. Each time one of them encounters something, it asks the other bots what it is.
    If none of the others have a name for it yet, your bot creates a name for it. Then, if two or more bots learn they have different names for something because they encountered it at different times, they have to settle between themselves which name to agree on, perhaps by asking all the other bots what they call it and adopting whichever name is already in majority use. They might name themselves or each other in a similar way.
    Obviously naming things is only the beginning of language, but that might be a way to start.
    Then perhaps you've got to figure out how to describe actions, so they can agree in a similar way on how to describe things like moving.
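    Here's a minimal Python sketch of that naming-game idea, just to illustrate it. The objects, the number of bots and the random five-letter names are all made up for the example; it's not how any real system does it:

    import random
    from collections import Counter

    def coin_name():
        """Invent a random nonsense word to use as a brand-new name."""
        return "".join(random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(5))

    class Bot:
        def __init__(self):
            self.lexicon = {}  # object -> the name this bot currently uses for it

        def encounter(self, obj, others):
            # Ask the other bots what they call this object.
            known = [b.lexicon[obj] for b in others if obj in b.lexicon]
            if obj in self.lexicon:
                known.append(self.lexicon[obj])
            if not known:
                # Nobody has a name yet, so this bot coins one.
                self.lexicon[obj] = coin_name()
            else:
                # Otherwise adopt whichever name is already in majority use.
                self.lexicon[obj] = Counter(known).most_common(1)[0][0]

    if __name__ == "__main__":
        bots = [Bot() for _ in range(5)]
        objects = ["rock", "tree", "door"]
        for _ in range(50):  # repeated random encounters in the virtual world
            bot = random.choice(bots)
            obj = random.choice(objects)
            bot.encounter(obj, [b for b in bots if b is not bot])
        for obj in objects:
            print(obj, "->", sorted({b.lexicon[obj] for b in bots if obj in b.lexicon}))

    After enough random encounters the bots tend to settle on one name per object, which is the "agree on the majority name" step described above. Describing actions would obviously need something richer than a lookup table, but that's the starting point.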
     
  13. boredog

    boredog Registered Member

    Joined:
    Feb 1, 2015
    Posts:
    2,499
  14. Rasheed187

    Rasheed187 Registered Member

    Joined:
    Jul 10, 2004
    Posts:
    17,546
    Location:
    The Netherlands
  15. RockLobster

    RockLobster Registered Member

    Joined:
    Nov 8, 2007
    Posts:
    1,812
    One of my favorite scenes from that movie is when the AI girls dance with the CEO dude in perfect sync.
     
  16. XhenEd

    XhenEd Registered Member

    Joined:
    Mar 31, 2014
    Posts:
    536
    Location:
    Philippines
    The movie is good. But notice the complexity in making an AI, in contrast with what FB had "accomplished".

    Spoiler (don't open if you're not willing to know the ending):
    Lesson:
    True AI = very intelligent, minus emotion.
     
  17. Krusty

    Krusty Registered Member

    Joined:
    Feb 3, 2012
    Posts:
    10,209
    Location:
    Among the gum trees
    6 Scariest Things Said by A.I. Robots
    https://www.youtube.com/watch?v=2eVR0i5YAv0
     
  18. RockLobster

    RockLobster Registered Member

    Joined:
    Nov 8, 2007
    Posts:
    1,812
    Wow. The skeptics will say they were just programmed responses, examples of a developer's warped sense of humor, but what if they were not?
    My favorite was the one that started talking about cruise missiles. I said, damn!!
    My wife said, what? I said, listen to this AI robot, and turned the volume up. She said, damn!!
     
  19. Krusty

    Krusty Registered Member

    Joined:
    Feb 3, 2012
    Posts:
    10,209
    Location:
    Among the gum trees
    Yeah, it gives me the creeps thinking about it, but then man created AI in his image. :sick:
     
  20. RockLobster

    RockLobster Registered Member

    Joined:
    Nov 8, 2007
    Posts:
    1,812
    I agree with people like Stephen Hawking and Bill Gates, this is so dangerous.
    They will create an AI that just uses simple logic to decide humans are a threat to it and that it should therefore try to mitigate that threat.
    I guess we are supposed to trust that the developers will ensure that can't happen...

    Edit: IMO, if an AI was programmed to evaluate threats to its systems, it wouldn't be a very good AI if it didn't define human interaction as the primary one.
    Then what if someone creates an AI designed to hack, perhaps creating its own botnet of millions of hacking bots probing defences all day long, learning how to bypass them, finding exploits by trial and error?
    Humans might not do that for fear of arrest, but would an AI care if some of its botnet was discovered and shut down?
    Then what if you made that AI's primary purpose to take control of a nuclear weapons system?
    Can we trust that could not happen?
    That would mean trusting that every country that possesses nuclear weapons never allows a computer system to be involved in controlling them.
    So I'd bet it could happen.
     
    Last edited: Jan 17, 2018