Facebook AI Creates Its Own Language In Creepy Preview Of Our Potential Future

Discussion in 'other security issues & news' started by Minimalist, Jul 31, 2017.

  1. Minimalist

    Minimalist Registered Member

    Joined:
    Jan 6, 2014
    Posts:
    14,883
    Location:
    Slovenia, EU
    https://www.forbes.com/sites/tonybr...age-in-creepy-preview-of-our-potential-future
     
  2. boredog

    boredog Registered Member

    Joined:
    Feb 1, 2015
    Posts:
    2,499
    And this comes right after Musk and Zuckerberg had an argument about whether AI is a bad thing: Musk said it was bad, Zuckerberg said it was good. Zuckerberg made a pretty lengthy video while he was smoking meat in his back yard last Sunday.
     
  3. Minimalist

    Minimalist Registered Member

    Joined:
    Jan 6, 2014
    Posts:
    14,883
    Location:
    Slovenia, EU
    I thought of that argument also. If researchers hadn't thought that AI could create its own language before it actually happened, then I guess they are not ready to research it any further.
     
  4. oliverjia

    oliverjia Registered Member

    Joined:
    Jul 21, 2005
    Posts:
    1,926
    I don't believe the conclusion of this report, or the conclusions of these "researchers". It sounds more like hype to attract eyeballs than anything else.

    If the researchers don't know what kind of "language" the chatbots were using, how do they know it's a "language" rather than pure random script caused by a bug in the scripting algorithms?

    I'm not saying it's not possible for AI to reach this stage of inventing its own language. But right now? I highly doubt it. Too much hype in this so-called high-tech industry. E.g., a cab-hailing app can be valued at billions of dollars. Crazy BS if you ask me.
     
  5. boredog

    boredog Registered Member

    Joined:
    Feb 1, 2015
    Posts:
    2,499
    I am pretty sure Facebook does not hire rookie engineers. Engineers are not just programmers but also electronic and mechanical. The bots were programmed to communicate in plain English but decided that was not efficient enough, so they DID develop their own language to communicate with each other. Guess the engineers didn't like that, with them not being in control. It is a long watch, but to see how down to earth and like any ordinary human Mark is, look at his grilling video at this link. http://www.businessinsider.com/mark...msday-ai-predictions-are-irresponsible-2017-7
     
  6. oliverjia

    oliverjia Registered Member

    Joined:
    Jul 21, 2005
    Posts:
    1,926
    "In plain English". Yes, it's pretty easy for humans to understand and communicate in English. But for these engineers to program/integrate "plain English" into a bot is no easy task. My point still stands: no matter how experienced an engineer is, HOW does he/she know that their algorithms are so right that the bot will always speak "plain English" like a human? A regular computer can crash, auto-restart and produce unexpected results on its own as well. Can you say this computer has the will to do things by itself?

    You need to get everything right to achieve that simple goal (speak plain English that humans can understand), but it only takes one potential bug for the bots to communicate in their own "language", meaning random, meaningless murmurs even to the bots. Which scenario is more likely to be the case?
     
  7. boredog

    boredog Registered Member

    Joined:
    Feb 1, 2015
    Posts:
    2,499
    Musk said he did not think Mark had the latest intel on AI. Mark said he did. Why would two bots have the same miscoding? I think Facebook and Google AI engineers are way ahead of any of us on coding AI. That is all I am saying. These companies have the best engineers the world has to offer.
     
  8. oliverjia

    oliverjia Registered Member

    Joined:
    Jul 21, 2005
    Posts:
    1,926
    I see your point. However, please don't let others, I mean anyone, make judgements for you. Sure, FB and G engineers are good, but they are not so good as to break the basic logic of thinking/reasoning. Again, no one knows that the miscoding used by the two bots is the same, because if no one (including FB and G engineers) can understand the bots' "coding", how can anyone jump to the conclusion that the two bots are inventing their own "language"? Let me be more clear: can someone who knows nothing about the rules of basketball be a referee in a basketball game, and judge whether the players are following or breaking the rules? The answer is no. Same here. No one can tell if the bots are "communicating" at all, not to mention using an unknown "language".

    I am a research professor at a large university, and my training teaches me that independent, data-based, critical thinking should not be dropped at any time when someone tells you something. I might be wrong on this specific issue, but until firm factual evidence comes out, I don't think I'll change my opinion.
     
  9. RockLobster

    RockLobster Registered Member

    Joined:
    Nov 8, 2007
    Posts:
    1,812
    Did your training not teach you that you should not form an opinion until you have all the relevant data for your data-based critical thinking method?
    If you did have the data, you would know that advancements in AI such as multiple instance learning, which uses complex algorithms designed specifically to give AI advanced learning capabilities such as the ability to decide what it should learn to improve itself, put developing their own language well within the parameters of such an experiment.
    In another experiment there were two AI robots; one was supposed to hide from the other while the other tried to find it.
    They did this in an obstacle course with lots of hiding places.
    After some time, the hiding bot started knocking over the things in the course to confuse the other bot. Then it left the course entirely and hid in some nearby trees. It was not programmed to do that; it figured it out.
     
    Last edited: Jul 31, 2017
  10. oliverjia

    oliverjia Registered Member

    Joined:
    Jul 21, 2005
    Posts:
    1,926
    What you said only reflects your own opinion. "Well within the parameters of such an experiment" is your subjective feeling and imagination at best. Did you know that the basic rule of drawing a conclusion is that whoever draws it bears the burden of proving it with factual evidence? The burden is on the ones who draw the conclusion, here meaning "AI developed its own language that humans do not understand (but the bots can understand each other)". In this case, it's the "researchers'" responsibility to show the world with solid, clear and factual evidence, rather than merely two random fragmented sentences, that 1. the AI developed its own language, and 2. humans cannot understand it but the bots understand each other. These researchers present nothing to prove the above two points are true. So their conclusion is pure speculation. To what degree AI has been developed is irrelevant here.

    If you ever take a science class, you'll get a basic idea of how rigorous scientific research works. ~ Removed Off Topic Remarks ~
     
    Last edited by a moderator: Aug 1, 2017
  11. Minimalist

    Minimalist Registered Member

    Joined:
    Jan 6, 2014
    Posts:
    14,883
    Location:
    Slovenia, EU
    @oliverjia
    Even if researchers presented no evidence, it doesn't mean that they don't have any. So saying "their conclusion is pure speculation" is actually your speculation.
     
  12. oliverjia

    oliverjia Registered Member

    Joined:
    Jul 21, 2005
    Posts:
    1,926
    Yeah, after reading my post, you tried to edit yours. Unfortunately, what I said in my last post still stands. Even if I assume what you said in this modified post is all facts (you still need to reference the source of your argument), it only means that AI has the ability to "choose" different options based on other bots' reactions. This behavior is still far, far away from developing their own language, not to mention a language that is not understandable by humans, even when humans injected a "plain English" algorithm into the bots.
     
  13. oliverjia

    oliverjia Registered Member

    Joined:
    Jul 21, 2005
    Posts:
    1,926
    Then please don't call them "researchers". They are not. Researchers publish conclusions backed up by data. In science, when you publish your conclusion without presenting strong enough evidence, it's called "speculation" at best. I review other people's scientific manuscripts, and speculation rather than valid conclusions is one of the major reasons their manuscripts get rejected.
    You need to learn some basic scientific rules before calling someone "researchers" when they are not.
     
  14. Minimalist

    Minimalist Registered Member

    Joined:
    Jan 6, 2014
    Posts:
    14,883
    Location:
    Slovenia, EU
    OK, sorry, Mr. "Scientist" :rolleyes:
     
  15. stapp

    stapp Global Moderator

    Joined:
    Jan 12, 2006
    Posts:
    24,079
    Location:
    UK
    Off topic post removed... let's keep personal insults out of the thread
     
  16. XhenEd

    XhenEd Registered Member

    Joined:
    Mar 31, 2014
    Posts:
    536
    Location:
    Philippines
    Is it really the case, or is it just a marketing stunt? :D
     
  17. RockLobster

    RockLobster Registered Member

    Joined:
    Nov 8, 2007
    Posts:
    1,812
    This argument is pointless. Your understanding of AI is obviously decades out of date, from the era when bots could only present pseudo-AI by responding to inputs in the manner in which they were explicitly programmed. Today's AI advancements allow AI to learn. Literally. They do this in the same way humans do: by trial and error, and by evaluating the outcome. You claimed,
    And you know this how? Are you a professor of languages too now?
    If two people are speaking a foreign language, no one who doesn't speak that language can determine that it is a language?
    You don't think your above statement is purely an opinion based on complete ignorance of the facts? If you don't, I sure do.
     
  18. RockLobster

    RockLobster Registered Member

    Joined:
    Nov 8, 2007
    Posts:
    1,812
    Further to that point: the researchers would have access to the logs, which would reveal clearly whether the bots had developed a language or simply malfunctioned, because the logs would show when it began, how it began, and how the bots started to change the way in which they communicated.
     
  19. Minimalist

    Minimalist Registered Member

    Joined:
    Jan 6, 2014
    Posts:
    14,883
    Location:
    Slovenia, EU
    Also, the research is funded by a private company, so it might not be in their interest to disclose detailed results. I hope they'll share more data about this "incident".
     
  20. oliverjia

    oliverjia Registered Member

    Joined:
    Jul 21, 2005
    Posts:
    1,926
    You clearly have no background in computer/AI "language" and how it works. The core of such a language is a computing algorithm. Even using your own understanding of a language, "If two people are speaking a foreign language, no one who doesn't speak that language can determine that is a language?", the answer to your question is obvious. That's right: if two people are speaking something you don't understand (the so-called "language" that was "invented" by the bots, which you ASSUMED is a valid language), YOU (the humans, the so-called researchers) CANNOT determine whether it's a valid language or not. The reason is simple: because you CANNOT understand anything they said, how do you know for sure it's a valid computer/AI language? Why can't it be some random script? Just because you assumed they are speaking "a language", it must be a valid language? What's your logic here? So you can judge something with such certainty even when you have no clue what it is? That's called your opinion and imagination.

    I simply don't understand why you cannot grasp such an obvious point.
     
  21. oliverjia

    oliverjia Registered Member

    Joined:
    Jul 21, 2005
    Posts:
    1,926
    So now you assume this AI bot will faithfully follow humans' instructions and log everything it does into a log file in a format that humans can understand? By your logic, when two bots start secret communication using an AI language they invented and that humans cannot understand, what incentive do they have to log their activities in a format that humans can understand? Why wouldn't they log their activities using the language they just invented, not log their activities at all, or, even more AI/clever, delete/modify the old log files to evade human monitoring?
     
    Last edited: Aug 1, 2017
  22. oliverjia

    oliverjia Registered Member

    Joined:
    Jul 21, 2005
    Posts:
    1,926
    This might be true, but like I said in my first post, until I see further firm, factual evidence in the future (if they are ever going to release it), I don't think I'll change my opinion based on the kind of "evidence" they have already released.
     
  23. RockLobster

    RockLobster Registered Member

    Joined:
    Nov 8, 2007
    Posts:
    1,812
    Why do you persist in pretending you know anything about the way computer programs work?
    You seriously believe the logs would be written to as part of the AI's decision-making code?
    The AI is a program. It has some functionality that is explicit, such as logging. The functionality that is the AI per se, as in that which can learn, is not some kind of magical entity that decided to rebel against its human captors! It is a program that does what it is designed to do, and what it did was communicate with the other AI with parameters that allowed it to analyse and evaluate how it did that and make changes to the way it communicated. That does not mean it can change its own core programming.
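    To make that concrete, here is a toy sketch (my own illustration, not Facebook's actual code or anything they have published) of that kind of trial-and-error loop. The assumption is that the reward only measures task success, so nothing penalises the agent for drifting away from English:

    ```python
    import random

    random.seed(42)

    VOCAB = ["i", "want", "three", "ball"]
    MSG_LEN = 4
    TARGET = 3  # the speaker wants the listener to recover the number 3

    def listener_decodes(message):
        # Hypothetical listener: it just counts repetitions of the item word.
        # It has no grammar, so "i want three ball" decodes as 1, not 3.
        return message.count("ball")

    def reward(message):
        # Reward depends only on task success; nothing rewards English.
        return 1.0 if listener_decodes(message) == TARGET else 0.0

    def train(episodes=5000, lr=0.5):
        # One preference table per message slot, tuned by trial and error.
        prefs = [{tok: 1.0 for tok in VOCAB} for _ in range(MSG_LEN)]

        def sample_message():
            msg = []
            for slot in prefs:
                tokens, weights = zip(*slot.items())
                msg.append(random.choices(tokens, weights=weights)[0])
            return msg

        for _ in range(episodes):
            msg = sample_message()
            r = reward(msg)
            for slot, tok in zip(prefs, msg):
                slot[tok] += lr * r  # reinforce tokens that led to success

        # Greedy message after training: the most-preferred token per slot.
        return [max(slot, key=slot.get) for slot in prefs]

    if __name__ == "__main__":
        print(" ".join(train()))  # drifts toward repetitive, un-English output
    ```

    Run it and the learned message ends up heavy on repeated item words rather than an English sentence, simply because the reward never asked for English. That is the mundane mechanism behind "developed their own language": reward-driven drift, not rebellion.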
     
    Last edited: Aug 1, 2017
  24. oliverjia

    oliverjia Registered Member

    Joined:
    Jul 21, 2005
    Posts:
    1,926
    Read your post carefully, again, and then compare it with your previous posts. So the AI can be so advanced that it can create a complex AI computer language that humans, the original code writers, cannot understand; while at the same time, it's "just a computer program that cannot change its own core programming" and "does what it is designed to do".

    It's clear you don't know what you are talking about.
    Take care.
     
  25. RockLobster

    RockLobster Registered Member

    Joined:
    Nov 8, 2007
    Posts:
    1,812
    Really, we should be wondering what Facebook is up to. Years ago we used to code bots and put them in chat rooms to see how long they could fool people into believing they were a real person. They were very rudimentary and would respond to key words in the real person's message with pre-written phrases that hopefully were relevant to what the real person said. The conversation usually didn't last very long before the real person realised.
    This makes me wonder: is Facebook trying to create fake people on Facebook?
     