False positives, data mining, and detection of radicalisation.

Discussion in 'privacy general' started by deBoetie, Jan 20, 2016.

  1. deBoetie

    deBoetie Registered Member

    Joined:
    Aug 7, 2013
    Posts:
    1,832
    Location:
    UK
    Here's confirmation of the danger that false positives pose to the innocent when institutions and social media companies are asked to detect radicalisation (as is now required by law in the UK):

    http://www.bbc.co.uk/news/uk-england-lancashire-35354061

    A 10-year-old schoolboy was interviewed by police after his school reported him for writing that he lived in a "terrorist house" (rather than a "terraced house"). Bad spelling is now a criminal and national security matter.

    One can imagine how rampant this could become once algorithms search for radical speech by data mining, as the social media companies are contemplating in response to the widely reported meeting last month with US law enforcement agencies.

    The Intercept has an article on this topic:

    https://theintercept.com/2016/01/20...-to-look-for-terrorists-heres-why-theyd-fail/
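
    The Intercept piece makes the base-rate point: when the thing you're looking for is vanishingly rare, even a very accurate classifier flags far more innocent people than real threats. A quick back-of-the-envelope sketch in Python (all the figures below are illustrative assumptions, not real numbers):

    ```python
    # Base-rate arithmetic for a hypothetical radicalisation classifier.
    # Every figure here is an assumption chosen for illustration only.

    population = 60_000_000      # roughly the UK population
    actual_threats = 3_000       # assumed number of genuine suspects
    sensitivity = 0.99           # assumed: flags 99% of real threats
    false_positive_rate = 0.01   # assumed: flags 1% of innocent people

    true_positives = actual_threats * sensitivity
    false_positives = (population - actual_threats) * false_positive_rate
    flagged = true_positives + false_positives
    precision = true_positives / flagged

    print(f"People flagged:           {flagged:,.0f}")
    print(f"Of which genuine threats: {true_positives:,.0f}")
    print(f"Chance a flagged person is a real threat: {precision:.2%}")
    # With these assumptions, roughly 600,000 innocent people get flagged
    # and only about 0.5% of those flagged are actual threats.
    ```

    Even a classifier far better than anything plausible for "radical speech" would still bury investigators in false positives, which is the nub of the problem.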
     
  2. mirimir

    mirimir Registered Member

    Joined:
    Oct 1, 2011
    Posts:
    9,252
    Well, "terrorist" and "terraced" do sound a little alike. Maybe more so in his local dialect. But seriously, maybe his teacher should have tried to correct his spelling, before bringing in the police.

    And yes, one can imagine that the kid's first search after coming home from that interview would be about terrorists :eek:
     