The Bad of Artificial Intelligence

The malicious use of artificial intelligence

I see a lot of our work as an antidote to the poison of FUD (Fear, Uncertainty, and Doubt).  Oxford puts FUD in its place nicely by suffixing the definition with “usually evoked intentionally in order to put a competitor at a disadvantage.”  Superlative-laden stories about the biggest, worst, most expensive, unstoppable hacking attack preach the coming of a “Black Mirror-esque” world.  These stories often obstruct rational and pragmatic discussion with an undeserved emotional charge.  If what we fear is the dark unknown, facts and reason are the beacon we should light.

Consequently, elucidation is the central pillar of our work.

I mention this because it is the mindset with which I read “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”  The report highlights possibilities for the malicious use of artificial intelligence technologies.  A report on such a hot-button topic that relies on hypothetical scenarios and descriptions can easily become a FUD contributor.  A careful read, however, reveals authors taking pains not to push their scenarios too far into the unknown, using conservative hypotheticals to skirt the difficult nature of discussing potentially dangerous leading-edge security techniques.  I believe the piece is written in the spirit of inspiring discussion, asking us to consider how we can work together to identify and prevent poor outcomes from powerful tools that lack safety measures.  The authors make four broad recommendations:

  1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
  2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
  3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
  4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.

–p. 5.

The document is formatted in a way that is easy to read, and it provides enough to start thinking about what kinds of evidence we would need to make firm decisions about artificial intelligence technologies.

Powerful tools generally require evidence-supported, reasonable discussion so that they can be made available in a way that provides a net benefit to society.  It’s clear, to me at least, that we would all be better off avoiding a path that leads to a 24-hour news pundit soundbite starting:

“Artificial Intelligence doesn’t kill people, …”.

Cite:

Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., … & Anderson, H. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv preprint arXiv:1802.07228.

Source:

https://arxiv.org/pdf/1802.07228v1.pdf
