Debate over the risk and promise of Artificial Intelligence (AI) and other technologies can be depressingly simplistic, often polarised into two camps whose views are extreme to the point of caricature. On one hand we have those whose understanding of AI seems to have been informed predominantly by 1980s movies like The Terminator and WarGames, in which killer AI is bent on the destruction of the human race. At the other end of the spectrum we have those who seem to believe that AI can do no harm, and that, left unregulated, it will deliver a risk-free cornucopia of rainbows and unicorns.
There are highly intelligent people who attempt to explore the middle ground, but they have a tendency to get side-tracked by obscure academic tangents, which are, of course, largely ignored by the significant players: the military, companies fulfilling lucrative military contracts, and small-to-medium companies working on applications that don't resemble traditional conceptions of AI or robotics. Across the entire field of AI and technology risk, a dangerous myopia is at work.