By Daniel E. Levenson
December 2024
Published in Homeland Security Affairs, The Journal of the NPS Center for Homeland Defense and Security
The intersection of artificial intelligence (AI) and national security offers fertile ground for experts, pundits, and fearmongers alike. Such conversations range from sober discussions of practical implications and potential moral harm to fever dreams of the sort that make dystopian near-future television programs like Black Mirror appear cozy by comparison. Given this reality, it should come as no surprise that those wrestling with the implications and actual use of this technology find it no mean feat to resist anchoring themselves either in the unlikely promise of world peace courtesy of altruistic algorithms or in the nightmarish visions a digital Dante might conjure.

One critical issue at the heart of practically every meaningful discussion on this topic is that those in a position to guide or control the use of AI seem to be significantly out of sync with the rapid rate at which the technology itself is advancing and becoming pervasive. There is a range of potential reasons for this, and in an ambitious new book entitled Four Battlegrounds: Power in the Age of Artificial Intelligence, author Paul Scharre explores why the relevant figures and the bodies they lead (be they private or public) often seem to move so slowly compared to the pace of technical advances in AI, at times even compounding confusion instead of offering desperately needed clarity. Throughout the book, the author returns again and again to the two essential elements of the discussion: the technology itself and how those who control it may choose to use it as an instrument of power. As Scharre makes clear, it is the nature of this relationship, between human and machine, that will determine the ways in which AI will contribute to or erode the security of nations.