Brain-computer interfaces give humans the ability to directly control machines with their minds. Before this emerging technology matures, it's important for developers to weigh the opportunities against the risks.
Anticipating the risks and opportunities posed by all kinds of change is a RAND specialty. In 1964, using RAND's now-famous Delphi method, experts pondered topics like medical advancements, space, artificial intelligence, and controlling the weather.
The pandemic is an unprecedented public health crisis. But the response from science, technology, and innovation communities has been remarkable. It demonstrates that innovation and learning, interdisciplinary methods and collaboration, information and data sharing, and adaptability are more important than ever.
Quantum computers are expected to be powerful enough to break the current cryptography that protects all digital communications. But this scenario is preventable if policymakers take action now to minimize the harm that quantum computers may cause.
The Catholic Church joined with technology companies in February to release the “Rome Call for AI Ethics,” which it hopes will lend meaning, if not governance frameworks, to the use of artificial intelligence. Making sure that “everyone can benefit” from AI by making its discoveries widely available will be important. This is perhaps where the church can be most effective.
Will artificial intelligence (AI) change warfare? It's hard to say. AI itself is not new, but AI as a critical factor in competitions is relatively novel and, as a result, there's not much data to draw from. Perhaps the most interesting examples are in the world of chess.
At RAND, we examine complex issues in dozens of policy areas. And when our researchers aren't busy coming up with solutions to some of the world's biggest problems, sometimes they step in front of the camera to highlight their findings. Here are our top videos of 2019.
The United States should apply lessons from the 70-year history of governing nuclear technology by building a framework for governing AI military technology. An AI for Peace program should articulate the dangers of this new technology, principles to manage the dangers, and a structure to shape the incentives for other states.
Deception is as old as warfare itself. Until now, the targets of deception operations have been humans. But the introduction of machine learning and artificial intelligence opens up a whole new world of opportunities to deceive by targeting machines.
How will artificial intelligence change the way wars are fought? The answer, of course, depends. It depends mainly on what types of wars are being fought. And how will AI affect the types of wars that the United States is most likely to fight?
Contrary to the promise that AI would deliver an omniscient view of everything happening in the battlespace—the goal of U.S. military planners for decades—it now appears that technologies of misdirection are winning. Military deception, in short, could prove to be AI’s killer app.
We are hurtling towards a future in which AI is omnipresent. This AI-enabled future is blinding in its possibilities for prosperity, security, and well-being. Yet, it is also crippling in its fragility. What might it take for it all to come to a screeching halt?
Unless the Pentagon embraces a more open approach to artificial intelligence, it will be left behind. Private sector innovation in this space is too fast. But what are the risks of disseminating potentially sensitive AI technology? And what should not be disclosed?