Self-driving laboratories (SDLs) promise to reshape our very understanding of research. But, as with all groundbreaking innovations, SDLs bring their own set of intriguing questions and potential challenges.
This weekly recap focuses on the costs and benefits of a four-day school week, how artificial intelligence is bringing a new era of social media manipulation, the effects of placing police officers in schools, and more.
The consequences of ignoring the problem of adversarial attacks in algorithmic trading are potentially catastrophic. In a world increasingly reliant on machine learning models, the financial sector needs to shift from being reactive to proactive to ensure the security and integrity of our financial system.
This weekly recap focuses on the challenges facing U.S. immigration policy, what it would take to close America's Black-white wealth gap, risks and opportunities associated with artificial intelligence, and more.
William Marcellino, a senior behavioral and social scientist at RAND and professor at the Pardee RAND Graduate School, discusses the rapidly expanding reach of artificial intelligence, the challenges it could pose for both society and policymakers, and how the research community is poised to help.
The U.S. government should consider offering a public cash bounty to anyone who can crack the new forms of encryption that are being rolled out to defend against quantum computers. If a bounty helps catch a vulnerability before it's deployed, then the modest cost of the bounty could prevent much higher costs down the line.
If all the shortcomings of humanity were stripped away, equity would still be an elusive goal for algorithms for reasons that have more to do with mathematical impossibilities than backward ideologies. But even if attaining equity is fundamentally difficult, seeking it is not futile.
During the pandemic, misinformation and conspiracy theories have spread more virulently than ever before. The vast scale of the problem means scalable solutions like machine learning could be needed to rein in the bots, trolls, and conspiracy theories being spread by bad-faith actors.
Artificial intelligence is being used to develop sophisticated malign information on social media. But AI also provides opportunities to strengthen responses to these threats and can foster wider resilience to disinformation.
This weekly recap focuses on keeping COVID-19 vaccines moving to save more lives; why we need a national commission to investigate the U.S. Capitol attack; media literacy as a tool to counter "Truth Decay," and more.
Natural disasters in the United States cause billions of dollars of damage to electric infrastructure every year. Applying artificial intelligence and machine learning in a disaster-recovery context could significantly improve electrical utilities' cost-estimating capability and responsiveness.
Disinformation has become a central feature of the COVID-19 crisis. This type of malign information and high-tech "deepfake" imagery poses a risk to democratic societies worldwide by increasing public mistrust in governments and public authorities. New research highlights ways to detect and dispel disinformation online.
While AI-enabled robots will have human-like characteristics, they will likely develop distinct personalities of their own. The military will need an extensive training program, informing new doctrines and concepts, to manage this powerful but unprecedented capability.
Deception is as old as warfare itself. Until now, the targets of deception operations have been humans. But the introduction of machine learning and artificial intelligence opens up a whole new world of opportunities to deceive by targeting machines.