This week, we discuss what's being done to fight disinformation online; the unintended consequences of policies that punish pregnant women for drug use; the crucial intelligence-sharing pact between South Korea and Japan; the importance of public trust in artificial intelligence; helping victims after man-made disasters; and why it's time to prepare for a future Alzheimer's treatment.
Want to Fight Disinformation? Use These Online Tools
People have access to more information than ever before. But it can still be hard to distinguish accurate information from low-quality or false content. That's why RAND created a database of tools aimed at fighting the spread of disinformation online. These include websites run by human fact-checkers, apps that use artificial intelligence to detect bots, and games that teach players how to spot disinformation. The database is part of our Countering Truth Decay initiative, which aims to restore the role of facts and analysis in American public life.
Punishing Pregnant Women for Drug Use May Backfire
In the United States, the number of women with an opioid use disorder at the time of giving birth quadrupled from 1999 to 2014. To address this problem, some states have adopted punitive policies, such as considering drug use to be a form of child abuse. According to a new RAND study, this approach may have unintended consequences. Punishing pregnant women for substance use is linked to higher rates of opioid withdrawal among newborns. This suggests that policymakers should instead focus on prevention and expanding access to treatment.
Seoul Should Consider Sticking with Intel-Sharing Pact
South Korea has announced its intent to withdraw from an intelligence-sharing arrangement with Japan next week. According to RAND's Scott Harold, there are several reasons why Seoul should reconsider. The agreement has served the national security interests of both parties, as well as the United States, he says. What's more, breaking the pact now would likely be interpreted by North Korea as a sign of weakness.
How to Ensure Trust in Artificial Intelligence
Virtual assistants can automate our homes. Autonomous vehicles can reduce traffic congestion. Algorithms can detect illness earlier than ever before. A world enabled by artificial intelligence could be rich with possibilities. But this future may screech to a halt if a high-profile AI accident causes public support to collapse. To prevent this from happening, the global community must take steps to ensure that algorithms are robust and verifiable, says RAND's Danielle Tarraf.
Helping Victims After Human-Made Disasters
When human activity causes a disaster, the potentially responsible parties sometimes step up to support victims early on. This assistance can be as simple as handing out vouchers for hotels or meals. Or, it may be a sophisticated program that processes victims' claims for medical expenses, property damage, and other losses. A new RAND report explores real-world examples, the perceived benefits and drawbacks, and what policymakers could do to encourage early support. Notably, the authors find that early assistance can help fill gaps in disaster response that aren't always addressed by NGOs or first responders.
It's Time to Prepare for Future Alzheimer's Treatments
Past RAND research has found that a number of countries, including the United States and some European nations, aren't prepared to meet demand for a future Alzheimer's treatment. A new RAND report finds that Australia's health system will face similar challenges if such a breakthrough occurs. The most pressing problem would be a lack of medical specialists to evaluate and diagnose patients who may have early signs of Alzheimer's.
Listen to the Recap
Get Weekly Updates from RAND
If you enjoyed this weekly recap, consider subscribing to Policy Currents, our newsletter and podcast.