How has interest in military careers evolved over time and by geographic location? And what are potential recruits' biggest concerns related to the Army? Anonymous data from Internet searches can provide insight.
RAND-Lex is a computer program that can scan millions of lines of text and identify what people are talking about, how they fit into communities, and how they see the world. The program has shed light on how terrorists communicate, how the American public thinks about health, and more.
Conversations about unconscious bias in artificial intelligence often focus on algorithms unintentionally causing disproportionate harm to entire swaths of society. But the problem could run much deeper. Society should be on guard for the possibility that nefarious actors could deliberately introduce bias into AI systems.
Artificial intelligence (AI) systems are often only as intelligent and fair as the data used to train them. To enable AI that frees humans from bias instead of reinforcing it, experts and regulators must think more deeply not only about what AI can do, but what it should do—and then teach it how.
Osonde Osoba has been exploring AI since age 15. He says it's less about the intelligence and more about being able to capture how humans think. He is developing AI to improve planning and is also studying fairness in algorithmic decisionmaking in insurance pricing and criminal justice.
The Criminal Justice Technology Forecasting Group discussed near-term effects that major societal trends could have on criminal justice and identified potential responses. This brief summarizes a report of the results of the group's meetings.
The Criminal Justice Technology Forecasting Group deliberated on the effects that major societal trends could have on criminal justice in the near future and identified potential responses. This report captures the results of the group's meetings.
RAND experts held a wide-ranging discussion about artificial intelligence and privacy. They raised questions about fairness and equity regarding privacy and data use, while also highlighting positive trends and developments across the evolving AI-privacy landscape.
The greatest opportunities to improve health happen pretty much everywhere but the doctor's office. Collaborative programming that merges strategies from housing, education, or labor could make a big difference.
As artificial intelligence (AI) becomes more prevalent in the domains of security and employment, what are the policy implications? What effects might AI have on cybersecurity, criminal and civil justice, and labor market patterns?
This issue highlights recent RAND research on post-9/11 military caregivers; RAND-Lex, a computer program built at RAND that can analyze huge data sets of text; and the implications of climate change on Arctic cooperation.
Data collection, and our reliance on it, have evolved rapidly. The resulting algorithms have proved invaluable for organizing, evaluating, and using information. But how do individuals' rights come into play when data about their lives are compiled to create algorithms, and the resulting tools are used to judge them?
An offshoot of open science, "citizen science" refers to the increased involvement of amateur scientists in the various stages of the scientific research process. This publication explores the definitions, opportunities, and challenges of citizen science.
The potential of health data to improve health R&D, innovation, healthcare delivery, and health systems is substantial. Realising the benefits of health data will require a supportive health data ecosystem and addressing associated challenges.
Machine learning algorithms and artificial intelligence (AI) influence many aspects of life today. These systems are not immune to error or bias because they are designed, built, and trained by humans. While AI holds great promise, its use introduces a new level of risk and complexity in policy.
Governments are amassing a wealth of data on citizens, a trend that will continue as technology advances. But with no reliable way to ensure that the data is accurate, risks abound. In the criminal justice system, for example, poor quality data could affect individual freedoms and employability.
Data and computer models are becoming more and more important for making policy decisions on everything from prison sentences to tax bills. But citizens should be able to “check the math” on decisions that affect them.
Personal devices like fitness trackers and smartphones are likely to be used increasingly in criminal investigations. Such technology offers new tools to law enforcement, but raises unique issues regarding constitutional rights such as self-incrimination.