How has interest in military careers evolved over time and by geographic location? And what are potential recruits' biggest concerns related to the Army? Anonymous data from Internet searches can provide insight.
RAND-Lex is a computer program that can scan millions of lines of text and identify what people are talking about, how they fit into communities, and how they see the world. The program has shed light on how terrorists communicate, how the American public thinks about health, and more.
Pharmaceutical companies reuse health data for a variety of purposes across the R&D pathway. RAND Europe suggests seven ways that might help create a sustainable ecosystem in which health data is reused effectively.
DoD and the U.S. military services have had some success with data-enabled outreach and recruiting. But they could benefit from expanding their adoption of private-sector approaches. For example, recruiters could better target prospects through the use of personally identifiable information and third-party data.
Trust may be important in shaping public attitudes toward genetics and intentions to participate in genomics research and big data initiatives. This study examined trust in data sharing among the general public in the USA, Canada, the UK, and Australia.
Living in an information society opens unprecedented opportunities for hostile rivals to cause disruption, delay, inefficiency, and active harm. Social manipulation techniques are evolving beyond disinformation and cyberattacks on infrastructure sites. How can democracies protect themselves?
Cortney Weinbaum studies topics related to intelligence and cyber policy as a senior management scientist at RAND. In this interview, she discusses challenges facing the intelligence community, the risks of using AI as a solution, and ethics in scientific research.
This research brief addresses congressional concerns about the use of data analysis, measurement, and other evaluation-related methods in U.S. Department of Defense acquisition programs and decisionmaking.
Congress asked about acquisition data analytics in the Department of Defense. This report identifies and measures those capabilities and recent progress. Barriers to improvement include a culture that resists data sharing because of security and burden concerns.
An analysis of how ethics are created, monitored, and enforced finds which ethical principles are common across scientific disciplines, how these ethics might vary geographically, and how emerging topics are shaping future ethics.
Facebook's Mark Zuckerberg has called for new internet regulation starting in four areas: harmful content, election integrity, privacy, and data portability. But why stop there? His proposal could be expanded to include much more: security-by-design, net worthiness, and updated internet business models.
As technology and the ability to gather ever-growing amounts of data move further into the realms of biology and human performance, communication and transparency become increasingly important. Experts should consider whether they are using the words, examples, and models that connect with a broad audience most effectively.
Instead of worrying about an artificial intelligence “ethics gap,” U.S. policymakers and the military community could embrace a leadership role in AI ethics. This may help ensure that the AI arms race doesn't become a race to the bottom.
As tech-based systems have become all but indispensable, many institutions might assume user data will be reliable, meaningful and, most of all, plentiful. But what if this data became unreliable, meaningless, or even scarce?
Video technology is changing the ways that law enforcement works and interacts with the public. In this report, the authors explore some of the challenges this emerging area poses and the innovation needs it creates.
Conversations about unconscious bias in artificial intelligence often focus on algorithms unintentionally causing disproportionate harm to entire swaths of society. But the problem could run much deeper. Society should be on guard for the possibility that nefarious actors could deliberately introduce bias into AI systems.
Artificial intelligence (AI) systems are often only as intelligent and fair as the data used to train them. To enable AI that frees humans from bias instead of reinforcing it, experts and regulators must think more deeply not only about what AI can do, but what it should do—and then teach it how.
Osonde Osoba has been exploring AI since age 15. He says it's less about the intelligence and more about being able to capture how humans think. He is developing AI to improve planning and is also studying fairness in algorithmic decisionmaking in insurance pricing and criminal justice.
The Criminal Justice Technology Forecasting Group discussed near-term effects that major societal trends could have on criminal justice and identified potential responses. This brief summarizes a report of the results of the group's meetings.
The Criminal Justice Technology Forecasting Group deliberated on the effects that major societal trends could have on criminal justice in the near future and identified potential responses. This report captures the results of the group's meetings.
RAND experts held a wide-ranging discussion about artificial intelligence and privacy. They raised questions about fairness and equity regarding privacy and data use, while also highlighting positive trends and developments across the evolving AI-privacy landscape.