RAND mathematician Mary Lee describes the wide variety of personal data collected by smart devices and applications, such as smartwatches, brain implants, and period trackers.
Facial recognition technology is developing rapidly and is increasingly being used in policing. What do policymakers need to understand in order to minimize the risks it poses, while also maximizing its benefits?
Instead of worrying about an artificial intelligence “ethics gap,” U.S. policymakers and the military community could embrace a leadership role in AI ethics. This may help ensure that the AI arms race doesn't become a race to the bottom.
This weekly recap focuses on consumer reactions to data breaches, understanding teen marijuana use after legalization, why the United States can't rely on Turkey to defeat ISIS, and more.
As tech-based systems have become all but indispensable, many institutions might assume user data will be reliable, meaningful, and, most of all, plentiful. But what if this data became unreliable, meaningless, or even scarce?
Video technology is changing the ways that law enforcement works and interacts with the public. In this report, the authors explore some of the challenges these technologies pose and the innovation needs in this emerging area.
Data breaches and cyberattacks cross geopolitical boundaries, targeting individuals, corporations, and governments. Creating a global body with a narrow focus on investigating and assigning responsibility for cyberattacks could be the first step toward creating a digital world with accountability.
Cybersecurity has become a team sport. But all participants on the field are playing without clear rules, without a team approach, and without knowing when to pass the ball or to whom.
Conversations about unconscious bias in artificial intelligence often focus on algorithms unintentionally causing disproportionate harm to entire swaths of society. But the problem could run much deeper. Society should be on guard for the possibility that nefarious actors could deliberately introduce bias into AI systems.
High-tech health care solutions are part of an emerging sector of medical technologies that monitor personal health data by essentially connecting your body to the Internet. As smart devices in health care evolve, the line between human and machine is blurring, creating new concerns about consumer safety and privacy rights.
Electronic health records have helped streamline record keeping, but providers aren't always able to reliably pull together records for the same patient from different hospitals, clinics, and doctors' offices. The growing use of smartphones offers a promising opportunity to improve record matching.
In a large data breach, there could be a real risk to victims' financial or personal security. Though responsible organizations should do everything in their power to ensure data is protected in the first place, they also should prepare a plan to ensure a prompt response for victims.
The rise of education technology brings increased opportunity for the collection and application of data. This presents challenges, including data infrastructure issues that could limit the usefulness of data, and privacy concerns.
Mobile phones and smartphone apps offer a promising approach to ensuring that an individual's medical records, when shared between different health care providers, are matched correctly.
When health providers exchange medical records, the success rate can be as low as 50 percent. The ubiquity of mobile phones offers a promising opportunity to create a patient-empowered system to confirm identities that would allow hospitals and other providers to match records more accurately.
Artificial intelligence (AI) systems are often only as intelligent and fair as the data used to train them. To enable AI that frees humans from bias instead of reinforcing it, experts and regulators must think more deeply not only about what AI can do, but what it should do—and then teach it how.
This document is a proof-of-concept operational toolbox designed to facilitate the development of national-level cybersecurity capacity building programmes and of holistic policy and investment strategies to tackle challenges in the cyber domain.
Social media and social network analysis could help law enforcement monitor for safety threats, identify those at high risk for involvement in violence, and investigate crimes and crime networks. But computer security, privacy, and civil rights protections must be in place before using these tools.
Should consumers be in charge of self-regulating the data they share and how companies use it? What policy opportunities could Congress consider to better protect consumer data? In this RAND Congressional briefing, Rebecca Balebako and John Davis discuss the benefits and risks of data sharing, opportunities for protecting privacy at both the personal and industry level, and current U.S. laws and how they compare to European laws.
Osonde Osoba has been exploring AI since age 15. He says it's less about the intelligence and more about being able to capture how humans think. He is developing AI to improve planning and is also studying fairness in algorithmic decisionmaking in insurance pricing and criminal justice.
Researchers discuss the challenge of accessing data in remote data centers, summarize the discussion of an expert panel, and provide a list of needs identified and prioritized by the panel to inform concerned communities and stakeholders.
RAND experts held a wide-ranging discussion about artificial intelligence and privacy. They raised questions about fairness and equity regarding privacy and data use, while also highlighting positive trends and developments across the evolving AI-privacy landscape.