Policymakers might consider developing appropriate policy frameworks for emerging brain- and body-enhancement technologies to ensure that innovations harnessed for societal, economic, or military benefits do not create new vulnerabilities and that governments adequately defend and manage against potential attacks. The technology is quickly moving forward. Policy may need to play catch-up.
Any device can be hacked, including one inside the human body. We need to think through the privacy and security implications of devices that live with us. But we should also consider the life-changing, life-saving potential of technologies that know us inside and out.
Mobile phone surveillance can augment public health interventions to manage COVID-19 and might help countries prepare for the next outbreak. But these programs collect sensitive health and behavior data. That raises significant risks to personal privacy and civil liberties.
This weekly recap focuses on the future of U.S.-China competition, privacy concerns surrounding mobile tools used to track COVID-19, how telemedicine can help patients access specialized care, and more.
Douglas Yeung, a social psychologist at RAND, discusses how any technology reflects the values, norms, and biases of its creators. Bias in artificial intelligence could have unintended consequences. He also warns that cyber attackers could deliberately introduce bias into AI systems.
As technology and the ability to gather ever-growing amounts of data move further into the realms of biology and human performance, communication and transparency become increasingly important. Experts should consider whether they are using the words, examples, and models that connect with a broad audience most effectively.
Conversations about unconscious bias in artificial intelligence often focus on algorithms unintentionally causing disproportionate harm to entire swaths of society. But the problem could run much deeper. Society should be on guard for the possibility that nefarious actors could deliberately introduce bias into AI systems.
High-tech health care solutions are part of an emerging sector of medical technologies that monitor personal health data by essentially connecting your body to the Internet. As smart devices in health care evolve, the line between human and machine is blurring, creating new concerns about consumer safety and privacy rights.
Electronic health records have helped streamline record keeping, but providers aren't always able to reliably pull together records for the same patient from different hospitals, clinics, and doctor's offices. The growing use of smartphones offers a promising opportunity to improve record matching.
Artificial intelligence (AI) systems are often only as intelligent and fair as the data used to train them. To enable AI that frees humans from bias instead of reinforcing it, experts and regulators must think more deeply not only about what AI can do, but what it should do—and then teach it how.
In the UK, the National Health Service (NHS) was one of the organizations most severely affected by the WannaCry ransomware. The NHS and other public sector organizations need to improve their cybersecurity processes quickly, before a more severe cyber attack takes place.
The internet is being used for harmful, unethical, and illegal purposes. Examples include incitement and recruitment by terrorists, cyber bullying, and malicious fake news. Americans say they are unhappy with the tone of the online discourse, but are reluctant to consider potential remedies.
Absolute data breach prevention is not possible, so it is important to know what people want when a breach occurs. Consumers and corporations alike should treat this risk as a matter of “when,” not “if,” and prepare for it.
The general public has a more nuanced preference for the privacy of electronic health records than previously thought. Survey respondents said that they would not be averse to individuals involved in the health and rescue professions having access to their basic health information.
The policy debate about unique patient identifier numbers should determine the best approach for reconciling two goals: optimizing the privacy and security of health information and making record matching as close to perfect as is practical.
With numerous data breaches and emerging software vulnerabilities, 2014 was the year the hack went viral. But realizing a few New Year's resolutions in 2015 could help defenders make strides in protection, tools, and techniques to gain the edge over cyber attackers in years to come.
The U.S. should make two key reforms. First, the over-designation of material as classified makes it harder to protect the few real secrets; this must change. Second, the FISA court must become a gatekeeper for NSA access to communications data.
For almost 15 years, Europe has led the world in protecting personal data. At the EU level, it has done this through the data-protection directive adopted in 1995. But surveys such as one carried out by Eurobarometer last year illustrate that Europeans now feel insufficiently protected, write Lorenzo Valeri and Neil Robinson.
As it considers ways to improve the efficiency and quality of U.S. health care, one issue that a new Congress should reconsider is the longstanding roadblock that has stalled efforts to create a system of unique patient identification numbers for every person in the United States, writes Richard Hillestad.