Quantum computers are expected to be powerful enough to break the cryptography that currently protects all digital communications. But this scenario is preventable if policymakers take action now to minimize the harm that quantum computers may cause.
The Catholic Church joined with technology companies in February to release the “Rome Call for AI Ethics,” which it hopes will lend meaning, if not governance frameworks, to the use of artificial intelligence. Ensuring that “everyone can benefit” from AI by making its discoveries widely available will be important. This is perhaps where the church can be most effective.
As social media increasingly becomes the main outlet through which people acquire news and opinion, concerns have grown about the effect of algorithm-driven services on the spread of misleading information. But the issue lies not merely with how social platforms use algorithms to deliver content.
Facebook's Mark Zuckerberg has called for new internet regulation, starting in four areas: harmful content, election integrity, privacy, and data portability. But why stop there? His proposal could be expanded to include much more: security by design, net worthiness, and updated internet business models.
Douglas Yeung, a social psychologist at RAND, discusses how any technology reflects the values, norms, and biases of its creators. Bias in artificial intelligence could have unintended consequences. He also warns that cyber attackers could deliberately introduce bias into AI systems.
As technology and the ability to gather ever-growing amounts of data move further into the realms of biology and human performance, communication and transparency become increasingly important. Experts should consider whether they are using the words, examples, and models that most effectively connect with a broad audience.
Instead of worrying about an artificial intelligence “ethics gap,” U.S. policymakers and the military community could embrace a leadership role in AI ethics. This may help ensure that the AI arms race doesn't become a race to the bottom.
As tech-based systems have become all but indispensable, many institutions might assume user data will be reliable, meaningful, and, most of all, plentiful. But what if this data became unreliable, meaningless, or even scarce?
Data breaches and cyberattacks cross geopolitical boundaries, targeting individuals, corporations, and governments. Creating a global body with a narrow focus on investigating and assigning responsibility for cyberattacks could be the first step toward creating a digital world with accountability.
Conversations about unconscious bias in artificial intelligence often focus on algorithms unintentionally causing disproportionate harm to entire swaths of society. But the problem could run much deeper. Society should be on guard for the possibility that nefarious actors could deliberately introduce bias into AI systems.