From autonomous vehicles to facial recognition, Artificial Intelligence (AI) is increasingly recognised as a crucial tool for improvement and innovation in public policy and services. A recent RAND Europe report examined these issues in relation to the European Border and Coast Guard, the collective of border and coast guard agencies ensuring the integrated management of Europe's borders.
The report identified a wide range of current and potential uses of AI by border security agencies, including automated border control gates, AI-enabled border surveillance and machine-learning optimisation.
As AI technologies continue to rapidly evolve, the EU faces myriad questions about how AI development and uptake by EU organisations can be fostered while remaining aligned with ethics and human rights safeguards. In this context, the EU has recently introduced plans to 'turn Europe into the global hub for trustworthy AI', making the EU 'the first influential regulator to craft a big law on AI'.
Through this series of legal rules and regulations, the EU hopes to encourage the innovation and use of AI with a strong emphasis on guaranteeing its safe and ethical use. This approach recognises that using AI effectively requires an understanding not only of the opportunities associated with it in key sectors and public policy areas, but also of the challenges and constraints that EU organisations face when considering how to use the technology.
The breadth of existing and potential future applications shows that border security agencies can draw on AI to support a wide range of activities, from operations carried out by border security agents in the field to supporting the analysis of maritime and geospatial data.
While the opportunities to harness AI as a tool for improving the effectiveness of border security functions are many, so are the barriers to its adoption.
First, issues such as a lack of transparency in AI algorithms and the potential for bias have constrained AI use. These issues not only limit how effectively an AI system performs, but also deepen uncertainty among potential end users and the wider public about how reliable AI technologies are. The EU's new AI regulation acknowledges these challenges by emphasising the importance of developing trust in AI, and outlines the policy changes and investment needed to strengthen 'the development of human-centric, sustainable, secure, inclusive, and trustworthy AI'.
The emphasis on trustworthy and inclusive AI could play a particularly important role in the border security context, as the use of AI-enabled technologies in border security has been criticised for potentially undermining human rights and privacy safeguards. Comprehensive risk assessments, which could be incentivised under the EU's new guidelines, would help ensure that technology developments comply with robust ethical and human rights safeguards.
Second, beyond these technological barriers, the uptake of AI in European border security may also be constrained by how well equipped individual organisations are to harness emerging technologies such as AI. A lack of technical expertise and gaps in innovation-relevant skills are common challenges for public sector organisations, limiting their ability to recognise and fully exploit the opportunities of AI technologies.
Here too, the new EU AI framework promises to foster change by nurturing talent and skills for AI development. A number of actions are also available to individual organisations such as border security agencies to improve their expertise and skills base, ranging from basic awareness training that builds a robust baseline understanding of AI technologies to targeted recruitment campaigns to attract new talent.
Third, while there has been significant interest in AI applications in border security and related contexts such as law enforcement and national security, gaps in the evidence base remain prevalent. Notably, evaluations of AI technologies are often carried out in controlled environments, which does not allow an organisation to assess how technologies would perform in the field. While these gaps currently limit understanding of the impact AI technologies can actually have, awareness of them can also provide EU agencies and organisations such as Frontex, the central European border and coast guard agency, with an opportunity to provide thought leadership and direct research efforts into areas of key interest, such as trustworthy and human rights-centred AI development.
Addressing these three barriers could require not only action from individual border security agencies, but also engagement with the wider AI innovation ecosystem, which includes technology developers, other EU agencies, policymakers, and universities.
The EU's new regulatory framework for AI could provide a unique opportunity to foster this ecosystem, but individual organisations can also help make it more effective at enabling the use of AI across all sectors of the European economy.
This could include encouraging the exchange of information and knowledge between different end-user communities, directly facilitating coordination between these communities, or incentivising innovation, for example through technology demonstrations. These contributions, together with a collective ambition to foster AI across the European innovation ecosystem, could be key to helping the EU achieve its newly defined goal of becoming 'the global hub for trustworthy AI'.
Linda Slapakova is an analyst with RAND Europe and its Centre for Futures and Foresight Studies, which explores, among other activities, the potential future impacts of emerging technologies.