Artificial intelligence will be deployed to support caseworkers to make swifter decisions on asylum claims
The Home Office announced yesterday that artificial intelligence (AI) will be rolled out across asylum processing to help speed up decision making.
A press release stated briefly: "AI will be deployed to support caseworkers to make swifter decisions on asylum claims – preventing asylum seekers from being stuck in limbo at the taxpayers' expense, delivering quicker answers to those in need and removal of those with no right to be here. Caseworkers will use AI to speed up access to the relevant country advice, and summarise lengthy interview transcripts, streamlining asylum processing without compromising on the quality of human decisions. The tech could save decision makers up to an hour per case."
In addition, the press release stated that further amendments are planned to the Border Security, Asylum and Immigration Bill that would exclude those convicted of sex offences from asylum protections in the UK. The Home Office also proposes a 24-week target for the First-tier Tribunal to decide asylum appeals brought by people receiving accommodation support or by non-detained foreign offenders, where reasonably practicable. Currently there is no set timeframe, and appeals take nearly 50 weeks on average. The change aims to reduce delays, cut accommodation costs and accelerate deportations.
In a research note published today, the Home Office evaluates the use of AI in the asylum decision-making process following two small-scale pilots conducted between May and December 2024. The Home Office piloted two AI tools: the Asylum Case Summarisation (ACS) tool, which condenses asylum interview transcripts, and the Asylum Policy Search (APS) tool, an AI assistant that retrieves and summarises country policy information.
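For readers unfamiliar with how such tools are typically built, the sketch below shows, in simplified form, the kind of retrieval step a policy-search assistant like APS might perform: ranking short country-guidance extracts against a caseworker's query and returning the best matches for human review. It is purely illustrative; the corpus, scoring method, and all names are assumptions, not details of the Home Office's actual system.

```python
# Hypothetical sketch of a policy-retrieval step of the kind APS is described as
# performing. The corpus, scoring and function names are illustrative assumptions,
# not the Home Office implementation.
import math
from collections import Counter

POLICY_EXTRACTS = {
    "CPIN-Example-1": "Guidance on risk to political activists, including treatment on return.",
    "CPIN-Example-2": "Guidance on availability of internal relocation and state protection.",
    "CPIN-Example-3": "Guidance on documentation, identity checks and exit procedures.",
}

def tokenise(text: str) -> list[str]:
    return [word.strip(".,").lower() for word in text.split()]

def rank_extracts(query: str, corpus: dict[str, str]) -> list[tuple[str, float]]:
    """Rank policy extracts by a simple TF-IDF overlap with the caseworker's query."""
    docs = {doc_id: Counter(tokenise(text)) for doc_id, text in corpus.items()}
    n_docs = len(docs)
    query_terms = tokenise(query)
    scores = []
    for doc_id, counts in docs.items():
        score = 0.0
        for term in query_terms:
            if counts[term] == 0:
                continue
            doc_freq = sum(1 for c in docs.values() if c[term] > 0)
            score += counts[term] * math.log((1 + n_docs) / (1 + doc_freq))
        scores.append((doc_id, score))
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    # The tool only surfaces candidate guidance; the caseworker still reads the source material.
    for doc_id, score in rank_extracts("risk on return for political activists", POLICY_EXTRACTS):
        print(f"{doc_id}: {score:.2f}")
```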
According to the Home Office, the evaluation revealed promising results. The ACS tool saved an average of 23 minutes per case by streamlining the review of interview transcripts, while the APS tool reduced the time spent searching for policy information by 37 minutes per case, a combined saving of roughly an hour per case, in line with the press release's claim. Importantly, the Home Office says neither tool appeared to affect the quality of decisions, as confirmed by a review of case outcomes. However, some limitations were noted, such as the ACS tool's lack of source references and occasional inaccuracies in summaries.
User feedback highlighted the tools' potential benefits and areas for improvement. While most users appreciated the time-saving features, some expressed concerns about the tools' accuracy and functionality. For instance, the APS tool was praised for its ability to provide relevant information but was seen as needing further development to integrate additional sources.
The Home Office emphasises that the AI tools are intended to assist, not replace, human decision-makers. Both tools were designed to ensure decision-makers retained autonomy, avoiding over-reliance on AI. For example, the research note explains: "In line with the 'human in the loop' principle, ACS has been designed so that decision-makers cannot use the tool by itself to make a decision, instead it acts as an aid in the usual decision-making process."
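One way to picture that constraint is in how the tool's output is structured. The sketch below, an illustrative assumption rather than the actual ACS design, shows a data model in which the AI output carries no decision field at all, so a case record cannot be completed without a named caseworker's own outcome and reasons.

```python
# Illustrative "human in the loop" constraint: the AI output has no decision field,
# and a case record is rejected unless a named caseworker supplies their own outcome
# and reasons. Class and field names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class AiSummary:
    transcript_id: str
    summary_text: str      # reading aid only; there is no recommendation or outcome field

@dataclass
class CaseDecision:
    case_id: str
    caseworker: str
    outcome: str           # e.g. "grant" or "refuse", entered by the human
    reasons: str           # the caseworker's own reasoning, not the summary text

def record_decision(summary: AiSummary, decision: CaseDecision) -> dict:
    """Accept a decision only if it was made and reasoned by a person."""
    if not decision.caseworker.strip() or not decision.reasons.strip():
        raise ValueError("A decision must name the caseworker and give their own reasons.")
    # The summary is stored alongside the decision as reading context only;
    # nothing in it is copied into the outcome or reasons fields.
    return {"decision": decision, "context_summary": summary.transcript_id}
```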
The research note concludes that the AI tools could significantly enhance productivity in asylum processing, but that larger-scale evaluations are necessary before full implementation. Its recommendations include addressing the tools' limitations, monitoring performance continuously during rollout, and assessing equality impacts across diverse case types.
The Helen Bamber Foundation said last year that while AI offers opportunities to speed up processing in the asylum system, its use raises serious concerns about potential harm to already vulnerable groups.
A 2024 editorial paper by the Helen Bamber Foundation and several academic collaborators, published in Medicine, Science and the Law—the official journal of the British Academy for Forensic Sciences—warns that AI systems risk perpetuating or amplifying existing biases in the asylum process, particularly when algorithms are trained on data shaped by discriminatory decisions. The corresponding author was Professor Cornelius Katona, Medical Director at the Helen Bamber Foundation.
One major issue identified is the "black box" nature of many AI models, where the logic behind outcomes is not transparent or explainable. This lack of accountability is particularly problematic in asylum cases, which involve highly sensitive, complex, and often traumatic personal histories. AI tools may misinterpret or oversimplify such narratives, especially when used for language translation or credibility assessments. For example, semantic nuances in testimony or cultural context may be lost in automated speech-to-text systems or algorithmic analysis, potentially leading to wrongful credibility findings.
The editorial paper also highlighted examples from across Europe where AI tools are already being piloted or used in asylum contexts. These include speech-to-text transcription tools in Italy, name transliteration and dialect recognition software in Germany, and case-matching systems in the Netherlands that flag similar asylum claims for consistency. While some of these tools aim to support human decision-makers, the authors warn that excessive reliance on them could distort outcomes. Tools like the Casematcher system used in the Netherlands, which compares new claims to existing case patterns, risk discrediting applicants whose stories appear either too similar to or too different from past cases, creating a 'credibility paradox': "If the narrative is too similar to past claims, it may be deemed fabricated. Conversely, if the narrative is too dissimilar, it may be deemed unlikely."
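The paradox is easy to see if one imagines how a naive similarity check might be thresholded. In the illustrative sketch below, a claim is flagged both when it resembles past claims too closely and when it resembles them too little; the similarity measure, thresholds, and names are assumptions for illustration, not the Casematcher system's actual method.

```python
# Illustration of the "credibility paradox": under a naive thresholding rule,
# a narrative can be flagged for being too similar to past claims or for being
# too different from them. Measure and thresholds are assumptions, not the
# Casematcher system's actual method.
def jaccard_similarity(a: str, b: str) -> float:
    """Word-overlap similarity between two narratives, in [0, 1]."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    return len(set_a & set_b) / len(set_a | set_b) if set_a | set_b else 0.0

def flag_claim(new_claim: str, past_claims: list[str],
               too_similar: float = 0.8, too_different: float = 0.1) -> str:
    """Return a flag based on the closest match among past claims."""
    best = max((jaccard_similarity(new_claim, past) for past in past_claims), default=0.0)
    if best >= too_similar:
        return "flagged: resembles existing claims (risk of being deemed fabricated)"
    if best <= too_different:
        return "flagged: unlike existing claims (risk of being deemed unlikely)"
    return "no flag"

if __name__ == "__main__":
    past = ["fled after attending opposition protests and being detained",
            "left the country after threats linked to political activity"]
    print(flag_claim("fled after attending opposition protests and being detained", past))
    print(flag_claim("entirely unrelated account with no shared wording at all", past))
```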
Overall, the editorial argued that while AI may assist with administrative tasks, its role in decision-making must be carefully controlled. Asylum decisions should remain grounded in nuanced human judgment and governed by strong ethical frameworks. The editorial called for transparency about how and when AI is used, meaningful oversight, and clear communication with applicants whose data may be processed by such systems. Until there is more evidence about the safety and fairness of these technologies, the authors conclude that the risks of AI outweigh the benefits — particularly as the current uses of AI by immigration and asylum authorities are rarely designed to benefit migrants and asylum seekers themselves.