AI translation outpaces governance in defence work
Translation professionals have reported a widening gap between the proliferation of AI translation tools and the governance required to maintain consistent multilingual communications in defence-adjacent work.
Survey results shared by AI translation provider Lilt highlight a strong demand for accuracy across languages, alongside growing concerns regarding inconsistent terminology and fragmented vendor arrangements. The company stated that these findings are particularly relevant to high-stakes settings where organisations exchange mission updates, threat assessments, directives, procurement documents, and crisis communications across various agencies and partner groups.
Survey findings
Lilt said its nationwide survey covered more than 400 professionals responsible for translation and localisation. It reported that 96% of respondents said translation quality is mission-critical, yet only 57% said their organisation maintains a consistent voice across languages.
The survey also indicated that AI translation has moved into routine use. Lilt said 79% of respondents described AI translation as part of a broader organisational AI transformation. It said IT teams lead that shift in many organisations.
Lilt reported that 81% of respondents plan to increase their use of AI translation in the next two years. It said 49% already use large language models such as ChatGPT, or similar tools, for rapid translation.
Vendor sprawl
The findings also highlighted how translation supply chains can affect quality and accountability. Lilt stated that 70% of organisations use multiple translation vendors, with 34% of respondents linking quality issues to 'vendor sprawl'. A further 31% reported communication breakdowns across providers.
Defence operations often involve coalitions and multi-agency work, including allies, humanitarian partners, intelligence sources, and civilian populations. The survey data suggests that inconsistent language workflows can add friction in such environments, particularly where different teams rely on disparate vendors and tools.
Human oversight
Despite increased use of AI tools, respondents indicated that organisations still expect people to review output. Lilt said 79% plan to keep a human in the loop for AI translation, while 52% rely on in-house linguists for post-editing.
Lilt reported that only 1.8% of respondents ship raw AI translations without review. The data points to a workflow pattern where organisations use machine output for speed, followed by professional checks for nuance, context and terminology.
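The pattern described above, machine output first and human checks after, can be sketched as a simple routing rule. Everything in the sketch below is a hypothetical illustration for this article, not Lilt's actual workflow: the confidence field, the threshold, and the reviewer tiers are assumptions.

```python
from dataclasses import dataclass

@dataclass
class MTSegment:
    """A machine-translated segment (hypothetical structure)."""
    source: str
    mt_output: str
    confidence: float  # assumed model confidence score in [0.0, 1.0]

def route(segment: MTSegment, threshold: float = 0.9) -> str:
    """Route machine output to a human step. High-confidence segments
    still get a light human review; low-confidence segments go to full
    post-editing. Nothing ships without review, mirroring the survey
    finding that only 1.8% of respondents ship raw AI translations."""
    if segment.confidence >= threshold:
        return "light-review"
    return "post-edit"

print(route(MTSegment("Send the directive.", "Envoyez la directive.", 0.95)))
print(route(MTSegment("Crisis update follows.", "Mise a jour de crise a suivre.", 0.60)))
```

The point of the sketch is only that the human step is unconditional; the confidence score merely decides how heavy that step is.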
Governance focus
Lilt framed the survey around what it called "operational language readiness". The company described this as a governance and process issue that sits alongside technology adoption.
It outlined four standardisation priorities based on its interpretation of the results: terminology control through shared glossaries across agencies and allies; secure translation workflows, with a warning against consumer AI workarounds; human-verified quality assurance frameworks; and vendor consolidation.
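The shared-glossary priority can be illustrated with a minimal sketch. The glossary entries, function name, and example sentences below are invented for illustration and are not drawn from Lilt or any real defence glossary; the sketch simply flags cases where a source term appears in the original but its approved target-language equivalent is missing from the translation.

```python
# Hypothetical sketch: checking a translation against a shared glossary.
# All terms and examples are illustrative assumptions.

GLOSSARY = {
    # source term (English) -> approved target term (German)
    "threat assessment": "Bedrohungsanalyse",
    "rules of engagement": "Einsatzregeln",
}

def glossary_violations(source: str, translation: str) -> list:
    """Return (source term, expected target term) pairs where the source
    term appears in the original but the approved target term does not
    appear in the translation."""
    violations = []
    for src_term, tgt_term in GLOSSARY.items():
        if src_term.lower() in source.lower() and tgt_term.lower() not in translation.lower():
            violations.append((src_term, tgt_term))
    return violations

source = "Share the latest threat assessment with coalition partners."
# Uses "Gefahrenbewertung" instead of the approved "Bedrohungsanalyse".
translation = "Teilen Sie die neueste Gefahrenbewertung mit den Koalitionspartnern."
print(glossary_violations(source, translation))
```

A check of this kind only catches missing approved terms; it says nothing about overall translation quality, which is why the survey respondents still pair it with human post-editing.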
The company argued that language systems form an important part of operational communications. It said this remains under-discussed compared with debate about autonomous systems and other hardware-focused AI uses in defence.
"Most AI-in-defense coverage focuses on hardware and autonomous systems. A new data point suggests we may be overlooking a quieter, but more immediate risk: multilingual miscommunication," said Wahid Lodin, Founder, Loopr.
Lodin also pointed to a mismatch between the importance respondents assign to translation and their confidence in cross-language consistency.
"A 2026 nationwide survey of 400+ professionals responsible for translation found that 96% say translation quality is mission-critical, yet only 57% believe their organizations maintain consistency across languages, despite AI already being embedded in defense-adjacent workflows," said Lodin.
The survey data also reflects the extent to which large language models have entered day-to-day language work. "49% are already using large language models like ChatGPT or similar tools for rapid translation," said Lodin.
Lilt said more organisations will increase their reliance on AI translation tools over the next two years, while continuing to use human review as a standard step in the process.