What is linguistic quality assurance and how does it work in localization


In pretty much any industry these days, the notion of quality seems to crop up all the time. Sometimes it feels like it is used merely as a buzzword, but more often than not quality is a real concern, both for the seller of a product or service and for the consumer. Quality is just as omnipresent in the language services industry. Admittedly, quality in translation and localization has rather unique characteristics compared to other services, but ultimately it is the expected goal of any project. In today’s article we will examine the established practices for monitoring and achieving linguistic quality in translation and localization, and prepare the ground for a more thorough examination of these practices in the articles that will follow in this series.



Quality assessment and quality assurance: siblings, but not identical twins


Anyone who has ever been asked to ‘proofread’ a job, only to receive the source text as well with the expectation that they would revise its translation, knows that task definitions are important. Despite the fact that industry standards have been around for quite some time, in practice terms such as ‘quality assessment’ and ‘quality assurance’ (and sometimes even ‘quality evaluation’) are often used interchangeably. This may be due to a misunderstanding of what each process involves but, whatever the reason, the practice leads to confusion and can create misleading expectations. So, let us take this opportunity to clarify:
– [Translation] Quality Assessment (TQA) is the process of evaluating the overall quality of a completed translation using a model with pre-determined values that can be assigned to a number of parameters for scoring purposes (see the sketch below).
– Quality Assurance “[QA] refers to systems put in place to pre-empt and avoid errors or quality problems at any stage of a translation job”. (Drugan, 2013: 76)
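
To make the distinction concrete, here is what a TQA scoring model might look like in code. This is a minimal sketch: the error severities, penalty weights and passmark are illustrative assumptions (loosely inspired by weighted-scorecard models), not the values of any particular industry standard.

```python
# Illustrative TQA scorecard: severities, weights and passmark are assumptions.
PENALTIES = {"minor": 1, "major": 5, "critical": 10}  # penalty points per error

def tqa_score(errors, word_count, passmark=99.0):
    """Score a completed translation from a list of (category, severity) errors.

    The penalty is normalised per 100 words, so jobs of different sizes
    can be measured against the same passmark.
    """
    penalty = sum(PENALTIES[severity] for _category, severity in errors)
    score = max(0.0, 100.0 - (penalty / word_count) * 100.0)
    return score, score >= passmark

errors = [("terminology", "major"), ("punctuation", "minor")]
score, passed = tqa_score(errors, word_count=1200)
print(f"score: {score:.2f}/100, pass: {passed}")  # score: 99.50/100, pass: True
```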

Quality is an ambiguous concept in itself and making ‘objective’ evaluations is a very difficult task. Even the most rigorous assessment model requires subjective input from the evaluator who uses it. However, if we can distinguish in a translation workflow between what a translator or reviewer can do while the project is still in progress and what can be done after the project is completed, then we can get a better sense of what each process involves and how we can best allocate our human and technological resources to improve quality overall. When it comes to linguistic quality in particular, we would be looking to address issues to do with punctuation, terminology and glossary compliance, locale-specific conversions and formatting, consistency, omissions, untranslatable items and more. It is a job that requires great attention to detail and strict compliance with rules and guidelines – and that is why linguistic QA (most aspects of it, anyway) is a better candidate for ‘objective’ automation. We will explore this further in a future article.
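
To give a flavour of what such automation can look like, here is a minimal sketch of a few rule-based linguistic QA checks, assuming a simple list of (source, target) segment pairs and a source-to-target glossary. All names and rules here are illustrative; production QA tools implement far more extensive and locale-aware checks.

```python
# A few illustrative rule-based QA checks over (source, target) segment pairs.
def qa_check(segments, glossary):
    """Yield (segment_index, issue) pairs for omission, punctuation and
    glossary-compliance problems."""
    for i, (source, target) in enumerate(segments):
        # Omission: a non-empty source with an empty target.
        if source and not target:
            yield i, "omission: empty target"
        # Punctuation parity: a trailing '?' or '!' should usually survive translation.
        if source[-1:] in ("?", "!") and target[-1:] != source[-1:]:
            yield i, f"punctuation: source ends with '{source[-1]}'"
        # Glossary compliance: the required target term must appear in the target.
        for src_term, tgt_term in glossary.items():
            if src_term.lower() in source.lower() and tgt_term.lower() not in target.lower():
                yield i, f"terminology: expected '{tgt_term}' for '{src_term}'"

segments = [("Is the invoice paid?", "La facture est-elle payée ?"),
            ("Open the dashboard.", "")]
glossary = {"invoice": "facture", "dashboard": "tableau de bord"}
for idx, issue in qa_check(segments, glossary):
    print(f"segment {idx}: {issue}")
```

Checks like these are deterministic, which is precisely what makes them good candidates for automation: given the same rules, they flag the same issues every time.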



Industry practices, good and bad


Given the volume of translated words in most localization projects these days, it is practically prohibitive in terms of time and cost to have in place a comprehensive QA process that would safeguard quality expectations both during and after translation. It is therefore very common for QA, much like TQA, to be reserved for the post-translation stage. A human reviewer, with or without the help of technology, is brought in when the translation is done and asked to review or revise the final product. The obvious drawback of this process is that significant time and effort could be saved if revision could somehow occur in parallel with the translation, perhaps by involving the translator herself in tracking errors and making corrections along the way.

The fact that QA only seems to take place ‘after the fact’ is not the only problem, however. Volume is another challenge – too many words to revise, too little time (and too much cost) to do it. To address this challenge, LSPs use sampling (the partial revision of an agreed small portion of the translation) and spot-checking (the partial revision of random excerpts of the translation). In both cases the portion of the translation that gets checked is typically around 10% of the total volume of translated text, and that is generally considered sufficient to say whether the whole translation is good or not. This is an established and accepted industry practice, born out of necessity. However, one does not need a degree in statistics to appreciate that such a small sample (whether defined or random) is hardly big enough to reflect the quality of the overall project.
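
As a back-of-the-envelope illustration of that statistical point, the sketch below computes the probability that a random 10% spot-check contains none of the problematic segments, under the simplifying (and assumed) conditions that errors are concentrated in a handful of segments and the sample is drawn uniformly at random. All the numbers are made up for illustration.

```python
from math import comb

def p_miss(total_segments, bad_segments, sampled_segments):
    """Probability that a uniform random sample contains no bad segment
    (hypergeometric: the whole sample is drawn from the clean segments)."""
    if sampled_segments > total_segments - bad_segments:
        return 0.0
    return comb(total_segments - bad_segments, sampled_segments) / comb(total_segments, sampled_segments)

# 1,000 segments, a 100-segment (10%) spot-check, errors clustered in 15 segments:
print(f"{p_miss(1000, 15, 100):.1%} chance the spot-check sees no error at all")  # ~20%
```

In this example the spot-check has roughly a one-in-five chance of declaring a flawed translation clean – which is precisely why sampling, however necessary, is a blunt instrument.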



It’s time to say ‘welcome’ to the machine


The restrictions described above are a consequence of the cost of using human revisers for all the tasks required in QA for very large volumes of translated text. It would make sense, then, to enlist the help of technology to process these large volumes of text and to provide the support necessary for a more thorough QA process. The technology is there, but there are a few things to consider: “[w]hereas translation memory tools came into the market approximately in 1985, translation quality assurance tools are rather young. The oldest quality check utilities were probably incorporated […] [b]ack in 1998. […] This means there is a 10-15 years gap in TM and QA tools development.” (Makoushina, 2007: 4) The progressive increase in the volume of text translated every year (also reflected in the growth of the total value of the language services industry) and the increasing demand for faster turnaround times make it even harder for QA-focused technology to catch up. The need for automation is greater than ever before.

Stay tuned for our next article, in which we will explore the current state of linguistic QA technology and examine the benefits and drawbacks of using these tools in current industry workflows.

References
Drugan, J. (2013) Quality in Professional Translation: Assessment and Improvement. London: Bloomsbury.
Makoushina, J. (2007) ‘Translation Quality Assurance Tools: Current State and Future Approaches’, Translating and the Computer 29. London: ASLIB.

Vassilis Korkas
COO at lexiQA
