The present and future of Linguistic Quality Assurance (LQA)


In these times of global brands and localized marketing, what is translation quality? Like many other concepts in translation, the answer depends on the context.


There was a time when quality assurance was considered as important as the translation itself. After every translation was completed, it was expected that at least one reviser or proofreader would go over it before publication. There was also a time when quality managers didn’t exist. Any project manager would be tasked with finding those more glaring formatting, layout, or even branding errors that might have slipped through, even if they knew nothing about the target language. We can say those times are long gone. The explosion of both static and (mostly) dynamic content over the past decade has meant that there is no time to do quality assurance on every piece of translation before it’s published. With more content being produced every year, the position of quality manager has evolved out of the necessity for greater oversight and control over complex production and localization workflows.

What does this mean for translation quality itself? Quality standards were developed as a result of long and painstaking industry collaborations, and the idea of “fitness for purpose” became a functional compromise for both buyers and service providers: we can use a quality standard to find exactly where the errors are, but if the translation is fit for purpose, then maybe we don’t need to assign all the resources required to get a near-perfect translation.

There has also been a parallel development in the technology used for quality assurance, i.e. for the process designed to match the translation to the quality standards. There are now more quality assurance (QA) tools for translation and localization than ever before, both stand-alone and built into CAT tools. Together, they have created a false sense of security (and unrealistic expectations): a sense that we have at our disposal the means to identify and fix anything and everything that might be wrong in a translation. Another common myth is that perhaps we can even do all that without the need for a human reviser. Whether we look at stand-alone QA tools that linguists can use to validate and improve the quality of a translation, or we consider the QA modules embedded in CAT tools, there are some obvious inherent disadvantages that over time have made them rather unpopular. By extension, they have made the QA process as a whole a box-ticking exercise that normally comes as an afterthought.

These QA tools were intended to automate the process of quality assurance after translation and help revisers and proofreaders find errors and apply corrections in a translation, which is exactly what you want to have, ideally. The problem with such language-agnostic QA tools is that the vast majority of the warnings they produce are false positives; in other words, they point to issues that are not actual errors. Revisers have been using these tools for many years, struggling to sift through the numerous warnings to find the real errors that need correction. This experience is in a translator’s DNA, and it’s not a pleasant one. That negativity has eventually seeped through to layers higher up the chain of content production, to language service providers (LSPs) and translation vendors. And that’s how Linguistic Quality Evaluation (LQE) was born, in effect out of the inadequacy of traditional Linguistic Quality Assurance (LQA).
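
To see why false positives are endemic, consider a minimal, hypothetical sketch of the kind of language-agnostic check such tools run: a number-consistency check that expects every digit sequence in the source to reappear verbatim in the target. This is an illustration of the general pattern, not any specific vendor’s implementation.

```python
import re

def number_check(source: str, target: str) -> list[str]:
    """Naive language-agnostic QA check: every digit sequence in the
    source is expected to reappear verbatim in the target."""
    warnings = []
    for num in re.findall(r"\d+", source):
        if num not in target:
            warnings.append(f"number '{num}' missing from target")
    return warnings

# A perfectly correct French translation spells the number out,
# so the check fires anyway: a textbook false positive.
print(number_check("Delivery within 2 days.", "Livraison sous deux jours."))
# -> ["number '2' missing from target"]
```

A reviser then has to inspect and dismiss each such warning by hand, which is precisely the experience described above.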

In essence, LQE was yet another compromise: decision-makers on the vendor side figured at some point that LQA was not efficient or cost-effective (at least not the way they were doing it). So, instead of trying to fix translation errors before publication, they would get a quality score on a sample of the translated content to determine whether the quality is fit for purpose. If the translation scores above the agreed threshold, it’s good to go; if the score is low, it has to be revised before publication.
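
To make the gating logic concrete, here is a minimal sketch of a threshold-based pass/fail decision, loosely modeled on MQM-style weighted penalties; the severity weights, the per-100-words normalization, and the 95-point threshold are all illustrative assumptions, not an industry standard.

```python
# Illustrative severity weights and threshold (assumptions, not a standard)
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}
PASS_THRESHOLD = 95.0

def lqe_score(errors: list[str], word_count: int) -> float:
    """Score a reviewed sample: 100 minus weighted penalty points
    per 100 words of evaluated content."""
    penalty = sum(SEVERITY_WEIGHTS[severity] for severity in errors)
    return max(0.0, 100.0 - penalty * 100.0 / word_count)

score = lqe_score(["minor", "minor", "major"], word_count=1200)
print(f"score={score:.1f}", "publish" if score >= PASS_THRESHOLD else "revise")
# -> score=99.4 publish
```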

There are many issues to consider with this practice. First of all, LQE reinforced the idea of “fitness for purpose” in a way that perpetuated the notion of “quality as an afterthought”. Translation quality was never (intended to be) an afterthought. If nothing else, any quality process revolving around content production is by design a holistic process that begins at the authoring stage and extends to the very last steps of product delivery. LQE also made sampling an acceptable benchmark: with LQE being a largely manual process (a reviser provides feedback on quality issues, manually adjusting the scoring scale depending on the type of content), no one can afford the time and money to do LQE on every bit of translated content. So we pick a small sample (usually around 10% of the total), do the LQE on that, and if it passes with a good score we assume that the remaining 90% is of the same quality. And then there is another shortcoming. In the context of more modern quality programs, LQE is something that is done after publication, with the intent of monitoring and improving processes in future production cycles. This means its role is largely retrospective: it can’t help catch and fix errors the way LQA normally can.
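
The sampling step itself is trivial to express in code, which helps explain how entrenched the practice became. A minimal sketch of the convention described above, with the 10% rate and the segment-id shape as assumptions:

```python
import random

def sample_for_lqe(segment_ids: list[str], rate: float = 0.10, seed: int = 7) -> list[str]:
    """Pick a random ~10% sample of translated segments for manual LQE;
    the verdict on the sample is then extrapolated to the rest."""
    rng = random.Random(seed)
    k = max(1, round(len(segment_ids) * rate))
    return rng.sample(segment_ids, k)

segments = [f"seg-{i:04d}" for i in range(500)]
print(sample_for_lqe(segments))  # 50 ids; the other 450 are never looked at
```

The extrapolation is exactly where the risk lies: a passing sample says nothing about a critical error sitting in the 90% that was never reviewed.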



Let’s do quality better


Whether out of necessity or choice, these practices have been in place for far too long, and it is time to change them at the core. A quality program would ideally incorporate both LQA and LQE, as each is important in its own way: LQA in addressing issues before publication, and LQE in ensuring that whatever process works today also works tomorrow (and that whatever doesn’t work today is made to work properly tomorrow). How do we set this up?

Automation

  • eliminate as many of the manual steps in LQA and LQE as possible;
  • use a reliable and accurate LQA system that produces few false positives, thus reducing the number of manual corrections needed, and run your QA checks while you translate (see the sketch after this list);
  • streamline LQE for everything you are translating, and compare quality scores against everything you have translated before.
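
What “run your QA checks while you translate” could look like in practice is sketched below: each segment is checked the moment it is confirmed, and its score is compared against the running history. The class, the check interface, and the scoring are hypothetical simplifications, not a reference to any particular CAT tool.

```python
from statistics import mean

class InlineQA:
    """Run QA checks at segment-confirmation time instead of after the
    fact, and keep a score history to compare new work against old."""

    def __init__(self, checks):
        self.checks = checks      # callables: (source, target) -> [warnings]
        self.scores = []          # one score per confirmed segment

    def confirm(self, source: str, target: str) -> list[str]:
        warnings = [w for check in self.checks for w in check(source, target)]
        self.scores.append(100 - 10 * len(warnings))  # toy scoring
        return warnings           # surfaced to the translator immediately

    def versus_history(self) -> str:
        if len(self.scores) < 2:
            return f"first segment: {self.scores[-1]}"
        return f"latest {self.scores[-1]} vs. average {mean(self.scores[:-1]):.1f}"

qa = InlineQA(checks=[lambda s, t: ["empty target"] if not t.strip() else []])
qa.confirm("Hello world", "Bonjour le monde")
qa.confirm("Goodbye", "")
print(qa.versus_history())  # -> latest 90 vs. average 100.0
```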

Integration

  • whether you build or buy your QA, it needs to work well with your TMS, CAT tool, and all your other business systems; ensure all the pieces of your localization process, including those for LQA and LQE, can be connected in one technology ecosystem (a minimal sketch of such a connection follows this list), and both internal and external stakeholders will thank you for it;
  • reduce the effort and maintenance overheads by going online, since enhanced communication between desktop systems is both impractical and improbable;
  • this will also encourage connectivity and interoperability amongst other players in the industry, making systems and processes quicker to react and cheaper to run at scale.
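
As an illustration of what that connectivity can amount to, here is a hedged sketch that pushes per-segment QA results to a TMS over a plain HTTP webhook; the endpoint, payload shape, and token are hypothetical placeholders, since every real TMS exposes its own API.

```python
import json
import urllib.request

# Hypothetical endpoint and token; every real TMS exposes its own API.
TMS_WEBHOOK = "https://tms.example.com/hooks/qa-results"
API_TOKEN = "replace-me"

def push_qa_result(project_id: str, segment_id: str, warnings: list[str]) -> int:
    """POST one segment's QA outcome to the TMS so that every system
    (and stakeholder) in the ecosystem sees the same quality data."""
    payload = json.dumps({
        "project": project_id,
        "segment": segment_id,
        "warnings": warnings,
    }).encode("utf-8")
    request = urllib.request.Request(
        TMS_WEBHOOK,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status  # a 2xx status means the systems stayed in sync
```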

In the process of consolidating LQA and LQE in one quality program, especially in the context of continuous localization scenarios where everything is constantly updated, it is easy to conflate the two processes and assume that having either one or the other makes a quality program complete. Complementary as they might be, their scope is different and so is the way you apply them in localization. For example, LQE on its own can tell you very little about what the pain points are in the whole corpus of your translated content, so you can never be sure how many potentially critical errors have slipped through. When you put the two together, however, and you process LQA data through the lens of LQE, you can actually get a 360° view of where your translated content is doing well and where it could use some improvement. Connect your LQA data with a business intelligence system and that on its own will reveal a lot about how each language is performing, how your translators and revisers are reacting to the LQA input, and how to make things work even better in a more targeted way. As part of this targeting, it shouldn’t surprise us that big translation buyers are currently investing even more in their quality programs. Instead of giving up on LQA and LQE altogether, they are in fact investing more time and resources to consolidate style guides in their LQA in order to ensure a clear and consistent brand voice.
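
How little machinery the business-intelligence side needs can be seen in a minimal sketch like the following, which rolls raw LQA records up into a per-language error rate that a dashboard can chart across production cycles; the record shape is assumed for illustration.

```python
from collections import defaultdict

# Assumed record shape per reviewed job: (language, errors found, words reviewed)
lqa_records = [
    ("de-DE", 4, 1200),
    ("de-DE", 1, 800),
    ("fr-FR", 6, 1500),
]

def errors_per_1000_words(records):
    """Roll LQA findings up into a per-language error rate that a BI
    dashboard can compare across languages and production cycles."""
    errors, words = defaultdict(int), defaultdict(int)
    for language, n_errors, n_words in records:
        errors[language] += n_errors
        words[language] += n_words
    return {lang: round(1000 * errors[lang] / words[lang], 2) for lang in errors}

print(errors_per_1000_words(lqa_records))  # -> {'de-DE': 2.5, 'fr-FR': 4.0}
```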


Adaptation before extinction


With global brands placing a lot more emphasis on marketing content, advertising campaigns, and engaging with their customer base in several locales (and platforms), the challenges involved in transcreation and other, more creative forms of localization are not always the same as those traditionally associated with translation. However, suggesting that the established practices of LQA and LQE are already obsolete is, to say the least, premature.

For starters, even global brands still have to localize vast amounts of content that will never be transcreated by a marketing agency. In fact, the majority of content consumed in any local market is not used for marketing purposes. Still, we can probably agree that a misspelled brand name has a more negative impact in a marketing brochure than in a user guide. Having a robust LQA process in place helps you catch this error before publication. So, even for something as simple as this, let’s not throw the baby out with the bathwater: even in the fast-paced world of transcreation, where the bonds between source and target texts are as loose as they can be, you shouldn’t have to wait for a user or a customer to tell you that you have misspelled your own brand. By then the customer experience is already ruined, and it means either that your LQA process isn’t tight enough or that you don’t have one at all.

There are many constructive ways in which users can help improve a quality program, and it’s important to have open communication channels they can use to give content creators feedback. Having said that, users and customers cannot, and should not, be the only agents of quality in a localization quality program. We can’t afford to treat translation quality like a random online forum, where the person asking a question about something they don’t know also gets to give five stars to the least correct response because “it was well-written.” Instead of ignoring quality standards and the advanced QA technology that is already out there, ready to constructively disrupt localization workflows, put them to use and let your customers reap the rewards.


Vassilis Korkas
COO at lexiQA

This article originally appeared in the January 2022 issue of MultiLingual (#199) under the title “Linguistic Quality Assurance is Dead. Long live LQA!” and is reprinted here with permission.
