We all know the impact of LQA and how central a role it plays in a localization workflow. In our article series so far (part 1, part 2, part 3) we have explored current practices and technologies in the industry and what challenges lie ahead. Today, we will attempt to address these challenges and make some predictions about the evolution of LQA in the context of localization.
There is nothing like the present – or is there?
They say that in order to predict the future one has to look into the past and also evaluate the present. So, let’s have a quick recap:
- QA technology lags behind the development of CAT tools by roughly 15 years.
- Even the most advanced QA tools available today (whether built-in or stand-alone) have serious shortcomings to contend with: their checks are by and large language-independent, and they suffer from connectivity and workflow issues when working alongside other tools.
- As a process, QA is effectively performed only post-translation, and it is resource-intensive and time-consuming.
- In order to deal with this, practices such as sampling and spot-checking have been devised and are widely used (even though they are demonstrably insufficient).
Taking all the above into account, one has to wonder: what are the QA needs of LSPs and translation buyers right now and what will those needs be in the near future? We certainly shouldn’t just wait around for something to magically happen – it might take another 15 years to materialise. The need in the industry for a more efficient and effective QA process is here now and it is pressing. Is there a new workflow model which can produce tangible benefits both in terms of time and resources? We believe there is, but it will take some faith and boldness to apply it.
Changing the game of LQA
There are a number of use cases by language vendors and translation buyers which support the idea that something needs to change. We probably all know translators, reviewers, project managers or quality managers who have expressed their true feelings about the QA process they currently have to follow in their work. To a lesser extent, we probably also know people in the industry who are more than happy to maintain the status quo. Managing the process of QA can obviously be quite a different experience from actually performing QA with tools and workflows that fall short of the demands for quality in the industry today. In many respects, change management and making a case for a new process can be more challenging than the new process itself. It is easy to stay put and resist change, even when you know that what you're doing now is (or will quickly become) inadequate.
There is a way around this stagnation: get ahead of the curve! In the last few years, the translation technology market has been marked by substantial shifts in the market shares occupied by offline and online CAT tools respectively, with the online tools rapidly gaining ground. This trend is unlikely to change. At the same time, the age-old problems of connectivity and compatibility between different platforms will have to be addressed one way or another. For example, slowly transitioning to an online CAT tool while still using the same offline QA tool from your old workflow is as inefficient as it is irrational, especially in the long run.
A deeper integration between CAT and QA tools also has other benefits. The QA process can move up a step in the translation process. Why have QA only in post-translation when you can also have it in-translation? (And it goes without saying that pre-translation QA is also vital, but it would apply to the source content only so it’s a different topic altogether.) This shift is indeed possible by using API-enabled applications – which are in fact already standard practice for the majority of online CAT tools. There was a time when each CAT tool had its own proprietary file formats (as they still do), and then the TMX and TBX standards were introduced and the industry changed forever, as it became possible for different CAT tools to “communicate” with each other. The same will happen again, only this time APIs will be the agent of change.
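To make the idea concrete, an in-translation QA handshake could work by having the CAT tool send each confirmed segment to the QA service over its API and receive a list of issues back before the translator moves on. The sketch below only assembles the request payload; the endpoint, field names, and payload shape are entirely hypothetical and do not describe any real CAT or QA tool's API:

```python
import json

def build_qa_request(segment_id, source, target, source_lang, target_lang):
    """Assemble the JSON body for a hypothetical POST /v1/checks call
    that a CAT tool would fire each time a segment is confirmed."""
    return json.dumps({
        "segment_id": segment_id,
        "source": {"lang": source_lang, "text": source},
        "target": {"lang": target_lang, "text": target},
        # The QA service would pick the applicable language-dependent
        # checks based on the declared language pair.
        "checks": "all",
    })

payload = build_qa_request("seg-42", "Hello world", "Bonjour le monde", "en", "fr")
```

The point of the sketch is the workflow, not the schema: once both tools speak JSON over HTTP, QA feedback can arrive segment by segment instead of only after the whole file is exported.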
The key to progress is automation
Looking further ahead, there are also some other exciting ideas which could bring about truly innovative changes to the quality assurance process. The first one is the idea of automated corrections. Much in the same way that a text can be pre-translated in a CAT tool when a translation memory or a machine translation system is available, in a QA tool which has been pre-configured with granular settings it would be possible to “pre-correct” certain errors in the translation before a human reviewer even starts working on the text. With a deeper integration scenario in a CAT tool, an error could be corrected in a live QA environment the moment a translator makes that error.
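As a rough illustration of pre-correction, a minimal rule-based sketch might look like the following. The rules, thresholds, and function names are invented for the example; a real QA tool's pre-configured settings would be far more granular and language-aware:

```python
import re

# Illustrative deterministic fixes applied to the target text before a
# human reviewer sees it. Each rule is (pattern, replacement).
PRECORRECTION_RULES = [
    (re.compile(r" {2,}"), " "),        # collapse runs of spaces
    (re.compile(r" ([,.;:])"), r"\1"),  # drop space before punctuation
    (re.compile(r"\.{4,}"), "..."),     # normalise overlong ellipses
]

def precorrect(target_text):
    """Apply each rule in order; return the corrected text and a fix log."""
    log = []
    for pattern, replacement in PRECORRECTION_RULES:
        fixed, count = pattern.subn(replacement, target_text)
        if count:
            log.append((pattern.pattern, count))
        target_text = fixed
    return target_text, log

corrected, fixes = precorrect("Bonjour  le monde , encore ....")
# corrected == "Bonjour le monde, encore..."
```

Because the corrections are deterministic and logged, a reviewer (or translator, in the live in-translation scenario) can always audit or revert them.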
This kind of advanced automation in LQA could be taken even a step further, if we consider the principles of machine learning. Access to big data in the form of bilingual corpora which have been checked and confirmed by human reviewers makes this approach even more feasible. Imagine a QA tool that collects all the corrections a reviewer has made and all the false positives the reviewer has ignored, then processes all that information and learns from it. With every new text processed, the machine-learning algorithms make the tool more accurate about what it should and should not consider an error. The possibilities are endless.
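A toy sketch of that feedback loop, assuming a simple acceptance-rate heuristic rather than a real machine-learning model (all names and thresholds below are illustrative):

```python
from collections import defaultdict

class FeedbackModel:
    """Track, per QA check, how often reviewers accept a flagged issue
    versus dismiss it as a false positive, and stop surfacing checks
    whose acceptance rate falls below a threshold."""

    def __init__(self, min_acceptance=0.5, min_samples=5):
        self.accepted = defaultdict(int)
        self.dismissed = defaultdict(int)
        self.min_acceptance = min_acceptance
        self.min_samples = min_samples

    def record(self, check_name, was_accepted):
        if was_accepted:
            self.accepted[check_name] += 1
        else:
            self.dismissed[check_name] += 1

    def should_flag(self, check_name):
        total = self.accepted[check_name] + self.dismissed[check_name]
        if total < self.min_samples:
            return True  # not enough evidence yet; keep flagging
        return self.accepted[check_name] / total >= self.min_acceptance

model = FeedbackModel()
for _ in range(6):
    model.record("trailing-space", False)  # reviewers keep dismissing it
model.record("number-mismatch", True)
```

A production system would learn per language pair, client, and context rather than globally, but even this crude counter captures the core idea: reviewer behaviour becomes training signal.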
As this is the last instalment of our current series, it would be good to end it on a positive note. Despite the various shortcomings of current practices in LQA, the potential is there to streamline and improve on processes and workflows alike, so much so that quality assurance will not be seen as a “burden” anymore, but rather as an inextricable component of localization, both in theory and in practice. It is up to us to embrace the change and move forward.
COO at lexiQA