5 Ways to Reduce Translation Review Time
06/10/2016
6-minute read
This blog post lists 5 ways in which you can reduce translation review time by streamlining the review process.

The time and money you spend on quality management can easily amount to 20% of total translation time and cost. A large part of that goes to translation review (or quality review). You can reduce translation review time by streamlining the review process. In this post, we’ve listed 5 ways to do this.


1. Embrace a dynamic approach

Implement a translation management system in which different content profiles automatically go through different translation or review cycles, each with its own error tolerance threshold. When profiling content, you need to take into consideration:

- the expected quality (good enough vs. high quality);
- the perishability of your content (long vs. short shelf life);
- the visibility of your content (highly visible vs. low visibility);
- your target audience (age, cultural background, ethnicity);
- the communication channel (B2B, B2C, C2C);
- and last but not least, your budget (…well, no need to give examples here).

When it comes to quality review, the evaluation method and error profiles (including weights and severities and the pass/fail thresholds) should be aligned with your content profiles. This dynamic approach enables you to diversify your translation workflows and to save time on the review steps where possible.
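To make this concrete, here is a minimal sketch of how content profiles could drive review cycles and error tolerance thresholds in a simple configuration. The profile names, fields and threshold values below are hypothetical illustrations, not any particular translation management system’s schema.

```python
# Hypothetical content profiles: each maps a content type to a review
# cycle and an error tolerance (max weighted error points per 1,000 words).
CONTENT_PROFILES = {
    "legal":       {"review_cycle": "full_review",    "max_error_points": 5},
    "marketing":   {"review_cycle": "full_review",    "max_error_points": 10},
    "support_faq": {"review_cycle": "sampled_review", "max_error_points": 25},
    "user_forum":  {"review_cycle": "no_review",      "max_error_points": None},
}

def review_plan(content_type: str) -> dict:
    """Return the review cycle and threshold for a given content type."""
    # Fall back to a moderate profile for unknown content types.
    return CONTENT_PROFILES.get(content_type, CONTENT_PROFILES["support_faq"])

print(review_plan("marketing"))
# {'review_cycle': 'full_review', 'max_error_points': 10}
```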


2. Merge cycles

Once a diversified content strategy is in place and you have finished your content profiling exercise, you can start merging cycles. Even today, many translation jobs contain multiple review cycles, some of them unnecessary or even harmful. For instance, some companies start the review process with a purely linguistic review, followed by in-context review and finally in-country review. Corrections are made in each review step, but every extra step also increases the risk of introducing new errors. Instead of having two, three or even more review cycles, try to have just one. To achieve this, you need to merge different cycles and train reviewers to consider different aspects of translation quality in one go. For example, correction and error annotation can be done simultaneously, and adequacy/fluency evaluation (scoring segments on a scale from 1 to 4) can be combined with fixing errors. You can also find in-country reviewers or domain experts with linguistic skills who can help save time on additional linguistic validation.
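As an illustration of what a merged cycle might capture, here is a minimal sketch of a single-pass review record that combines correction, error annotation and a 1-to-4 adequacy/fluency score. The field names and error categories are assumptions for illustration, not a standard.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SegmentReview:
    """One review pass that collects corrections, error annotations,
    and adequacy/fluency scores in a single cycle."""
    segment_id: int
    source: str
    translation: str
    correction: Optional[str] = None            # fixed target text, if any
    errors: list = field(default_factory=list)  # error annotations
    adequacy: int = 4                           # 1 (worst) to 4 (best)
    fluency: int = 4                            # 1 (worst) to 4 (best)

    def annotate(self, category: str, severity: str, weight: int) -> None:
        """Record an error while correcting it, instead of in a separate cycle."""
        self.errors.append(
            {"category": category, "severity": severity, "weight": weight}
        )

review = SegmentReview(1, "Bonjour", "Good morning")
review.correction = "Hello"
review.annotate("mistranslation", "minor", weight=1)
review.adequacy = 3
```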


3. Apply sampling

As we have seen, content profiling can be used to reduce the number of cycles, but it can also help shorten the review cycle itself by offering reviewers only a part of the translation for review (i.e. a sample). Sampling is appropriate in most evaluation scenarios, though keep in mind that the scenario in which sampling takes place strongly influences how the sample is designed. We distinguish between systematic sampling (using knowledge of the content) and random sampling. In contrast to random sampling, which provides a representative sample of the total content, systematic sampling provides an optimal selection of translated segments for review, systematically reducing the size of a project to save time and effort.
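The difference between the two techniques can be sketched in a few lines. Assume segments is a list of translated segments; the metadata keys used here (match_type, word_count) are placeholders for whatever your CAT tool exports, and the risk heuristic is only one possible choice.

```python
import random

def random_sample(segments: list, rate: float = 0.2) -> list:
    """Representative sample: every segment has an equal chance."""
    k = max(1, int(len(segments) * rate))
    return random.sample(segments, k)

def systematic_sample(segments: list, rate: float = 0.2) -> list:
    """Knowledge-based sample: prefer segments most likely to contain
    errors, e.g. new (unmatched) translations and long sentences."""
    risky = sorted(
        segments,
        # "new" segments sort first (False < True), then longest first.
        key=lambda s: (s["match_type"] != "new", -s["word_count"]),
    )
    k = max(1, int(len(segments) * rate))
    return risky[:k]
```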


Sampling Use Case – VMware

VMware is a translation buyer that uses automatic sampling based on its different content profiles. Only content types with a high risk of errors are 100% reviewed. Normal-risk content types are sampled, with 20% of the segments evaluated. A single error typology with four main criteria and 19 sub-criteria is used for both the Review (100% review) and Evaluation (20% sampling) processes. For a sampling project, 80% of the content (segments) is selected from “new” material and 20% from 100% matches; ICE matches are excluded from the sampling process. In the future, a new method will be implemented in which ICE and 100% matches as well as repetitions will be removed from the sampling selection.
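By way of illustration, the selection logic described above could be assembled as in the following sketch. The match_type labels and the helper function are assumptions for illustration, not VMware’s actual implementation.

```python
import random

def build_sample(segments: list, sample_size: int) -> list:
    """Draw 80% of the sample from 'new' segments and 20% from 100%
    matches; ICE (in-context exact) matches are excluded entirely."""
    pool = [s for s in segments if s["match_type"] != "ice"]
    new = [s for s in pool if s["match_type"] == "new"]
    full = [s for s in pool if s["match_type"] == "100%"]
    n_new = min(len(new), int(sample_size * 0.8))
    n_full = min(len(full), sample_size - n_new)
    return random.sample(new, n_new) + random.sample(full, n_full)
```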


4. Enhance automatic Quality Assurance

Automatic Quality Assurance (QA) shows many similarities to Quality Estimation. Both use external (or formal) features to predict potential errors and the quality of the output. Features considered include sentence length, the number of alternative translations per source word or phrase, edit distance, the presence of numbers, symbols and punctuation marks, the percentages of content words and function words in the source and target translation, etc. One possible application of Quality Estimation is in post-editing projects, where an algorithm can be trained to flag low-quality MT output. When light post-editing is required, post-editors can then skip the segments of higher quality.
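For illustration, here is a minimal sketch of how a few of these formal features could be extracted per segment. The feature set and names are examples only, a far cry from a full Quality Estimation system.

```python
import string

def formal_features(source: str, target: str) -> dict:
    """Extract a few surface features often used in Quality Estimation."""
    src_tokens = source.split()
    tgt_tokens = target.split()
    return {
        "src_length": len(src_tokens),
        "tgt_length": len(tgt_tokens),
        "length_ratio": len(tgt_tokens) / max(1, len(src_tokens)),
        "tgt_digits": sum(c.isdigit() for c in target),
        "tgt_punct": sum(c in string.punctuation for c in target),
    }

print(formal_features("The cat sat on the mat.",
                      "Le chat était assis sur le tapis."))
```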

Since formal features alone are not enough to detect all possible mistakes, automatic QA, just like Quality Estimation, could benefit from machine learning. Traditional automatic QA can be extended with a machine learning algorithm that “learns” from the corrections (and, if available, from the error annotations) of a reviewer. Add to this some data points that are available in CAT tools but not yet collected in a systematic way, such as segment activation time, total segment edits, the quality of the source, and the edit distance (MT) or fuzzy match percentage (TM). These are all important indicators of potential quality issues in the target, and together they can produce an accurate quality score for each segment, along with flagged issues, to reduce review time.
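A minimal sketch of that idea, assuming scikit-learn and toy numbers: a classifier is trained on past reviewer behavior (did the reviewer correct this segment?) using the CAT-tool data points mentioned above as features, then used to flag new segments for review.

```python
from sklearn.linear_model import LogisticRegression

# Toy training data: one row per previously reviewed segment.
# Features: [edit_distance, segment_activation_seconds, total_edits]
X = [
    [0.05, 12, 1],
    [0.40, 95, 7],
    [0.00, 8, 0],
    [0.55, 130, 9],
]
# Label: 1 if the reviewer corrected the segment (a quality issue), else 0.
y = [0, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# Flag new segments whose predicted issue probability exceeds a threshold,
# so reviewers can focus on them and skip the rest.
new_segment = [[0.35, 80, 5]]
if model.predict_proba(new_segment)[0][1] > 0.5:
    print("flag for review")
```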


5. Let the crowd speak

Finally, reviewing content goes faster when multiple reviewers work on the same project. One option is to ask your Language Service Provider (LSP) to crowdsource the review part of the translation job. Another is to engage in community evaluation, which allows you to reduce turnaround time by enabling your own user communities to review translations.

Community evaluation involves opening up an online collaboration process in which volunteers help review translated content. These volunteer evaluators build a collaborative community where they participate as reviewers (of the content and often of one another). A cautionary note is in order: not all content types lend themselves to a crowdsourced approach.
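One simple way to turn several community verdicts on the same segment into a decision is majority voting. Here is a minimal sketch with hypothetical vote labels; ties are resolved conservatively by escalating the segment to a professional reviewer.

```python
from collections import Counter

def crowd_verdict(votes: list) -> str:
    """Majority vote over community reviewers' verdicts for one segment."""
    counts = Counter(votes)
    ranked = counts.most_common()
    # A tie between the top two verdicts escalates to professional review.
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return "needs_review"
    return ranked[0][0]

print(crowd_verdict(["ok", "ok", "fix"]))  # ok
print(crowd_verdict(["ok", "fix"]))        # needs_review
```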

If you don’t have direct access to a community, you can select one of the community translation platforms like Say Hello or Unbabel and upload your translation for review. This still saves time, as the turnaround is faster than sending the translation to a third-party LSP.


To conclude…

By now you have probably guessed that saving time also means saving money. In the long run, most of the methods discussed above will save you both. But of course, there are more roads leading to Rome than the five listed here. We would love to hear how YOU reduce translation review time. Please leave a comment!

Author
Attila Görög

Attila Görög worked on various national and international language technology projects. He has a solid background in quality evaluation, post-editing and terminology management. As Director of Enterprise Member Services, he worked mostly with large enterprises involved in the TAUS community and hosted TAUS user groups until 2017.
