TAUS Blog

Posted in Quality

5 Ways to Reduce Translation Review Time


The time and money you spend on quality management can easily account for 20% of total translation time and cost. A large part of that consists of translation review (or quality review). You can reduce translation review time by streamlining the review process. In this post, we list 5 ways to do this.


1. Embrace a dynamic approach

Implement a translation management system in which different content profiles automatically go through different translation or review cycles, each with its own error tolerance threshold. When profiling content, take into consideration:

  • the expected quality (good enough vs. high quality)
  • the perishability of your content (long vs. short shelf life)
  • the visibility of your content (highly visible vs. low visibility)
  • your target audience (age, cultural background, ethnicity)
  • the communication channel (B2B, B2C, C2C)
  • and last but not least, your budget (…well, no need to give examples here).

When it comes to quality review, the evaluation method and error profiles (including weights and severities and the pass/fail thresholds) should be aligned with your content profiles. This dynamic approach enables you to diversify your translation workflows and to save time on the review steps where possible.
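
As a rough sketch, such profile-based routing might look like the following (the profile fields, names and thresholds are all hypothetical, not the schema of any particular TMS):

```python
from dataclasses import dataclass

@dataclass
class ContentProfile:
    name: str
    visibility: str         # "high" or "low"
    shelf_life: str         # "long" or "short"
    error_threshold: float  # max weighted errors per 1000 words to pass

def review_cycle(profile: ContentProfile) -> str:
    """Pick a review workflow based on the content profile."""
    if profile.visibility == "high" and profile.shelf_life == "long":
        return "full-review"
    if profile.visibility == "high":
        return "sampled-review"
    return "spot-check"

legal = ContentProfile("legal", visibility="high", shelf_life="long", error_threshold=1.0)
chat = ContentProfile("support-chat", visibility="low", shelf_life="short", error_threshold=10.0)

print(review_cycle(legal))  # full-review
print(review_cycle(chat))   # spot-check
```

The point is not the specific rules but that the routing decision is made once, automatically, per content profile instead of per project.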


2. Merge cycles

Once a diversified content strategy is in place and you have finished your content profiling exercise, you can start merging cycles. Even today, many translation jobs contain multiple review cycles, some of them unnecessary or even harmful. For instance, some companies start the review process with purely linguistic review, followed by in-context review and finally in-country review. Each review step introduces corrections, but each step also carries the risk of introducing new errors. Instead of having two, three or even more review cycles, try to have just one. To achieve this, merge the different cycles and train reviewers to consider several aspects of translation quality in one go. For example, correction and error annotation can be done simultaneously, and adequacy/fluency evaluation (scoring segments on a scale of 1 to 4) can be combined with fixing errors. You can also find in-country reviewers or domain experts with linguistic skills who can help save time on additional linguistic validation.
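
A merged cycle can be captured in a single review record per segment, so one reviewer produces the correction, the error annotations and the adequacy/fluency scores in one pass. A minimal sketch (the field names and example error typology are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class MergedReview:
    """One record per segment: correction, error annotation and
    adequacy/fluency scoring collected in a single review pass."""
    source: str
    target: str
    corrected: str
    errors: list = field(default_factory=list)  # e.g. [("mistranslation", "major")]
    adequacy: int = 0  # 1 (worst) to 4 (best)
    fluency: int = 0   # 1 (worst) to 4 (best)

review = MergedReview(
    source="Klicken Sie auf Speichern.",
    target="Click on Safe.",
    corrected="Click Save.",
    errors=[("mistranslation", "major")],
    adequacy=2,
    fluency=3,
)
print(len(review.errors))  # 1
```

Because everything is captured at once, no second or third reviewer needs to re-read the segment, which is where the time saving comes from.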


3. Apply sampling

As we have seen, content profiling can reduce the number of cycles, but it can also shorten each cycle by offering reviewers only part of the translation for review (i.e. a sample). Sampling is appropriate in most evaluation scenarios, though keep in mind that the scenario in which sampling takes place strongly influences how the sample is designed. We distinguish between systematic sampling (which uses knowledge of the content) and random sampling. In contrast to random sampling, which provides a representative sample of the total content, systematic sampling provides an optimal selection of translated segments for review, systematically reducing the size of a project to save time and effort.
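
The two techniques can be illustrated in a few lines; the "every third segment is new" rule below is a hypothetical stand-in for real TM-match metadata:

```python
import random

random.seed(7)  # reproducible sketch

segments = [f"segment {i}" for i in range(100)]

# Random sampling: a statistically representative 20% of the batch.
random_sample = random.sample(segments, k=20)

# Systematic sampling: use knowledge of the content to pick the segments
# most worth reviewing. Here we pretend every third segment is "new"
# (no TM match) and therefore riskier - a placeholder criterion.
systematic_sample = [s for i, s in enumerate(segments) if i % 3 == 0][:20]

print(len(random_sample), len(systematic_sample))  # 20 20
```

Both samples are the same size, but the systematic one concentrates review effort where errors are most likely.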


Sampling Use Case – VMware

VMware is a translation buyer that uses automatic sampling based on its content profiles. Only content types with a high risk of errors are reviewed in full (100%). Normal-risk content types are sampled, with 20% of the segments evaluated. A single error typology with four main criteria and 19 sub-criteria is used for both the Review (100%) and Evaluation (20% sampling) processes. For a sampling project, 80% of the sampled segments are selected from "new" material and 20% from 100% matches; ICE (in-context exact) matches are excluded from the sampling process. In the future, a new method will be implemented in which ICE matches, 100% matches and repetitions will all be removed from the sampling selection.
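
The sample composition described above (80% from new segments, 20% from 100% matches, ICE matches excluded) can be sketched as follows. This is an illustration of the stated proportions only, not VMware's actual implementation:

```python
import random

def build_sample(segments, sample_size):
    """Draw a review sample: 80% from 'new' segments, 20% from 100% matches,
    with ICE (in-context exact) matches excluded entirely."""
    pool = [s for s in segments if s["match"] != "ice"]
    new = [s for s in pool if s["match"] == "new"]
    full = [s for s in pool if s["match"] == "100"]
    n_new = round(sample_size * 0.8)
    n_full = sample_size - n_new
    return (random.sample(new, min(n_new, len(new))) +
            random.sample(full, min(n_full, len(full))))

segments = ([{"id": i, "match": "new"} for i in range(40)] +
            [{"id": i, "match": "100"} for i in range(40, 60)] +
            [{"id": i, "match": "ice"} for i in range(60, 80)])

sample = build_sample(segments, sample_size=10)
print(len(sample))  # 10
```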


4. Enhance automatic Quality Assurance

Automatic Quality Assurance (QA) shows many similarities to Quality Estimation. Both use external (or formal) features to predict potential errors and the quality of the output. Typical features include sentence length, the number of alternative translations per source word or phrase, edit distance, the presence of numbers, symbols and punctuation marks, the percentages of content words and function words in source and target, etc. One possible application of Quality Estimation is in post-editing projects, where an algorithm can be trained to flag low-quality MT output. When light post-editing is required, post-editors can skip segments of higher quality.
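
A few of the formal features listed above are easy to compute directly from a segment pair. A minimal, purely illustrative extractor might look like this:

```python
import string

def qe_features(source: str, target: str) -> dict:
    """Compute a handful of formal (surface) features for a segment pair.
    Real QE systems use many more features; this is only a sketch."""
    s_tokens, t_tokens = source.split(), target.split()
    return {
        "src_length": len(s_tokens),
        "tgt_length": len(t_tokens),
        "length_ratio": len(t_tokens) / max(len(s_tokens), 1),
        "tgt_digits": sum(ch.isdigit() for ch in target),
        "tgt_punct": sum(ch in string.punctuation for ch in target),
    }

print(qe_features("The server restarted at 10:42.",
                  "Le serveur a redémarré à 10:42."))
```

A segment whose length ratio or digit count deviates sharply from the source is a candidate for flagging.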

Since formal features alone are not enough to detect all possible mistakes, automatic QA, just like Quality Estimation, could benefit from machine learning. Traditional automatic QA can be extended with a machine learning algorithm that "learns" from a reviewer's corrections (and, if available, error annotations). Add to this data points that CAT tools already expose but rarely collect systematically, such as segment activation time, total segment edits, the quality of the source, the edit distance (for MT) or fuzzy match percentage (for TM). These are all important indicators of potential quality issues in the target, and together they can produce an accurate quality score for each segment, along with flagged issues, reducing the time needed for review.
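
One way such an algorithm can "learn" from corrections is to derive a quality label from the edit distance between the original output and the reviewer's fix; those labels can then train a classifier over the features above. A minimal sketch (the 0.3 threshold and the two-label scheme are hypothetical choices):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def quality_label(mt_output: str, correction: str) -> str:
    """Turn a reviewer's correction into a free training label:
    the more editing a segment needed, the lower its quality."""
    dist = edit_distance(mt_output, correction)
    return "low" if dist / max(len(correction), 1) > 0.3 else "high"

print(quality_label("Click on Safe button.", "Click the Save button."))
```

Every correction a reviewer makes thus doubles as training data, so the QA system improves as a by-product of normal review work.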


5. Let the crowd speak

Finally, reviewing content can be done faster if multiple reviewers work on the same project. One option is to ask your Language Service Provider (LSP) to crowdsource the review part of the translation job. Another is to engage in community evaluation, which allows you to reduce turnaround time by enabling your own user communities to review translations.

Community evaluation involves opening up an online collaboration process for volunteers to help review translated content. These volunteer evaluators build a collaborative community in which they participate as reviewers (of the content and often of one another). A cautionary note is in order: not all content types lend themselves to a crowdsourced approach.

If you don’t have access to a community directly, you can select one of the community translation platforms like Say Hello or Unbabel where you can upload your translation for review. This will still save you time as the turnaround time is faster than sending the translation to a third-party LSP.


To conclude…

By now you have probably guessed that saving time also means saving money. In the long run, most of the methods discussed above will save you both. But of course, many roads lead to Rome. We would love to hear how YOU reduce translation review time. Please leave a comment!

Attila Görög is responsible for the translation quality product line at TAUS. Attila’s challenge is to convince translation buyers and vendors about the flexible nature of quality by promoting tools, metrics and best practices on quality evaluation. He's been involved in various national and international projects on machine translation, terminology management and semantic web.

  • In terms of the sampling process we use at VMware, there are a couple of points worth considering. When a sample fails at a certain threshold, another sample from the same content is chosen and the process repeats. If this fails as well, a full review is required. The system has a built-in review rebuttal process that allows reviewers to reject a sample back to the linguists responsible for the translation, who can then challenge the review findings. This creates a healthy dialog and allows us to also "review the reviewer" in a sense. However, if a sample fails below a given threshold (a "hard fail"), there is no opportunity for rebuttal and the content defaults to full review.

  • Dear Luigi, thanks for your comments. Unfortunately, I don't have written evidence or a study supporting the statement about the 20% time/cost figure; it is something that has come up several times at various industry events. I agree that not all errors can be fixed in the review stage and that new errors are sometimes introduced, which is why you should try to reduce the number of review cycles in a project. As for the VMware use case, it wasn't meant to be a thorough study but rather an illustration. I'm afraid I cannot answer your questions, but I will reach out to VMware and ask if someone can address them. One note on sampling: it is not impossible to do manual sampling on large volumes of content quickly. I know companies that use scripts that can be tuned manually and will do the job, a kind of semi-automatic sampling. Thanks again for your feedback.

  • You write that "quality management easily constitutes 20% of the total translation time and costs." Could you report any studies to substantiate this statement?
    I live in a country where defective products are seldom discarded, but often properly labeled as "second-choice" or "flawed" and sold in factory outlets. A major issue in the traditional approach to translation quality is that translations must all be released as "first-choice" products, yet are rarely paid as such. Unfortunately, not all errors can be (conveniently) fixed, while other errors can be introduced during the so-called quality review/control stage.
    As to sampling, systematic sampling on very large batches in ongoing projects can be very troublesome, especially when done manually. Even random sampling can be biased if no AQL has been pre-established and an overall assessment of the whole batch made, possibly automatically.
    The VMware sampling use case is very interesting, but no information is given about what is done with no-pass items. Are they reworked? Is the whole batch re-examined? Are errors investigated to identify and possibly correct issues in processes? Quality control is useless if it does not lead to a process review, in a continuous effort.
