Dynamic Quality

Quality is achieved when the buyer or customer is satisfied. Yet quality measurement in the translation industry is rarely linked to customer satisfaction. Instead, it is managed by quality gatekeepers on the supply and demand sides, each applying specific evaluation models, the majority of which are based on counting errors, applying penalties and maintaining thresholds, with little, if any, input from customers.
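The error-counting approach described above can be sketched in a few lines. The categories, penalty weights and threshold below are hypothetical placeholders, not the values of any particular industry model (real metrics such as the LISA QA Model or SAE J2450 differ in detail), but the mechanics are the same: sum penalties over a reviewed sample, normalise per 100 words, and compare against a fixed pass/fail threshold.

```python
# Illustrative sketch of a static, error-based QE model.
# Categories, weights and the threshold are assumptions for illustration.

# Penalty points per error, keyed by (category, severity).
PENALTIES = {
    ("accuracy", "minor"): 1.0,
    ("accuracy", "major"): 5.0,
    ("terminology", "minor"): 1.0,
    ("terminology", "major"): 5.0,
    ("style", "minor"): 0.5,
    ("style", "major"): 2.0,
}

PASS_THRESHOLD = 1.0  # maximum penalty points allowed per 100 words


def evaluate(errors, sample_word_count):
    """Sum penalties for the annotated errors, normalise per 100 words,
    and apply the fixed pass/fail threshold."""
    total = sum(PENALTIES[(cat, sev)] for cat, sev in errors)
    score = total / sample_word_count * 100
    return score, score <= PASS_THRESHOLD


# A 250-word sample with one major accuracy error and one minor style error:
score, passed = evaluate([("accuracy", "major"), ("style", "minor")], 250)
# score = 5.5 / 250 * 100 = 2.2 penalty points per 100 words -> fail
```

Note that the threshold is the same whatever the content type or end use, which is exactly the rigidity this report sets out to address.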

Quality evaluation (QE) in the translation industry is problematic. Despite very detailed and strict error-based evaluation models, satisfaction levels with both translation quality and the evaluation process itself are low. QE models are static; that is, they take a ‘one size fits all’ approach. Little consideration is given to variables such as content type, communicative function, end-user requirements, context, perishability, or mode of translation generation (whether the translation is produced by a qualified human translator, an unqualified volunteer, a machine translation system, or a combination of these).

TAUS carried out a benchmarking exercise in Q1 2011 to review evaluation models, and it shows that existing QE models are relatively rigid. For the majority, the error categories, penalties and pass/fail thresholds are the same regardless of the communication parameters involved. The models are also so detailed that applying them is time-consuming, and evaluation can only be performed on a small sample of the words translated in any one task. No standard tool is used for quality evaluation. What’s more, QE models are predicated on a static, serial model of translation production, which is ill-suited to the emerging models of ubiquitous computing.

In this report we present a Dynamic Quality Evaluation framework, which offers a more flexible approach to the common static quality evaluation models. The Dynamic QE framework is based on the three parameters of utility, time and sentiment (UTS).

The report starts by summarising best practices for reducing quality problems earlier in the content production cycle. The best practices are listed for source quality, translation partners and the translation community.

Next, the report reviews some of the main methods for quality evaluation in domains related to translation: machine translation, translator training, community translation, and (monolingual) technical communication. This review reveals that the concepts of utility, time and sentiment already play a role in quality evaluation in those areas. On this basis, the report suggests eight QE models that could contribute to the dynamic framework.

Finally, a Dynamic QE framework is proposed. The model considers the communication channel – Regulatory, Internal, or External (B2C, B2B, C2C). It is informed by the results from the content profiling exercise performed by TAUS enterprise members collaborating in this project, which shows that it is possible to map content profiles to the evaluation parameters utility, time and sentiment.

Taking these results, the report proposes how specific UTS ratings could be mapped on to specific QE models to recommend the most suitable model for each user’s needs. It concludes by suggesting the next steps to improve the current beta web-based dashboard on www.tauslabs.com, which describes the Dynamic QE framework and renders it actionable.
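The UTS-to-model mapping might be sketched as follows. The rating scale, the rule of picking the dominant parameter, and the model names are all assumptions for illustration; the report's actual mapping of content profiles to its eight QE models is richer than this.

```python
# Hypothetical sketch: map a content profile's UTS ratings to a
# recommended QE model. Scale, decision rule and model names are
# illustrative assumptions, not the TAUS recommendations.

def recommend_model(utility, time, sentiment):
    """Pick the dominant UTS parameter (each rated 1-4 here) and
    return an evaluation model suited to it."""
    ratings = {"utility": utility, "time": time, "sentiment": sentiment}
    dominant = max(ratings, key=ratings.get)
    return {
        "utility": "adequacy/fluency evaluation",
        "time": "lightweight spot-check on a sample",
        "sentiment": "community/end-user feedback",
    }[dominant]


# A perishable support-forum post: turnaround time dominates.
fast = recommend_model(utility=2, time=4, sentiment=1)

# A marketing campaign: reader sentiment dominates.
brand = recommend_model(utility=2, time=1, sentiment=4)
```

The point of the dynamic framework is precisely this step: the evaluation model is selected from the content's communication parameters rather than applied uniformly.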



Problem Statement
Proposed Solution
Industry Benefits
Business Benefits

Part 1 – Avoiding Quality Problems – Prevention is better than cure

Source Quality
Translation Partners
Translation Community

Part 2 – QE in Different Contexts

QE in Machine Translation
QE in Translator Training
QE in Community Translation
QE in Technical Communication
Conclusions from Part 2

Part 3 – Towards a Dynamic QE Framework
Communication Channel
Content Profiles
Mapping Content Profiles to Evaluation Parameters
Mapping Evaluation Parameters to Evaluation Models

Moving Forward


Appendix A - Summary of Evaluation Types

Appendix B - List of Content Types Per Company

Reviewed by

  • CA Technologies
  • Cisco
  • Dell
  • EMC
  • European Commission Directorate-General for Translation (DGT)
  • Google
  • Medtronic
  • Microsoft
  • Oracle
  • Philips
  • PTC
  • Siemens


DQF 2011 – Translation Quality Evaluation Framework

Authors: Dr. Sharon O’Brien, Rahzeb Choudhury, Jaap van der Meer and Dr. Nora Aranberri Monasterio

