Since its launch in 2012, TAUS DQF (Dynamic Quality Framework) has gone through a few rounds of changes. Originally a quality framework built around the DQF-MQM error typology, it was upgraded to a translation performance analytics tool in 2015 with the release of the API and the DQF Dashboard. The API enabled CAT tools and translation management systems (TMS) to build plugins and connect directly to the DQF Dashboard, where users can see their reports in real time. Today, the DQF Dashboard is an integrated and robust tool offering reporting on various levels: segment level, project level, or aggregated as benchmarks (across organization and industry) and trends (over time).
If you are already using one of the DQF plugins to measure the performance of your translation resources, or are considering starting, this article will answer the most common questions.
Simply put, DQF is software that tracks and measures translation productivity and quality in a standardized way. It consists of:
To measure productivity and quality, DQF tracks the full translation workflow, including the translation/post-editing, correction, and review phases. It also records the translation memory (TM) and machine translation (MT) input, if they are used.
If you only use post-editing, or don't differentiate between the correction and review phases, that is fine: DQF will show the data available for your workflow.
The benefits of DQF differ by user. Some companies only use the quality review, others are more interested in the fully automatic tracking of productivity, while some want to compare their performance to industry averages, or compare the performance of their vendors. The benefit they all share is increased efficiency: real-time, data-driven reporting on the DQF Dashboard, business intelligence from the collected data, and access to industry benchmarks on productivity, correction density, error density, and error weights.
When the DQF plugin is enabled in the CAT tool/TMS, most of the data collection happens in the background, without any user intervention. Time spent, word and character counts, TM match rate, the MT engine used, the number of edits made during post-editing, and the corrections and their counts are all tracked automatically.
The part that requires user input is the quality review, known in DQF as error annotation: the reviewer/proofreader/reviser labels the errors found in the translated text with a category and gives them a score. This data is important if you'd like to have a quality score and a deeper understanding of the types of errors found in the translation. Even though it is not fully automated, error annotation is integrated into the native environment of the CAT tool/TMS and has minimal operational impact: the error categories and scores appear in a drop-down for the user to choose from.
Quality is measured in DQF in two ways: 1. quantitatively, expressed in the number of corrections, the number of errors, and an overall pass/fail score; and 2. qualitatively, indicated by the types of errors detected.
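To make the quantitative side concrete, here is a minimal sketch of how severity-weighted error annotations can be rolled up into an error density and a pass/fail verdict. The category names, severity weights, and pass/fail threshold below are hypothetical illustrations, not the official DQF metric, which the article does not spell out.

```python
from dataclasses import dataclass

# Hypothetical severity weights; DQF's actual weights may differ.
SEVERITY_WEIGHTS = {"neutral": 0, "minor": 1, "major": 5, "critical": 10}

@dataclass
class ErrorAnnotation:
    category: str   # e.g. "Accuracy", "Fluency", "Terminology" (illustrative)
    severity: str   # key into SEVERITY_WEIGHTS

def quality_score(annotations, word_count, threshold=1.0):
    """Return the weighted error density per 100 words and a pass/fail flag.

    The threshold of 1.0 weighted errors per 100 words is an assumption
    for illustration only.
    """
    penalty = sum(SEVERITY_WEIGHTS[a.severity] for a in annotations)
    density = penalty / word_count * 100
    return density, density <= threshold

# Usage: two annotated errors in a 1,000-word translation.
annotations = [
    ErrorAnnotation("Accuracy", "major"),
    ErrorAnnotation("Fluency", "minor"),
]
density, passed = quality_score(annotations, word_count=1000)
print(f"weighted error density: {density:.2f} per 100 words, pass={passed}")
# → weighted error density: 0.60 per 100 words, pass=True
```

The qualitative view in this sketch would simply be the distribution of `category` values across annotations, which is what the Dashboard aggregates into error-type reports.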
For the quality review done through error annotation, TAUS developed an error typology in consultation with many parties in the industry. This typology, referred to as DQF, is the same as the DQF-MQM harmonized error typology. The template is available for download here.
If you use the DQF error typology template to score quality offline, or a CAT tool/TMS that has no integration with the DQF Dashboard, there is no guarantee that you are compliant with the DQF standard measurements. Translation tool providers sometimes claim to have integrated DQF when, in fact, they have only replicated the DQF error typology in their software. In these cases, there is no way to ensure that you are using the most recent metrics or that your results will be comparable to the industry averages on the DQF Dashboard.
The DQF integrations that are 100% complete at the moment are with XTM Cloud, MateCat, GlobalLink and the SDL suite of translation products. You can view them all on this page. If your CAT tool/TMS is not on the list, but you’d like to have it integrated, the best thing is to contact your translation technology provider. TAUS is happy to support you with the integration.
After you install the plugin for your CAT tool, you'll need to create a TAUS account so that we can register you as a user on the DQF Dashboard. Set up a project with DQF and complete the translation/review, and your results will appear on the DQF Dashboard. It is that simple!
Other questions about DQF? Please check our FAQ page, or request a personalized demo.
Milica is a marketing professional with over 10 years in the field. As TAUS Head of Product Marketing she manages the positioning and commercialization of TAUS data services and products, as well as the development of taus.net. Before joining TAUS in 2017, she worked in various roles at Booking.com, including localization management, project management, and content marketing. Milica holds two MAs in Dutch Language and Literature, from the University of Belgrade and Leiden University. She is passionate about continuously inventing new ways to teach languages.
While the whole of Europe seemed to be taken by the GDPR (General Data Protection Regulation) frenzy in 2018, we welcomed it at TAUS. We always knew that data is the key to process improvements, quality control, and automation, but that it doesn't have to come at the cost of misusing personal data.
What is quality assurance? The answer to this seemingly simple question is not that straightforward, as in the translation space this term may refer to a number of activities, taking place in various stages of translation production: