Dynamic Quality Framework
5 Mar 2020

More flexibility with the DQF plugin for SDL Trados Studio

The DQF plugin now allows Trados Studio users to track and measure productivity and quality data even when working on confidential and sensitive assignments, since source and target content is not submitted to the DQF database.

3 Mar 2020

While the whole of Europe seemed to be taken by the GDPR (General Data Protection Regulation) frenzy in 2018, we welcomed it at TAUS. We have always known that data is key to process improvements, quality control, and automation, but also that this doesn't have to come at the cost of misusing personal data.

25 Feb 2020

What is quality assurance? The answer to this seemingly simple question is not that straightforward, as in the translation space the term may refer to a number of activities taking place at various stages of translation production:

7 Feb 2020

Since its launch in 2012, TAUS DQF (Dynamic Quality Framework) has gone through a few rounds of changes. Originally a quality framework built around the DQF-MQM error typology, it was upgraded to a translation performance analytics tool in 2015 with the release of the API and the DQF Dashboard. The API enabled CAT tools and translation management systems (TMS) to build plugins and connect directly to the DQF Dashboard, where users can see their reports in real time. Today, the DQF Dashboard is a robust, integrated tool offering reporting at various levels: segment level, project level, or aggregated as benchmarks (across organizations and the industry) and trends (over time).
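The multi-level reporting described above can be illustrated with a toy aggregation: segment-level records rolled up into a project-level productivity figure. The field names and the calculation below are illustrative assumptions for the sake of the sketch, not the actual DQF API schema or its metrics.

```python
def project_throughput(segments):
    """Aggregate hypothetical segment-level records into a
    project-level words-per-hour figure."""
    total_words = sum(s["word_count"] for s in segments)
    total_hours = sum(s["edit_time_ms"] for s in segments) / 3_600_000
    return total_words / total_hours

# Two segments, 300 words total, edited in half an hour in total
segments = [
    {"word_count": 100, "edit_time_ms": 900_000},
    {"word_count": 200, "edit_time_ms": 900_000},
]
print(project_throughput(segments))  # → 600.0
```

A real dashboard would aggregate the same records further, into per-organization benchmarks and time-series trends.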

11 Jul 2019

There are many interesting insights to be gathered from analyzing the translation production process. Luckily, you don't have to be a data scientist to collect data about your translation productivity or quality evaluation. TAUS offers an easy-to-use plugin for the most popular CAT tool to date, SDL Trados Studio, and here are six things you should know about it:

15 Jan 2019

The demand for post-editing of machine translation (PEMT) is growing, according to the 2018 report from Slator. But before post-editing becomes an inherent part of every production workflow, the industry should agree on the most effective methods to evaluate the quality of post-edited machine translation output.

7 Jan 2019

Thanks to DQF

In November 2011, TAUS published the foundational report for the Dynamic Quality Framework (DQF). The solution proposed in this report was a dynamic evaluation model that takes into account the changing landscape of diversification in content types and the adoption of automated translation technologies. We predicted a rapid uptake in the use of machine translation. A few years later, in 2017, the neural wave of technology took the translation world by surprise.

13 Aug 2018

Today almost everyone seems to own some sort of activity tracker that counts steps taken, calories burnt, and hours slept. The most dedicated users measure their performance every day, collect achievement badges, and proudly share their milestones on social media. In the same way, modern marketing professionals use marketing automation tools to track the performance of product campaigns, website visitor drop-off rates, and many other indicators.

8 Aug 2018

As the industry gets closer and closer to adopting the TAUS Dynamic Quality Framework (DQF) as a standard for evaluating translation productivity and quality, we decided to talk to the heads of the translation department at Dell-EMC about their experience with DQF. The company has been a long-time supporter of the vision behind TAUS DQF and, since July 2017, an active user of the integration with GlobalLink (provided by Translations.com).

16 Jul 2018

How an LSP from the Baltics uses TAUS DQF to minimize loss with more efficient and fair billing
By Mindaugas Kazlauskas, CEO of Synergium

Running an effective, profitable translation business while keeping your customers and stakeholders happy is challenging in the era of machine translation (MT), especially for a relatively small LSP like Synergium. We employ 70 people in-house who manage a few hundred freelancers, and we work with multiple MLVs as clients. When it comes to ensuring gross profit daily, the story of an effective business becomes a story of effective price negotiation and risk management.

23 Jun 2017

Why standards and metrics for objective evaluation?

Different companies use different metrics, which makes it hard to compare vendors, translators, and projects, and to benchmark translation quality against industry averages. To benchmark the quality and productivity of translation services, we need an objective approach that employs industry standards and metrics. The difference between a metric and a standard is simple: a metric is a system of measurement; a standard is a required or agreed level of quality or attainment. A metric helps ensure that a service or product complies with an agreed level of quality, i.e. the standard. In what follows, we highlight some of the standards and metrics used in translation quality management.


23 Jun 2017

Error typology is a venerable evaluation method for content quality that is very common in the modern translation and localization industry. Although it was popularized for translated multilingual content, it can just as easily be applied in a single-language context with only minor changes. Here's how to use it.
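As a rough illustration of how an error-typology evaluation turns annotations into a score: each annotated error carries a category and a severity, severities map to penalty weights, and the total penalty is normalized by the length of the evaluated sample. The categories, weights, and formula below are simplified, hypothetical stand-ins, not the official DQF-MQM scoring.

```python
# Hypothetical severity weights; real typologies define their own.
WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def quality_score(errors, word_count, weights=WEIGHTS):
    """Return a 0-100 score: each annotated (category, severity)
    error deducts its severity weight, normalized by word count."""
    penalty = sum(weights[severity] for _category, severity in errors)
    return max(0.0, 100.0 * (1 - penalty / word_count))

# One major Accuracy error and one minor Fluency error in a 200-word sample
errors = [("Accuracy", "major"), ("Fluency", "minor")]
print(round(quality_score(errors, 200), 2))  # → 97.0
```

A pass/fail decision then just compares the score to an agreed threshold, which is exactly where a metric meets a standard.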

8 Oct 2015

Once upon a time in the Land of Translations...

... we wanted to know how many words we could produce per month, per day, per hour. How much time we needed to craft human-quality translations or to post-edit machine-translated segments. And we wanted to track the edit distance. Why on Earth?! To find ways to profile translators and post-editors, to set prices, to compare vendors, to categorize content, to evaluate MT engine performance... the list is endless, but are we doing it right?
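The edit distance mentioned above is typically the Levenshtein distance between the raw MT output and its post-edited version: the fewer edits, the better the engine served the post-editor. A minimal sketch of the classic dynamic-programming computation:

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum number of single-character
    insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))      # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]                      # distance from a-prefix to ""
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1             # free on a character match
            curr.append(min(prev[j] + 1,            # deletion
                            curr[j - 1] + 1,        # insertion
                            prev[j - 1] + cost))    # substitution or match
        prev = curr
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # → 3
```

In practice, production tools often report this normalized by segment length so that long and short segments are comparable.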