Eight Years of DQF Bring Us Closer to Fixing the Operational Gap
07/01/2019
7 minute read
Close to 200 million words have been processed by DQF in the past year. Thousands of translators and reviewers have DQF plugged into their work environment. Now with BI Bulletins, Confidence Score, My DQF Toolbox and DQF Reviewer, it is bringing the industry one step closer to fixing the operational gap.

Thanks to DQF

In November 2011, TAUS published the foundational report for the Dynamic Quality Framework (DQF). The solution proposed in this report was a dynamic evaluation model that takes into account the diversifying landscape of content types and the adoption of automated translation technologies. We predicted a rapid uptake in the use of machine translation. A few years later, in 2017, the neural wave of technology took the translation world by surprise.

Now, MT is everywhere, and so is DQF. DQF is well on its way to becoming a world standard for translation evaluation. It is embedded in the day-to-day translation production systems of large enterprises and of small and medium language service providers, tracking translation performance in real time. Close to 200 million words have been processed by DQF in the past year. Thousands of translators and reviewers have DQF plugged into their work environment. Both language professionals and managers are discovering a new world of data and how this data can give them deeper insight into translation operations. Thanks to DQF.

The DQF Journey

And yet, DQF has not completely lived up to the vision we shared in our foundational report in 2011. Not yet. Our vision is for DQF to serve as a ‘neutralizer’ and standard metric in every conversation in the translation industry about service expectations and deliverables. We envision that everything can be measured and ‘translated’ into data points and that we can set benchmark scores and tolerances, much as in most other industries. We imagine that operational gaps in plain translation services can easily be bridged this way. What we realize, though, is that some travel fast on this DQF journey while others have to go slower.

Following consultations with the member community and with the DQF User Group, TAUS has embarked on a roadmap that will help DQF users continue on their journey in 2019 and perhaps even accelerate. Below we share a preview of this roadmap.

1. DQF Bulletins

With the many data points DQF is collecting, we can draw hundreds of different reports and investigate even more correlations. That’s great for the number crunchers and data geeks among us, but the average user has only a few questions she wants answered. What is my average throughput per hour, and how does that compare to the industry average? How much of our total volume this past month has gone through MT, and how does that compare to the industry average?

Working closely with the DQF User Group, we are starting to establish a better understanding of the KPIs that quality managers like to track. Edit density and productivity turn out to be meaningful data points for estimating translation effort. These are the data points we will focus on in the first edition of the new Business Intelligence Bulletin series, which will serve as a barometer for buyers, providers and producers of translation around the world.
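To make these two KPIs concrete, here is a minimal sketch of how edit density and productivity could be computed from raw segment data. The exact formulas DQF uses are not spelled out in this post, so the definitions below (a character-level edit-distance ratio and words per hour) are illustrative assumptions, not the official calculations.

```python
# Illustrative sketch only: the exact formulas DQF uses are not given in this
# post, so edit density and productivity are defined here as common-sense proxies.
from difflib import SequenceMatcher

def edit_density(mt_output: str, post_edited: str) -> float:
    """Share of the segment changed during post-editing (0.0 = untouched, 1.0 = rewritten)."""
    similarity = SequenceMatcher(None, mt_output, post_edited).ratio()
    return 1.0 - similarity  # more editing effort -> higher value

def productivity(words_translated: int, seconds_spent: float) -> float:
    """Throughput expressed in words per hour."""
    return words_translated / (seconds_spent / 3600.0)

print(edit_density("The cat sat on mat.", "The cat sat on the mat."))  # roughly 0.12
print(productivity(words_translated=450, seconds_spent=1800))          # 900.0
```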

DQF Bulletins will be published periodically starting in Q1 of 2019 and will be publicly available. Download the first DQF BI Bulletin >>>

2. My DQF Toolbox

DQF is great for tracking one’s overall performance and quality against the company’s and industry averages. Reports can be broken down by vendor, language pair, process type and content type, but not by individual project. This shortcoming will be fixed with the release of My DQF Toolbox, a feature set that allows users to assign labels to projects and filter them according to their own custom needs. My DQF Toolbox will enhance the functionality of DQF and turn it into a full project reporting system. In addition to project labelling, users will also be able to group vendors into custom groups and perform internal benchmarking by comparing specific projects or project groups side by side.
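As an illustration of the idea behind project labels and custom groups, here is a small sketch using hypothetical project records. It is not the actual My DQF Toolbox interface, just the label-based filtering and side-by-side comparison pattern it is meant to enable.

```python
# Hypothetical project records -- a sketch of label-based filtering, not the
# actual My DQF Toolbox data model or API.
from dataclasses import dataclass, field

@dataclass
class Project:
    name: str
    vendor: str
    labels: set[str] = field(default_factory=set)
    edit_density: float = 0.0

projects = [
    Project("Release notes Q1", "Vendor A", {"marketing", "rush"}, 0.18),
    Project("UI strings 2.4",   "Vendor B", {"software"},          0.09),
    Project("Release notes Q2", "Vendor A", {"marketing"},         0.14),
]

def filter_by_label(items: list[Project], label: str) -> list[Project]:
    """Return only the projects carrying the given custom label."""
    return [p for p in items if label in p.labels]

# Compare a custom project group against its own average
marketing = filter_by_label(projects, "marketing")
avg = sum(p.edit_density for p in marketing) / len(marketing)
print(f"Average edit density for 'marketing' projects: {avg:.2f}")  # 0.16
```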

My DQF Toolbox will be available in Q1 for all DQF users.

3. DQF Confidence Score

DQF tracks both the productivity and the quality of translation. DQF users like the productivity tracking in particular, because it does not require any additional effort or human intervention. Once DQF is switched on, all actions performed on each segment are tracked and counted with a timer. The reports are unambiguous and not subject to human judgment. Quality tracking, on the other hand, is only possible if reviewers read through the translated content and make error annotations. This costs extra time and effort and is subject to human judgment.
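For readers wondering what “tracked and counted with a timer” amounts to in practice, below is a minimal sketch of per-segment time tracking as a CAT-tool plugin might implement it. This is not the actual DQF plugin code, only an illustration of the idea: start the clock when the translator enters a segment, stop it when they leave.

```python
# Minimal sketch of per-segment time tracking (not the actual DQF plugin code).
import time
from dataclasses import dataclass, field

@dataclass
class SegmentTimer:
    segment_id: str
    seconds: float = 0.0
    _start: float = field(default=0.0, repr=False)

    def focus(self) -> None:
        """Translator enters the segment: start the clock."""
        self._start = time.monotonic()

    def blur(self) -> None:
        """Translator leaves the segment: accumulate the editing time."""
        self.seconds += time.monotonic() - self._start

timer = SegmentTimer("seg-001")
timer.focus()
# ... translator edits the segment ...
timer.blur()
print(f"{timer.segment_id}: {timer.seconds:.1f}s of editing time")
```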

What if we could generate an automatic report on the quality of the translation without human judgment and intervention? And what if we could also apply automatic quality evaluation to the source text to assess its translatability? Well, we can. The answer lies in the language data, of course.

The DQF Confidence Score will be powered by the TAUS Data Cloud. Its close to 40 billion words of human-quality translation in 700 language pairs allow us to build powerful language models that will serve as sophisticated automatic quality checkers on the translated content submitted by DQF API users.
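TAUS has not published the Confidence Score formula, but the general principle of using a language model as an automatic quality checker can be illustrated with a toy example: score a sentence by how expected its word sequence is under a model trained on trusted human translations. The tiny bigram model below is only a sketch of that principle, not the actual scoring method.

```python
# Toy bigram language model as a stand-in for "language models as automatic
# quality checkers". Purely illustrative; not the DQF Confidence Score formula.
import math
from collections import Counter

corpus = ["the invoice is attached", "the report is attached", "the invoice is overdue"]
unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    tokens = ["<s>"] + sent.split()
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

def confidence(sentence: str) -> float:
    """Average log-probability per word; higher means more fluent/expected."""
    tokens = ["<s>"] + sentence.split()
    logp = 0.0
    for prev, word in zip(tokens, tokens[1:]):
        # add-one smoothing so unseen bigrams do not zero out the score
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(unigrams))
        logp += math.log(p)
    return logp / (len(tokens) - 1)

print(confidence("the invoice is attached"))   # higher (well-formed, in-domain)
print(confidence("attached invoice the is"))   # lower (garbled word order)
```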

This brand new DQF feature will allow users to select the most appropriate translation method based on the confidence score for the source and pre-translated content, and to pick a fitting review approach based on the confidence scores attributed to the pre-translated and final translations.

DQF Confidence Score will become available for a first set of languages in Q3 of 2019.

4. DQF Reviewer

While the number of enterprises and LSPs using the DQF plugins, the DQF Dashboard and the Data Connector keeps growing, the population of offline users is growing much faster. We estimate that a couple of thousand companies are already using DQF offline. This number is likely to grow even faster when DQF receives its official ASTM standard recognition in 2019. We define offline DQF users as companies that apply the DQF-MQM error typology but do not use the DQF API or DQF plugins and are therefore not able to track their translation productivity and quality on the DQF Dashboard.

For these offline users TAUS is planning to bring out the DQF Reviewer, an app for the review and investigation of translations. It is a standalone tool, not integrated into a CAT system or translation workflow system. Managers in LSP organizations or in buyers’ translation organizations can import translated files into the DQF Reviewer and ask reviewers to perform reviews, annotations and corrections. Reports will be available on the DQF Dashboard and can also be downloaded using the DQF Data Connector. The DQF Reviewer is, in a way, a revamp of the ‘old’ TAUS DQF Tools that many are still using.

The DQF Reviewer will offer, among others, the following features: adequacy and fluency review, productivity testing, error annotation and confidence scoring. Offline DQF users may want to use the DQF Reviewer on an ad-hoc basis for the validation and review of new MT engines or new human translation resources, or periodically for benchmarking and business reviews.
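To give a flavor of what error annotation under the DQF-MQM typology involves, here is an illustrative sketch of an annotation record and a simple aggregate quality score. The category names reflect the harmonized DQF-MQM top-level categories; the severity penalty weights and the per-1,000-words scoring are placeholders for illustration, not the tool’s official defaults.

```python
# Illustrative sketch of an error annotation under the DQF-MQM error typology.
# The penalty weights and scoring below are placeholders, not official defaults.
from dataclasses import dataclass

CATEGORIES = {"Accuracy", "Fluency", "Terminology", "Style",
              "Design", "Locale convention", "Verity", "Other"}
SEVERITY_WEIGHTS = {"neutral": 0, "minor": 1, "major": 5, "critical": 25}  # placeholders

@dataclass
class ErrorAnnotation:
    segment_id: str
    category: str      # one of CATEGORIES
    severity: str      # one of SEVERITY_WEIGHTS
    comment: str = ""

def quality_score(annotations: list[ErrorAnnotation], word_count: int) -> float:
    """Penalty points per 1,000 words -- lower is better."""
    penalty = sum(SEVERITY_WEIGHTS[a.severity] for a in annotations)
    return penalty / word_count * 1000

review = [ErrorAnnotation("seg-001", "Terminology", "minor", "wrong product name"),
          ErrorAnnotation("seg-007", "Accuracy", "major", "negation dropped")]
print(quality_score(review, word_count=1200))  # 5.0
```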

A first release (not including all features mentioned above) of the DQF Reviewer will be available in Q2 of 2019. Integration with DQF Dashboard and DQF Data Connector will be made available in Q3. 

What else…?

What else could be improved about DQF? Ultimately, we don’t know. Yes, we know people are concerned about data sharing, but there isn’t much that we can do about that from the TAUS side. We have a solid legal framework in place: the TAUS DQF Terms of Use. These conditions were set up and reviewed in close cooperation with TAUS members. In 2017 Microsoft conducted a full technical audit of the DQF software and infrastructure, with positive conclusions.

If there’s anything else that TAUS can do to close the operational gap, please let us know by commenting below or writing directly to dqf@taus.net. 

Author

Jaap van der Meer founded TAUS in 2004. He is a language industry pioneer and visionary, who started his first translation company, INK, in The Netherlands in 1980. Jaap is a regular speaker at conferences and author of many articles about technologies, translation and globalization trends.
