This bimonthly webinar is open to buyers and vendors of translation and localization services who are interested in translation quality. Typically, two invited speakers present their use cases or introduce a topic. After the presentations, an expert panel asks questions, and the audience can raise questions that the presenters answer.
Translation quality evaluation is problematic. In 2011, TAUS conducted a survey among its members and found that, despite very detailed and strict error-based evaluation models, satisfaction levels with both translation quality and the evaluation process itself were very low. Quality evaluation (QE) models are static; there is a 'one size fits all' approach. Little consideration is given to variables such as content type, communicative function, end-user requirements, context, perishability, or mode of translation generation.
Quality is Measurement
Thursday 22 June 2017, 5 p.m. CET
The famous saying goes: "without measurement, no improvement". One of the biggest challenges in the translation industry is to measure translation quality and to quantify something that is known to be subjective. Three ingredients are needed to track translation quality: an evaluation metric integrated into a review or CAT tool, a database to collect measurement data, and a dashboard to visualize the data. In this webinar, we will see how these three components are aligned to form the backbone of VMware's quality management process.
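To make the first ingredient concrete, an error-based evaluation metric in the spirit of DQF/MQM-style review can be sketched as a weighted penalty normalized by sample size. The severity weights, error categories, and pass threshold below are hypothetical illustrations, not VMware's or TAUS's actual values:

```python
# Hypothetical severity weights for an error-based quality metric.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def quality_score(errors, word_count, threshold=15.0):
    """Return (penalty per 1,000 words, pass/fail) for a reviewed sample.

    errors: list of (category, severity) tuples logged by a reviewer.
    threshold: maximum acceptable normalized penalty (hypothetical value).
    """
    penalty = sum(SEVERITY_WEIGHTS[severity] for _category, severity in errors)
    normalized = penalty * 1000.0 / word_count
    return normalized, normalized <= threshold

# Example: a 2,000-word sample with two minor errors and one major error.
score, passed = quality_score(
    [("terminology", "minor"), ("fluency", "minor"), ("accuracy", "major")],
    word_count=2000,
)
print(round(score, 2), passed)  # 3.5 True
```

Scores of this shape are what a review tool can write to a central database, and what a dashboard can then aggregate per language, content type, or vendor.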
In 2013, the corporate globalization department at VMware began development of an automated quality management system based on the TAUS Dynamic Quality Framework (DQF). The VMware LQE is an enterprise application designed to streamline the work of globally distributed review teams, dramatically cutting time-to-market and costs for localized deliverables while empirically measuring performance against target quality goals. The tool was developed by Spartan Software.
Attendees will learn how to implement the TAUS DQF to improve process and quality in the enterprise; how sample-based review can actually work on a large scale; and how to change a costly, subjective review process into a data-driven, cost-effective one.
Panelists: Willem Stoeller (the Localization Institute), Kirill Soloviev (ContentQuo)
Bodo Vahldieck, VMware
Bodo Vahldieck is a global localization quality and terminology manager at VMware Switzerland GmbH. He has over 20 years of experience in translation, localization and globalization of software products, marketing materials and web content. Bodo is a certified project management professional and a member of the Project Management Institute. He started his career in a quality assurance department, overseeing the quality assurance processes for localized software. Since 2014, Bodo has been driving the quality management strategy for VMware. Previously, he was responsible for language and functional software quality at Autodesk.
Daniel Chin, Spartan Software
With more than a decade of consulting experience, Daniel Chin brings a wealth of technical project management, business strategy and localization experience to Spartan. He started his career customizing and automating existing WorldServer deployments for many large enterprise companies, where he quickly learned that successful deployments rely on custom automation of existing business processes. With his extensive background in translation technology and project management, Daniel quickly became a seasoned veteran of the localization industry. He is an ONTRAM expert and has delivered countless implementations of various automated localization and translation management systems. Daniel was also the founder of SeamApp and holds a BS degree in computer science from San Francisco State University.
Analyzing Translation Quality to Help Improve Machine Translation
Wednesday 11 October 2017, 5 p.m. CET
In this webinar we will present the work done by QT21, a European Commission-funded research project on machine translation that uses translation quality analysis to improve machine translation. After an introduction to the different approaches, methods, and tools available to analyze quality (manual and/or automated quality estimation, based on professional translators or on crowdsourcing), we will summarize their theoretical pros and cons. Further, we will describe the data QT21 has generated for four language pairs (chiefly post-editions and error annotations) and analyze their value for quality assessment. Finally, we will present how machine translation could be improved by incorporating this data into an Automatic Post-Editing system, which opens the possibility of building continuously learning machine translation systems.
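Post-editions can quantify MT quality directly: the fewer edits a translator needs to turn the raw MT output into the post-edited version, the better the system. The sketch below uses plain word-level Levenshtein distance; the metric commonly used in such studies (HTER, based on TER) additionally allows block shifts, so treat this as a rough, simplified illustration rather than the project's actual evaluation code:

```python
def edit_distance(hyp, ref):
    """Word-level Levenshtein distance between two token lists."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # deleting all hypothesis tokens
    for j in range(n + 1):
        d[0][j] = j  # inserting all reference tokens
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def post_edit_effort(mt_output, post_edit):
    """Edits per post-edited word: lower means less post-editing effort."""
    hyp, ref = mt_output.split(), post_edit.split()
    return edit_distance(hyp, ref) / len(ref)

mt = "the house green is big"
pe = "the green house is big"
print(round(post_edit_effort(mt, pe), 2))  # 0.4
```

Scores like this, computed over thousands of post-edited segments, are one way the value of post-edition data for quality assessment can be analyzed, and the post-editions themselves serve as training material for Automatic Post-Editing systems.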
Prof. Dr. Lucia Specia, University of Sheffield
Dr. Lucia Specia is Professor of Language Engineering at the Department of Computer Science of the University of Sheffield. Her research focuses on various aspects of data-driven approaches to multilingual language processing, with applications to Machine Translation, Quality Estimation and Text Adaptation. She is the recipient of an ERC Starting Grant on Multimodal Machine Translation (2016-2021) and is involved in other funded research projects on Machine Translation (QT21 and CRACKER) and Text Adaptation (SIMPATICO). Before joining the University of Sheffield in 2012, she was Senior Lecturer at the University of Wolverhampton, UK (2010-2011), and a research engineer at the Xerox Research Centre, France (2008-2009). She received a PhD in Computer Science from the University of São Paulo, Brazil, in 2008. She has published over 100 research papers in peer-reviewed journals and conference proceedings. She has served as area and program chair and on program committees of numerous leading international conferences and journals, and has organized a number of workshops and shared tasks in the area of NLP.
Dr. Marco Turchi, Fondazione Bruno Kessler (FBK)
Marco is a researcher in the Human Language Technology Machine Translation (HLT-MT) group at Fondazione Bruno Kessler (FBK) in Trento, Italy. Before joining FBK, he worked as a research engineer at the European Commission Joint Research Centre in Italy, at the University of Bristol and at the Xerox Research Centre Europe. He received his Ph.D. degree in Computer Science from the University of Siena, Italy, in 2006. His current research is centered on applying machine learning techniques to MT, with particular emphasis on exploiting post-edited data to improve MT quality. He is involved in various funded research projects, including the European initiatives QT21 (Quality Translation 21) and MMT (Modern Machine Translation). He has co-authored more than 80 peer-reviewed scientific publications.
Dr. Aljoscha Burchardt, DFKI GmbH
Aljoscha Burchardt is lab manager at the Language Technology Lab of the German Research Center for Artificial Intelligence (DFKI GmbH). His interests include the evaluation of (machine) translation quality and the inclusion of language professionals in the MT R&D workflow. Burchardt is a co-developer of the MQM framework for measuring translation quality. He has a background in semantic language technology.