1 November 2017, San Jose, CA (USA) hosted by eBay
Flow and overview of the program
Six topics have been selected: four in the morning and two in the afternoon. For each topic there will be a presentation by the session leader. Discussants assigned to each topic will then share a perspective or an opinion and raise relevant questions after the presentation. The presentation and the panel discussion are followed by a Q&A with the audience. After the six topics have been covered, it is time for breakout discussions. The delegates can ‘vote with their feet’ as they walk towards the different rooms and tables where the discussions take place. At the end of the day the discussion leaders will report back to the plenary with the conclusions and recommendations from their group discussions.
Who should attend and why?
In this workshop, we will see that high quality is not always as important as we think and that, very often, we evaluate quality without even knowing it. We will explore the relationship between KPIs and quality and suggest ways to evaluate and benchmark MT engines based on real-life use cases. The Translation QE Summit is recommended to buyers of translation in different industries and in government and non-government organizations (director-level and vendor/quality management functions), and to small and large language service providers (CEO, director and quality management functions).
Objectives of the QE Summit
The objectives of the QE Summit are to lay out strategies, raise awareness of industry dynamics and, where possible, agree to share and take collaborative actions. Participants will discuss relevant topics, recommend best practices and outline collaboration plans between industry and academia. The breakout sessions will provide opportunities for networking and interaction. You can watch a recording of Jaap van der Meer, TAUS founder and director, sharing the big-picture view on translation quality at the TAUS Quality Evaluation Summit in March 2013.
Crowdshaping Translation Quality
Crowdshaping, the 2014-born child of crowdsourcing, means using personal data and user behavior to shape and reshape a product or a service. One example from the online translation industry: all website content is machine translated first, and based on page views, bounce rate and engagement information, the most important content is post-edited or human translated in a later phase. How can we harvest user or customer data and feedback to offer the right level of quality? How do we measure usability and customer behavior? With different translation quality levels offered by different vendors on the market, crowdshaping is already reshaping the way we approach translation quality.
The KPIs of Quality Evaluation
Is translation QE a cost or an opportunity? In several TAUS conferences and workshops in the past, the importance of linking quality evaluation to ROI was emphasized. Quality evaluation in isolation from the bigger business picture is meaningless. How do we strike the balance between the right quality and budgetary restrictions? What is the cost of translations with the wrong quality level? How can we save on quality evaluation? Unfortunately, there’s no such thing as a free lunch! Assessing the quality of a translation can sometimes cost you even more than producing the translation itself. Nonetheless, continuous monitoring of translation quality and the sharing of evaluation data are indispensable for developing better metrics for automated QE. Without that, no advances will be made in the translation industry.
Benchmarking MT engines and collecting use cases
One of the main problems in the translation industry today is the lack of benchmarking. The output of MT engines cannot be compared to industry averages or standards because these are not yet available. Automated scores are meaningless outside the “laboratory”. At the same time, buyers of translation services are increasingly interested in translated content at different quality levels. They want to save on some content and invest more in other content. They also want to know how the different engines perform on different content in different language pairs. How can we be sure MT providers deliver what they are paid for? Benchmarking MT engines and creating a library of MT use cases are the only ways to move forward if we want to achieve credibility in the language industry.
What are we really evaluating?
When evaluating the quality of a translation, we do much more than simply give a score to the final product. We are also indirectly assessing the quality of the source text, the translator who delivers the translation, the MT engine, the translation process itself… or many other things. Today, DQF and other QE tools are used not only for the sole purpose of evaluating translations but also to profile post-editors and translators, to suggest workflow changes, to find out whether the right marketing tools and content have been applied and translated, and to check whether the translation complies with customer specifications. What do we really want to know when we evaluate a translation?
The TAUS QE Summit 2017 San Jose will be hosted by eBay at their South Campus buildings. The address is:
2065 Hamilton Ave
San Jose, CA 95125
Since the QE Summit is a 1-day event hosted at eBay rather than a hotel, we do not have a contract with a hotel for bedrooms. However, in the two days before the QE Summit we will be in downtown San Jose for our Annual Conference. Please have a look at the Annual Conference pages for information on how to book your hotel room.
Recommended hotels in the area of the eBay South Campus:
1995 South Bascom Avenue
Campbell, CA, 95008
Avg. price is $269 for a king or double bed
Walking distance or 5 min by car
655 Creekside Way
Campbell, CA 95008
Avg. rate is $369
Walking distance or 5 min by car
2761 South Bascom Avenue
Campbell, CA 95008
Avg. rate is $289 with free breakfast
10 min by car – too far to walk to the campus
Here are some comments from previous Summit participants:
- “Thanks for organizing such a great QE summit. I really enjoyed everything I saw and made some great contacts that I think can be very helpful.” – Roisin Twomey, Microsoft
- “Just wanted to say that I enjoyed very much the conference.” – Maribel Rodríguez Molina, Lionbridge
- “I really enjoyed the workshop and meeting known & new faces, thanks for inviting me along!” – Lena Marg, Welocalize