Quality validation of your models by carefully composed groups of evaluators.
Connect with users who speak your customers' languages to evaluate and test your MT models, or to add the final human touch through review and editing.
Human quality review
In addition to automatic quality metrics, TAUS provides human quality review services based on the DQF error typology. This thorough, data-driven annotation review of MT output generates clearer insights into where MT engines can be further improved.
MT Ranking
MT engines perform differently depending on various factors. The most reliable way to find the best-performing MT system is to engage human evaluators to rank output quality and gather more detailed feedback.
Editing
MT output may be further improved to meet your quality expectations, or to ensure that your data matches the style and terminology needed to train your engines. For this purpose, TAUS provides light-touch and instruction-based human editing.