Training Data Sourcing Methods
4 Oct 2021
3 minute read
Training data can be sourced via synthetic data generation, public datasets, data marketplaces, and crowd-sourced platforms.

Training data can be sourced from many different places, depending on your machine learning application. Data can be found just about anywhere, from free publicly available datasets to privately held data available for purchase to crowdsourced data. Datasets collected from real-world activity like these are known as organic, or naturally occurring, datasets; data that is generated artificially is known as synthetic data.

Synthetic Data

Synthetic datasets, as mentioned above, are one common option for training data. The benefit of using synthetic data is that it can be generated internally under any given set of applicable constraints. It can also be produced in abundance, has a short generation-to-training turnaround, and is straightforward to create when the prior conditions are known. The downside is that synthetic data production can be costly in compute and engineering resources.
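
As a minimal sketch of what this looks like in practice, a library such as scikit-learn can generate a labeled synthetic dataset under known prior conditions (feature count, class balance, noise level). All parameter values below are illustrative assumptions, not recommendations:

```python
# Minimal sketch: generating a labeled synthetic dataset with scikit-learn.
# Every parameter value here is illustrative.
from sklearn.datasets import make_classification

# Generate 10,000 samples under known prior conditions:
# 20 features (10 informative) and 2 balanced classes.
X, y = make_classification(
    n_samples=10_000,
    n_features=20,
    n_informative=10,
    n_classes=2,
    weights=[0.5, 0.5],   # known class balance
    flip_y=0.01,          # small amount of label noise
    random_state=42,      # reproducible generation
)

print(X.shape, y.shape)  # (10000, 20) (10000,)
```

Because the generating conditions are fully specified, a dataset like this can be regenerated or scaled up on demand, which is exactly the short turnaround described above.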

Public Datasets

Other alternatives include pulling datasets from platforms like Google Dataset Search or Kaggle, where the datasets on offer are often maintained by government agencies or enterprise companies. Some companies have in-house teams, or use a data labeling or data collection service, to acquire the training data they are looking for.
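
As one illustration of pulling a public dataset programmatically, the sketch below uses the Kaggle API. The dataset slug is a placeholder, and a configured API token (~/.kaggle/kaggle.json) is assumed:

```python
# Illustrative sketch: downloading a public dataset via the Kaggle API.
# Assumes the kaggle package is installed (pip install kaggle) and an
# API token is configured at ~/.kaggle/kaggle.json.
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()

# "owner/dataset-name" is a placeholder slug, not a real dataset reference.
api.dataset_download_files("owner/dataset-name", path="data/", unzip=True)
```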

Crowd-sourced Datasets

Crowd-sourced data is another option for sourcing training data, depending on the given application. The TAUS HLP Platform is one example of a crowd-sourced data solution: through the platform, TAUS offers tailor-made datasets based on the specific requirements of an application.

Marketplaces

How and where you source your training dataset, whether organic or synthetic, ultimately depends on what you are using it for. If you wish to train an NLP model, for example, you will need a large dataset of audio or text data to train your model accordingly. Data marketplaces cater to this need: the TAUS Data Marketplace, for instance, offers hundreds of datasets in numerous world languages.
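
As a minimal sketch of loading bilingual text data for NLP training, the example below assumes the Hugging Face datasets library and uses the publicly available opus_books English-French corpus purely as an illustration; datasets purchased from a marketplace such as TAUS would typically be delivered through that platform's own channels.

```python
# Minimal sketch: loading a parallel (bilingual) text corpus for NLP
# training with the Hugging Face `datasets` library. The opus_books
# English-French corpus is used here purely as an illustration.
from datasets import load_dataset

books = load_dataset("opus_books", "en-fr", split="train")

# Each example is a translation pair keyed by language code.
print(books[0]["translation"])  # {'en': '...', 'fr': '...'}
```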

 

Author
Husna Sayedi

Husna is a data scientist who studied Mathematical Sciences at the University of California, Santa Barbara, and holds a master's degree in Engineering (Data Science) from the University of California, Riverside. She has experience in machine learning, data analytics, statistics, and big data. She enjoys technical writing when she is not working and is currently responsible for the data science-related content at TAUS.
