Husna Sayedi

Husna is a data scientist who studied Mathematical Sciences at the University of California, Santa Barbara, and holds a master's degree in Engineering, Data Science from the University of California, Riverside. She has experience in machine learning, data analytics, statistics, and big data. When she is not working, she enjoys technical writing and is currently responsible for the data science-related content at TAUS.

18 Nov 2021

The Google Research team recently published a paper titled Data Cascades in High-Stakes AI. The six authors, Nithya Sambasivan, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora Aroyo, shed light on a profound pattern of data undervaluation in high-stakes fields where AI models are critical and prevalent. They conclude that although there is great interest in creating MT and ML models, there is far less interest in doing the actual data work.

4 Nov 2021

What is Explainable AI? 

As AI becomes more prominent in high-stakes industries like healthcare, education, construction, the environment, autonomous machines, and law enforcement, the need to trust its decision-making process grows. These predictions often need to be extremely accurate, e.g. in critical life-or-death situations in healthcare. Because of the critical and direct impact AI has on our day-to-day lives, decision-makers need more insight and visibility into the mechanics of AI systems and the prediction process. At present, often only technical experts such as data scientists or engineers understand the backend processes and algorithms being used, like highly complex deep neural networks. This lack of interpretability has proven to be a source of disconnect between technical and non-technical practitioners. In an effort to make these AI systems more transparent, the field of Explainable AI (XAI) came into existence.
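
To make this concrete, here is a minimal sketch of one common XAI technique, feature attribution with the shap library; the dataset and model below are illustrative assumptions, not something from the article.

```python
# A minimal XAI sketch: attributing a tree model's predictions to its
# input features with SHAP. Dataset and model are illustrative assumptions.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer assigns each prediction a per-feature contribution,
# giving non-technical stakeholders a view into "why" the model decided.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Visualize which features drive the model's output overall.
shap.summary_plot(shap_values, X)
```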

4 Oct 2021

A machine learning algorithm uses data to learn and make decisions. The algorithm develops confidence in its decisions by understanding the underlying patterns, relationships, and structures within a training dataset. The higher the quality of the training data, the better the algorithm will perform. So what is training data exactly?
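
As a small illustration of an algorithm learning patterns from a training set and being scored on held-out data, here is a hedged scikit-learn sketch; the dataset and classifier are assumptions chosen for demonstration.

```python
# A minimal sketch of a model learning from training data.
# The dataset and classifier are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Hold out part of the data so we can measure what the model learned.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # the model learns patterns from the training set

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```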

4 Oct 2021

Training data is perhaps one of the most integral pieces of machine learning and artificial intelligence. Without it, machine learning and artificial intelligence would be impossible. Models would not be able to learn, make predictions, or extract useful information without learning from training data. It’s safe to say that training data is the backbone of machine learning and artificial intelligence. 

4 Oct 2021

Training data is used in three primary types of machine learning: supervised, unsupervised, and semi-supervised learning. In supervised learning, the training data must be labeled. This allows the model to learn a mapping from the features to their associated labels. In unsupervised learning, labels are not required in the training set. Unsupervised machine learning models look for underlying structures in the features of the training set to make generalized groupings or predictions. A semi-supervised training dataset, used in semi-supervised learning problems, contains a mix of labeled and unlabeled examples.
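
The contrast between the three settings is easiest to see side by side in code. Below is a minimal sketch; the dataset, models, and the 90% label-masking rate are illustrative assumptions.

```python
# Supervised vs. unsupervised vs. semi-supervised on the same data.
# Dataset, models, and masking rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = load_digits(return_X_y=True)

# Supervised: every training example has a label.
supervised = LogisticRegression(max_iter=5000).fit(X, y)

# Unsupervised: no labels; the model looks for structure on its own.
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

# Semi-supervised: mark ~90% of labels as unknown (-1) and self-train.
y_partial = y.copy()
y_partial[np.random.default_rng(0).random(len(y)) < 0.9] = -1
semi = SelfTrainingClassifier(LogisticRegression(max_iter=5000)).fit(X, y_partial)
```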

4 Oct 2021

The amount of training data you need depends on many variables: the model you use, the task you perform, the performance you wish to achieve, the number of features available, the noise in the data, the complexity of the model, and more.

4 Oct 2021

Data cleaning is an essential step in machine learning and takes place before the model training step. It is important because your machine learning model will produce results only as good as the data you feed it. If your dataset contains too much noise, your model will capture that noise as a result. Furthermore, messy data can break your model and cause model accuracy rates to decrease. Examples of data cleaning techniques include syntax error removal, data normalization, duplicate removal, outlier detection/removal, and fixing encoding issues.
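
A few of these cleaning techniques look like this in pandas; the column names, sample values, and the 3-standard-deviation outlier threshold are illustrative assumptions.

```python
# A minimal data-cleaning sketch with pandas.
# Column names, values, and thresholds are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "city":  ["NYC", "nyc ", "LA", "LA", None],
    "price": [100.0, 100.0, 95.0, 95.0, 10_000.0],
})

df["city"] = df["city"].str.strip().str.upper()  # normalize text values
df = df.drop_duplicates()                        # remove duplicate rows
df = df.dropna(subset=["city"])                  # drop rows missing key fields

# Simple outlier removal: keep prices within 3 standard deviations.
mean, std = df["price"].mean(), df["price"].std()
df = df[(df["price"] - mean).abs() <= 3 * std]
```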

4 Oct 2021

Training data can be sourced from many different places, depending on your machine learning application. Data can be found just about anywhere, from free publicly available datasets to privately held data available for purchase to crowdsourced data. These types of datasets are known as organic data or naturally occurring datasets.

7 Sep 2021

What is Intent Recognition and Why is it Important? 

As our society continues to rely on technologies such as social network apps, emails, chatbots, and more, the volume and availability of text data continue to multiply. With the popularity of online and phone services, companies have had a difficult time keeping up. Intent recognition models have come to the rescue, helping flag and sort through this vast amount of text data.
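
At its core, intent recognition is a text classification task. Here is a minimal scikit-learn sketch; the utterances and intent labels are made up purely for illustration.

```python
# A minimal intent-recognition sketch: TF-IDF features + linear classifier.
# The utterances and intent labels are made-up illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I want to cancel my subscription",
    "How do I cancel my plan?",
    "What are your opening hours?",
    "When are you open on weekends?",
]
intents = ["cancel", "cancel", "hours", "hours"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, intents)

print(model.predict(["please cancel my account"]))  # -> ['cancel']
```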

7 Jun 2021

Sentiment analysis is a subfield of Natural Language Processing (NLP) in which the overall sentiment of a body of text is learned. It is primarily used to understand customer satisfaction, social popularity, and feedback on a product from the general public through monitoring social or public data. Several types of sentiment analysis exist and are common in the real world.
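
As a quick illustration, here is a minimal sentiment-scoring sketch using NLTK's VADER analyzer; the example sentences are assumptions for demonstration.

```python
# A minimal sentiment-analysis sketch with NLTK's VADER analyzer.
# The example sentences are illustrative assumptions.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

for text in ["I love this product!", "The service was terrible."]:
    scores = analyzer.polarity_scores(text)
    print(text, "->", scores["compound"])  # compound score in [-1, 1]
```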

1 Jun 2021

Perhaps the most pivotal step in a machine learning application is the data preparation phase. On average, data scientists spend more time preparing and transforming datasets than on any other task, including training the model itself. Each machine learning algorithm requires data in a certain format, and transforming the data into that format has a massive impact on the model's performance and predictive power. Understanding and implementing the proper data preparation methods will strengthen any machine learning algorithm. Below is a brief guide to common data preparation techniques.
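
Two of the most common preparation steps, scaling numeric columns and encoding categorical ones, can be composed like this in scikit-learn; the column names and sample values are illustrative assumptions.

```python
# A minimal data-preparation sketch: scale numeric and encode categorical
# columns in one step. Column names and values are illustrative assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

df = pd.DataFrame({
    "age":    [25, 32, 47],
    "income": [40_000, 65_000, 90_000],
    "city":   ["NYC", "LA", "NYC"],
})

prep = ColumnTransformer([
    ("scale",  StandardScaler(), ["age", "income"]),  # zero mean, unit variance
    ("encode", OneHotEncoder(),  ["city"]),           # one column per category
])

X = prep.fit_transform(df)  # ready to feed into a model
print(X)
```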

29 Apr 2021

Machine learning (ML) is all around us. From spam detection in your mail inbox, to text or document classification, to self-driving cars, to speech recognition, and more, machine learning algorithms are present in our everyday lives. A machine learning model is an algorithm that learns from data and makes predictions automatically, without being explicitly programmed. In this article, we will review the main factors to consider when selecting the right ML model, given your application and data.
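
One practical way to weigh candidate models against your data is cross-validation. Below is a minimal sketch; the dataset and the three candidate models are assumptions chosen for illustration.

```python
# A minimal model-selection sketch: compare candidates with cross-validation.
# The dataset and candidate models are illustrative assumptions.
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)

candidates = {
    "logistic":      LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(random_state=0),
    "svm":           SVC(),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold accuracy
    print(f"{name}: {scores.mean():.3f}")
```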

19 Apr 2021

Visual data plays an integral part in our society and technology today. A massive number of images are processed daily, from using facial recognition to unlock your mobile phone to detecting lane departure while driving. Any technology processing image data likely relies on image annotation. Image annotation is data labeling in the context of visual data such as video or images: the act of labeling objects within an image. This step is crucial for training any supervised machine learning model on image data for tasks such as image segmentation, image classification, and object detection. As the volume of visual data being processed continues to climb, annotating images for a given business application can prove to be a time-consuming and challenging task. Hence, it is worthwhile to carefully choose the best image annotation tools and techniques based on the task at hand.
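
Annotations are usually stored as structured records alongside each image. Here is a minimal sketch of a COCO-style bounding-box annotation; the file name, ids, and coordinates are illustrative assumptions.

```python
# A minimal sketch of a COCO-style bounding-box annotation record.
# File name, ids, and coordinates are illustrative assumptions.
import json

annotation = {
    "images": [
        {"id": 1, "file_name": "street_001.jpg", "width": 1280, "height": 720},
    ],
    "categories": [
        {"id": 1, "name": "car"},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            # [x, y, width, height] of the labeled object's bounding box
            "bbox": [412, 220, 180, 95],
        },
    ],
}

print(json.dumps(annotation, indent=2))
```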

8 Apr 2021

As data science methodologies become increasingly advanced, new tools are created within the realm of artificial intelligence (AI). One such emerging and increasingly commonplace tool is known as synthetic data. Synthetic data is artificial data created by a computer program, hence the name “synthetic”. Although synthetic data is not a novel concept, the technological resources and computing power we have today have made this type of data grow in popularity.
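
Generating synthetic data can be as simple as calling a library utility that samples an artificial dataset. Below is a minimal sketch with scikit-learn's make_classification; every parameter value is an assumption for illustration.

```python
# A minimal synthetic-data sketch: a computer-generated labeled dataset.
# All parameter values are illustrative assumptions.
from sklearn.datasets import make_classification

X, y = make_classification(
    n_samples=1000,   # number of synthetic examples
    n_features=10,    # number of synthetic features
    n_informative=5,  # features that actually drive the label
    random_state=42,  # reproducibility
)

print(X.shape, y.shape)  # (1000, 10) (1000,)
```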

29 Mar 2021

What is NLP? 

The discipline concerned with extracting information from text data is known as natural language processing, or NLP for short. NLP has many different use cases in artificial intelligence (AI) and machine learning (ML) tasks. Text processing is a pre-processing step during training data preparation for any text-based machine learning model, such as an NLP model. Some common use cases of NLP include spam filtering, sentiment analysis, topic analysis, information retrieval, data mining, and language translation.
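
Typical text pre-processing steps, lowercasing, tokenizing, and removing stopwords, look like this with NLTK; the sample sentence is an illustrative assumption.

```python
# A minimal text pre-processing sketch with NLTK.
# The sample sentence is an illustrative assumption.
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

nltk.download("punkt")      # one-time tokenizer download
nltk.download("stopwords")  # one-time stopword list download

text = "NLP extracts useful information from raw text data."
tokens = word_tokenize(text.lower())  # lowercase and tokenize
stop = set(stopwords.words("english"))
cleaned = [t for t in tokens if t.isalpha() and t not in stop]

print(cleaned)  # e.g. ['nlp', 'extracts', 'useful', 'information', ...]
```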

17 Mar 2021

Data labeling is a key factor in artificial intelligence (AI), enabling a machine learning model to learn and output accurate predictions.

11 Mar 2021

We live in a data-driven world where much of our society’s key decision-making is based on data, from governmental to industrial to commercial contexts. Data science and AI (artificial intelligence) would not be possible without an abundance of data. Now that data has become mainstream in almost every industry, the quality of that data has become increasingly important.