The 10 Best New Features in Scikit-Learn 0.24 🔎

Scikit-learn version 0.24.0 is packed with new features for machine learning. The ten best: faster ways to select hyper-parameters, ICE plots, histogram-based gradient boosting improvements, forward selection for feature selection, fast approximation of polynomial feature expansion, SelfTrainingClassifier for semi-supervised learning, mean absolute percentage error (MAPE), OneHotEncoder support for missing values, OrdinalEncoder handling of new values in the test set, and recursive feature elimination (RFE) accepting a proportion of features to retain.
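As an illustration of the first item on that list, here is a minimal sketch of successive-halving hyper-parameter search as it ships in 0.24; the estimator is still experimental there (hence the enable_* import), and the random-forest grid below is an arbitrary example rather than one taken from the article.

```python
# Successive halving: evaluate many candidates on few resources, keep the best,
# and give the survivors more resources at each iteration.
from sklearn.experimental import enable_halving_search_cv  # noqa: F401
from sklearn.model_selection import HalvingGridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, random_state=0)
param_grid = {"max_depth": [3, 5, None], "min_samples_split": [2, 5, 10]}

search = HalvingGridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    factor=3,       # keep roughly the best third of candidates per iteration
    random_state=0,
).fit(X, y)
print(search.best_params_)
```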

Data Normalization with Python scikit-learn

In this tutorial, I use the scikit-learn library to perform normalization, while in my previous tutorial I dealt with data normalization using the pandas library. I use the same dataset as in my previous tutorial, so the results can be compared; indeed, the two methodologies produce the same results.
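A minimal sketch of what that looks like with scikit-learn, assuming a MinMaxScaler and a small made-up array in place of the tutorial's dataset:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

scaler = MinMaxScaler()              # rescales each column to the [0, 1] range
X_scaled = scaler.fit_transform(X)
print(X_scaled)
```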

New Features of Scikit-Learn Version 0.24

An overview of the most important features in version 0.24 of scikit-learn, Python’s machine-learning library: Sequential Feature Selector (SFS); Individual Conditional Expectation (ICE) plots; successive halving estimators; the semi-supervised SelfTrainingClassifier; native categorical features in HistGradientBoosting; and what NOT to do with scikit-learn.
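For the first of those, a minimal sketch of the new SequentialFeatureSelector, using the built-in iris data rather than any dataset from the article:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
sfs = SequentialFeatureSelector(
    KNeighborsClassifier(n_neighbors=3),
    n_features_to_select=2,   # keep the two most useful features
    direction="forward",      # forward selection; "backward" also works
)
sfs.fit(X, y)
print(sfs.get_support())      # boolean mask of the selected features
```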

Logistic Regression using scikit-learn in Python

This tutorial shows how to code logistic regression in Python in just a few lines using the scikit-learn library.
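Those few lines look roughly like this, assuming the built-in breast cancer data as a stand-in for the tutorial's dataset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(clf.score(X_test, y_test))   # mean accuracy on the held-out set
```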

Practical Machine Learning with Scikit-Learn

Churn prediction is a common task in predictive analytics. In this article, we will try to predict whether a customer will leave the credit card services of a bank. The dataset is available on Kaggle.

Introduction to Scikit-learn Pipelines

Scikit-learn's pipeline module makes preprocessing simple by chaining transformations together in a single pipe. We'll create a pipeline to transform features for a machine learning model.
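A minimal sketch of such a pipeline, with an imputer, a scaler and a classifier chosen purely for illustration (not necessarily the transformations used in the article):

```python
from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipe = Pipeline(steps=[
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])

X, y = load_iris(return_X_y=True)
pipe.fit(X, y)            # runs every step in order, then fits the model
print(pipe.score(X, y))   # predict() reuses the fitted transformers
```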

K-Nearest Neighbors Algorithm

K-Nearest Neighbors (KNN) is a classification and regression algorithm which uses nearby points to generate predictions. This post will serve as a high-level overview of what’s happening under the hood of KNN when performing classification. I’ll start by going over the different distance metrics and move into code examples with KNN from Scikit-Learn.
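As a taste of those code examples, here is a minimal sketch comparing a few distance metrics with KNeighborsClassifier; the wine data stands in for whatever dataset the post uses:

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for metric in ("euclidean", "manhattan", "minkowski"):
    knn = KNeighborsClassifier(n_neighbors=5, metric=metric)
    knn.fit(X_train, y_train)
    print(metric, knn.score(X_test, y_test))   # test accuracy per metric
```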

5 Steps to Build a KNN Classifier with Python and Scikit-learn

You can use these 5 steps to build your own KNN classifier in Python with the Scikit-learn library, as sketched below: Import Libraries & Get the Data; Standardization; Train-Test Split; Build the Model; Evaluate the Model.
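Put together, the five steps look roughly like this; the breast cancer data is an assumption standing in for the article's dataset, and in practice you may prefer to fit the scaler on the training split only:

```python
# 1. Import libraries & get the data
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)

# 2. Standardization (following the article's step order)
X = StandardScaler().fit_transform(X)

# 3. Train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# 4. Build the model
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# 5. Evaluate the model
print(classification_report(y_test, knn.predict(X_test)))
```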

Applied Data Analysis in Python Machine Learning and Data Science | Scikit-Learn

In applied data analysis with Python, we will investigate the use of scikit-learn for machine learning to discover things about whatever data may come across your desk.

DON’T underestimate these Python dunder methods!

There are tons of great articles, books and videos about object-oriented programming with Python, many of which deal with _dunder_ (also called _magic_) methods.
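As a quick refresher, a minimal sketch of a few common dunder methods on a made-up Basket class:

```python
class Basket:
    def __init__(self, items=None):
        self.items = list(items or [])

    def __len__(self):              # len(basket)
        return len(self.items)

    def __contains__(self, item):   # "apple" in basket
        return item in self.items

    def __add__(self, other):       # basket1 + basket2
        return Basket(self.items + other.items)

    def __repr__(self):             # readable debug output
        return f"Basket({self.items!r})"

print(Basket(["apple"]) + Basket(["pear"]))   # Basket(['apple', 'pear'])
```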

Split Your Dataset with Scikit-learn's train_test_split()

Learn why it's important to split your dataset in supervised machine learning and how to do that with train_test_split() from scikit-learn.
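In its simplest form the call looks like this; the iris data and the 25% test size are illustrative choices, and stratify=y keeps the class proportions the same in both splits:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)
print(X_train.shape, X_test.shape)   # (112, 4) (38, 4)
```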

Top 15 Machine Learning Frameworks for AI & ML Experts

Top 15 Machine Learning Frameworks for AI & ML Experts: Amazon Machine Learning, Apache SINGA, TensorFlow, Scikit-Learn, MLlib Spark, Spark ML, Caffe, H2O, Torch, Keras, mlpack, Azure ML Studio, Google Cloud ML Engine, Theano, Veles

Pipelines & Custom Transformers in Scikit-learn with Python Code

Implement custom transformers and pipelines in Scikit-learn using Python, and understand the basics and workings of scikit-learn pipelines from the ground up so that you can build your own. Topics covered: why another tutorial on pipelines; creating a custom transformer from scratch to include in the pipeline; modifying and parameterizing transformers; custom target transformation via TransformedTargetRegressor; chaining everything together in a single Pipeline.
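A minimal sketch of the custom-transformer idea in that spirit; ColumnDropper is a hypothetical example transformer, not the one the author builds:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression

class ColumnDropper(BaseEstimator, TransformerMixin):
    """Drop the columns given by index before the next pipeline step."""
    def __init__(self, columns=()):
        self.columns = columns

    def fit(self, X, y=None):
        return self    # stateless: nothing to learn

    def transform(self, X):
        return np.delete(np.asarray(X), self.columns, axis=1)

pipe = Pipeline([
    ("drop", ColumnDropper(columns=[0])),
    ("model", LinearRegression()),
])

rng = np.random.default_rng(0)
X = rng.random((20, 3))
y = 2 * X[:, 1] + X[:, 2]
pipe.fit(X, y)
print(pipe.predict(X[:3]))
```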

Pipelines & Custom Transformers in Scikit-learn

Machine learning academic curriculums tend to focus almost exclusively on the models. Getting the data into the right form is what the industry calls preprocessing. Pipelines integrate the preprocessing steps and the fitting or predicting into a single operation. Apart from helping to make the model production-ready, they add a great deal of reproducibility to the experimental phase.

Train a regression model using a decision tree

In this article, we focus on decision trees for a regression task; the corresponding Scikit-learn class is DecisionTreeRegressor. We will start by discussing how to train, visualize and make predictions with decision trees for regression, and we will also discuss how to regularize decision trees by tuning their hyperparameters.
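A minimal sketch of that workflow, assuming the built-in diabetes data and a couple of illustrative regularization hyperparameters:

```python
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeRegressor(
    max_depth=3,           # limit tree depth to curb overfitting
    min_samples_leaf=10,   # require at least 10 samples per leaf
    random_state=0,
).fit(X_train, y_train)
print(tree.score(X_test, y_test))   # R^2 on the held-out set
```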

Principal Component Analysis (PCA) with Scikit-learn

I’ll explain the dimensionality of a dataset, what dimensionality reduction means, the main approaches to and reasons for dimensionality reduction, and what PCA means. Then, I will go deeper into PCA by implementing the algorithm with the Scikit-learn machine learning library.
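A minimal sketch of the scikit-learn part, reducing the iris data (an illustrative stand-in) to two principal components:

```python
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)   # PCA is sensitive to scale

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)
print(X_2d.shape)                       # (150, 2)
print(pca.explained_variance_ratio_)    # variance explained per component
```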

How to Create Dummy Datasets

How to create dummy datasets in Python for testing and machine learning problems.
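A minimal sketch using scikit-learn's dataset generators, which is one common way to do this (the sizes and noise level are arbitrary):

```python
from sklearn.datasets import make_classification, make_regression

X_clf, y_clf = make_classification(
    n_samples=500, n_features=10, n_informative=4, n_classes=2, random_state=0
)
X_reg, y_reg = make_regression(
    n_samples=500, n_features=10, noise=5.0, random_state=0
)
print(X_clf.shape, y_clf.shape)   # (500, 10) (500,)
print(X_reg.shape, y_reg.shape)   # (500, 10) (500,)
```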

Alternative Python libraries for Data Science

A few helpful libraries aim to simplify the data science process for beginners. In this article, I will cover five such libraries, which could speed up traditional machine learning and thereby lower the entry barrier.

Creating a class-based TF-IDF with Scikit-Learn

Extracting informative words per class. This article will mostly go into the applications of c-TF-IDF, but some background on the model will also be given.
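A rough, hedged sketch of the core idea: join all documents of a class into one "class document" before applying TF-IDF, so the top-weighted terms describe the class as a whole. This uses a plain TfidfVectorizer as an approximation, not the exact c-TF-IDF formulation from the article, and the tiny corpus is made up.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs_per_class = {
    "sports": ["the team won the match", "a great game of football"],
    "tech":   ["the new gpu is fast", "python makes machine learning easy"],
}
class_docs = [" ".join(docs) for docs in docs_per_class.values()]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(class_docs).toarray()
terms = vectorizer.get_feature_names_out()   # use get_feature_names() on < 1.0

for label, row in zip(docs_per_class, tfidf):
    top = row.argsort()[::-1][:3]
    print(label, [terms[i] for i in top])    # top 3 terms per class
```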

How to Implement Linear Regression in Scikit-learn

Today, we’ll explore this awesome library and show you how to implement its core functions. At the end, we’ll combine what we’ve learned so you can implement your own linear regression algorithm.
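A minimal sketch of the scikit-learn side, fitting LinearRegression on a synthetic dataset (the data-generation settings are arbitrary):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

reg = LinearRegression().fit(X_train, y_train)
print(reg.coef_, reg.intercept_)     # learned weights and bias
print(reg.score(X_test, y_test))     # R^2 on the held-out set
```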