<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Digital Transformation Archives - relataly.com</title>
	<atom:link href="https://www.relataly.com/tag/digital-transformation/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.relataly.com/tag/digital-transformation/</link>
	<description>The Business AI Blog</description>
	<lastBuildDate>Sat, 27 May 2023 11:36:03 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://www.relataly.com/wp-content/uploads/2023/04/cropped-AI-cat-Icon-White.png</url>
	<title>Digital Transformation Archives - relataly.com</title>
	<link>https://www.relataly.com/tag/digital-transformation/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">175977316</site>	<item>
		<title>How to Use Hierarchical Clustering For Customer Segmentation in Python</title>
		<link>https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/</link>
					<comments>https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/#respond</comments>
		
		<dc:creator><![CDATA[Florian Follonier]]></dc:creator>
		<pubDate>Thu, 22 Dec 2022 18:50:14 +0000</pubDate>
				<category><![CDATA[Agglomerative Clustering]]></category>
		<category><![CDATA[Algorithms]]></category>
		<category><![CDATA[Clustering]]></category>
		<category><![CDATA[Customer Segmentation]]></category>
		<category><![CDATA[Data Science]]></category>
		<category><![CDATA[Data Visualization]]></category>
		<category><![CDATA[Exploratory Data Analysis (EDA)]]></category>
		<category><![CDATA[Finance]]></category>
		<category><![CDATA[Insurance]]></category>
		<category><![CDATA[Kaggle Competitions]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Marketing Automation]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[Scikit-Learn]]></category>
		<category><![CDATA[Seaborn]]></category>
		<category><![CDATA[Telecommunications]]></category>
		<category><![CDATA[Use Cases]]></category>
		<category><![CDATA[AI in Finance]]></category>
		<category><![CDATA[AI in Insurance]]></category>
		<category><![CDATA[Beginner Tutorials]]></category>
		<category><![CDATA[Classic Machine Learning]]></category>
		<category><![CDATA[Digital Transformation]]></category>
		<guid isPermaLink="false">https://www.relataly.com/?p=11335</guid>

					<description><![CDATA[<p>Have you ever found yourself wondering how you can better understand your customer base and target your marketing efforts more effectively? One solution is to use hierarchical clustering, a method of grouping customers into clusters based on their characteristics and behaviors. By dividing your customers into distinct groups, you can tailor your marketing campaigns and ... <a title="How to Use Hierarchical Clustering For Customer Segmentation in Python" class="read-more" href="https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/" aria-label="Read more about How to Use Hierarchical Clustering For Customer Segmentation in Python">Read more</a></p>
<p>The post <a href="https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/">How to Use Hierarchical Clustering For Customer Segmentation in Python</a> appeared first on <a href="https://www.relataly.com">relataly.com</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>Have you ever found yourself wondering how you can better understand your customer base and target your marketing efforts more effectively? One solution is to use hierarchical clustering, a method of grouping customers into clusters based on their characteristics and behaviors. By dividing your customers into distinct groups, you can tailor your marketing campaigns and personalize your marketing efforts to meet the specific needs of each group. This is especially useful for businesses with large customer bases, as it allows them to address specific segments rather than trying to appeal to everyone at once. Hierarchical clustering can also reveal common patterns and trends among customers, which helps with targeting future marketing efforts and improving the overall customer experience. In this tutorial, we will use Python and the scikit-learn library to apply hierarchical (agglomerative) clustering to a customer dataset. </p>



<p>The rest of this tutorial proceeds in two parts. The first part will discuss hierarchical clustering and how we can use it to identify clusters in a set of customer data. The second part is a hands-on Python tutorial. We will explore customer health insurance data and apply an agglomerative clustering approach to group the customers into meaningful segments. Finally, we will use a tree-like diagram called a dendrogram, which is helpful for visualizing the structure of the data. The resulting segments could inform our marketing strategies and help us better understand our customers. So let&#8217;s get started!</p>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%">
<figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="896" height="510" data-attachment-id="12402" data-permalink="https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/isometric-view-of-people-customer-segmentation-using-machine-learning-python-tutorial-min/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2023/02/isometric-view-of-people-customer-segmentation-using-machine-learning-python-tutorial-min.png" data-orig-size="896,510" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="isometric-view-of-people-customer-segmentation-using-machine-learning-python-tutorial-min" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2023/02/isometric-view-of-people-customer-segmentation-using-machine-learning-python-tutorial-min.png" src="https://www.relataly.com/wp-content/uploads/2023/02/isometric-view-of-people-customer-segmentation-using-machine-learning-python-tutorial-min.png" alt="isometric view of people customer segmentation using machine learning python tutorial" class="wp-image-12402" srcset="https://www.relataly.com/wp-content/uploads/2023/02/isometric-view-of-people-customer-segmentation-using-machine-learning-python-tutorial-min.png 896w, https://www.relataly.com/wp-content/uploads/2023/02/isometric-view-of-people-customer-segmentation-using-machine-learning-python-tutorial-min.png 300w, 
https://www.relataly.com/wp-content/uploads/2023/02/isometric-view-of-people-customer-segmentation-using-machine-learning-python-tutorial-min.png 768w" sizes="(max-width: 896px) 100vw, 896px" /><figcaption class="wp-element-caption">Customer segmentation is a typical use case for clustering. Image generated with <a href="http://www.midjourney.com" target="_blank" rel="noreferrer noopener">Midjourney</a>. </figcaption></figure>
</div>
</div>



<h2 class="wp-block-heading">What is Hierarchical Clustering?</h2>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>So what is hierarchical clustering? Hierarchical clustering is a method of cluster analysis that aims to build a hierarchy of clusters. It creates a tree-like diagram called a dendrogram, which shows the relationships between clusters. There are two main types of hierarchical clustering: agglomerative and divisive. </p>



<ol class="wp-block-list">
<li>Agglomerative hierarchical clustering: This is a bottom-up approach in which each data point is treated as a single cluster at the outset. The algorithm iteratively merges the most similar pairs of clusters until all data points are in a single cluster.</li>



<li>Divisive hierarchical clustering: This is a top-down approach in which all data points are treated as a single cluster at the outset. The algorithm iteratively splits the cluster into smaller and smaller subclusters until each data point is in its own cluster.</li>
</ol>
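<p>To make the dendrogram idea concrete, here is a minimal sketch on a handful of made-up 2-D points using SciPy (the tutorial itself uses scikit-learn later on; the point coordinates here are purely illustrative):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script also runs headless
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# six made-up 2-D points forming two loose groups
X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
              [5.0, 5.0], [5.2, 4.9], [4.8, 5.1]])

# agglomerative (bottom-up) merging; 'ward' joins the pair of clusters
# whose merge least increases the total within-cluster variance
Z = linkage(X, method="ward")

# each row of Z records one merge: [cluster_i, cluster_j, distance, new size];
# the final row therefore contains all six points
print(Z)

# plot the merge hierarchy as a dendrogram
dendrogram(Z)
plt.title("Merge hierarchy (dendrogram) for the toy data")
plt.savefig("toy_dendrogram.png")
```

<p>Reading the linkage matrix from top to bottom replays the bottom-up construction of the hierarchy, which is exactly what the dendrogram visualizes.</p>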
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%"></div>
</div>



<h3 class="wp-block-heading">Agglomerative Clustering</h3>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>In this article, we will apply the agglomerative clustering approach, which is a bottom-up approach to clustering. The idea is to initially treat each data point in a dataset as its own cluster and then combine the points with other clusters as the algorithm progresses. The process of agglomerative clustering can be broken down into the following steps:</p>



<ol class="wp-block-list">
<li>Start with each data point in its own cluster.</li>



<li>Calculate the similarity between all pairs of clusters.</li>



<li>Merge the two most similar clusters.</li>



<li>Repeat steps 2 and 3 until all the data points are in a single cluster or until a predetermined number of clusters is reached.</li>
</ol>
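<p>The four steps above can be sketched as a deliberately naive, single-linkage implementation in plain NumPy (for illustration only; the scikit-learn AgglomerativeClustering class used later in this tutorial is the practical choice):</p>

```python
import numpy as np

def naive_agglomerative(X, n_clusters):
    """Naive single-linkage agglomerative clustering (illustration only)."""
    # step 1: every point starts in its own cluster
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > n_clusters:  # step 4: stop at the target count
        best = (None, None, np.inf)
        # step 2: compute the similarity between all pairs of clusters
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: distance between the two closest members
                d = min(np.linalg.norm(X[i] - X[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best[2]:
                    best = (a, b, d)
        # step 3: merge the two most similar clusters
        a, b, _ = best
        clusters[a].extend(clusters.pop(b))
    return clusters

# four made-up points in two well-separated groups
X = np.array([[1.0, 1.0], [1.1, 0.9], [5.0, 5.0], [5.1, 5.2]])
print(naive_agglomerative(X, 2))  # → [[0, 1], [2, 3]]
```

<p>Note the nested loops: recomputing all pairwise cluster distances on every merge is what makes the naive version so expensive, and why library implementations use cleverer update schemes.</p>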



<p>There are several ways to calculate the similarity between clusters, including using measures such as the Euclidean distance, cosine similarity, or the Jaccard index. The specific measure used can impact the results of the clustering algorithm.</p>
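<p>As a quick illustration, the three measures just mentioned can be computed with SciPy on made-up vectors:</p>

```python
from scipy.spatial.distance import euclidean, cosine, jaccard

u, v = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]

print(euclidean(u, v))   # straight-line distance: sqrt(1 + 4 + 9) ≈ 3.742
print(1 - cosine(u, v))  # cosine similarity: ~1.0 (the vectors point the same way)

# the Jaccard index compares boolean vectors by the overlap of their "on" positions
a, b = [True, True, False, False], [True, False, True, False]
print(jaccard(a, b))     # dissimilarity: 2 mismatches / 3 non-zero positions ≈ 0.667
```

<p>Because the measures disagree (u and v are far apart in Euclidean terms but identical in direction), the same dataset can yield quite different hierarchies depending on which one you pick.</p>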



<p>For details on how the clustering approach works, see the&nbsp;<a href="https://en.wikipedia.org/wiki/Hierarchical_clustering">Wikipedia page</a>.</p>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%">
<figure class="wp-block-image size-large"><img decoding="async" width="430" height="512" data-attachment-id="13027" data-permalink="https://www.relataly.com/mushrooms_and_fruits_pattern-min-2/" data-orig-file="https://www.relataly.com/wp-content/uploads/2023/03/mushrooms_and_fruits_pattern-min.png" data-orig-size="506,602" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="mushrooms_and_fruits_pattern-min" data-image-description="&lt;p&gt;Hierarchical clustering is an unsupervised way to classify things. &lt;/p&gt;
" data-image-caption="&lt;p&gt;Hierarchical clustering is an unsupervised way to classify things. &lt;/p&gt;
" data-large-file="https://www.relataly.com/wp-content/uploads/2023/03/mushrooms_and_fruits_pattern-min.png" src="https://www.relataly.com/wp-content/uploads/2023/03/mushrooms_and_fruits_pattern-min-430x512.png" alt="Hierarchical clustering is an unsupervised way to classify things. " class="wp-image-13027" srcset="https://www.relataly.com/wp-content/uploads/2023/03/mushrooms_and_fruits_pattern-min.png 430w, https://www.relataly.com/wp-content/uploads/2023/03/mushrooms_and_fruits_pattern-min.png 252w, https://www.relataly.com/wp-content/uploads/2023/03/mushrooms_and_fruits_pattern-min.png 506w" sizes="(max-width: 430px) 100vw, 430px" /><figcaption class="wp-element-caption">Hierarchical clustering is an unsupervised technique to classify things based on patterns in their data. Image created with <a href="http://www.midjourney.com" target="_blank" rel="noreferrer noopener">Midjourney</a>.</figcaption></figure>
</div>
</div>



<h3 class="wp-block-heading">Hierarchical Clustering vs. K-means</h3>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>In a previous article, we discussed the popular <a href="https://www.relataly.com/simple-cluster-analysis-with-k-means-with-python/5070/" target="_blank" rel="noreferrer noopener">k-means clustering approach</a>. So how do k-means and hierarchical clustering differ? Both are clustering algorithms that group similar data points together, but there are several key differences between the two approaches:</p>



<ol class="wp-block-list">
<li><strong>The number of clusters:</strong> In k-means, the number of clusters must be specified in advance, whereas in hierarchical clustering, the number of clusters is not specified. Instead, hierarchical clustering creates a hierarchy of clusters, starting with each data point as its own cluster and then merging the most similar clusters until all data points are in a single cluster.</li>



<li><strong>Cluster shape:</strong> K-means tends to produce compact, roughly spherical clusters, while hierarchical clustering can recover clusters of almost any shape, depending on the linkage criterion used. This means that k-means is better suited for data that separates into distinct, convex clusters, while hierarchical clustering is more flexible and can handle more complex cluster shapes.</li>



<li><strong>Distance measure:</strong> K-means is tied to the (squared) Euclidean distance, while hierarchical clustering can work with a variety of distance measures, such as cosine similarity or the Jaccard index. Note that both methods compare feature values directly, so standardizing the features matters for both; hierarchical clustering is simply more flexible in how similarity is defined.</li>



<li><strong>Computational complexity:</strong> K-means is generally faster than hierarchical clustering, especially for large datasets. Each k-means iteration only assigns points to the nearest of the k centroids, so its cost grows roughly linearly with the number of data points, while standard agglomerative clustering needs the distances between all pairs of points, which grows quadratically.</li>



<li><strong>Visualization: </strong>Hierarchical clustering produces a tree-like diagram called a &#8220;dendrogram.&#8221; The dendrogram shows the relationships between clusters. This can be useful for visualizing the structure of the data and understanding how clusters are related.</li>
</ol>
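<p>To illustrate the first difference, here is a quick sketch that runs both algorithms on the same made-up points (the full workflow on real customer data follows below):</p>

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

# two obvious made-up groups
X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
              [8.0, 8.0], [8.2, 7.9], [7.8, 8.1]])

# k-means needs the cluster count up front and iterates on centroids
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# agglomerative clustering builds the full merge hierarchy,
# then cuts it at the requested number of clusters
ac = AgglomerativeClustering(n_clusters=2).fit(X)

print(km.labels_)
print(ac.labels_)  # same grouping, though the label ids may be swapped
```

<p>On well-separated, convex groups like these the two methods agree; their behavior diverges on elongated or nested cluster shapes, where the linkage-based approach has the edge.</p>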



<p>Next, let&#8217;s look at how we can implement a hierarchical clustering model in Python. </p>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%">
<p></p>
</div>
</div>



<h2 class="wp-block-heading">Customer Segmentation using Hierarchical Clustering in Python</h2>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>In this comprehensive guide, we explore the application of hierarchical clustering for effective customer segmentation using a customer dataset. This data-driven segmentation method enables businesses to identify distinct customer clusters based on various factors, including demographics, behaviors, and preferences.</p>



<p>Customer segmentation is a strategic approach that splits a customer base into smaller, more manageable groups with similar characteristics. It aims to better understand the diverse needs and wants of different customer segments to enhance marketing strategies and product development.</p>



<p>Applying customer segmentation through hierarchical clustering allows businesses to personalize their marketing messages, design targeted campaigns, and tailor products to meet the unique needs of each segment. This proactive approach can stimulate increased customer loyalty and sales.</p>



<p>We begin by loading the customer data and selecting the relevant features we want to use for clustering. We then standardize the data using the StandardScaler from scikit-learn. Next, we apply hierarchical clustering using the AgglomerativeClustering method, specifying the number of clusters we want to create. Finally, we add the predictions to the original data as a new column and view the resulting segments by calculating the mean of each feature for each segment.</p>
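<p>Condensed into one snippet, the workflow just described looks roughly like this (a sketch: the small dataframe below stands in for the Kaggle CSV loaded in Step #1, and its values are made up for illustration):</p>

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

# stand-in for pd.read_csv("data/customer/customer_health_insurance.csv");
# these rows are made up for illustration
customer_df = pd.DataFrame({
    "age":     [19, 18, 28, 33, 60, 62, 45, 50],
    "bmi":     [27.9, 33.8, 33.0, 22.7, 25.8, 26.3, 30.1, 27.4],
    "charges": [16884.9, 1725.6, 4449.5, 21984.5, 28923.1, 27808.7, 8569.9, 10602.4],
})

# select and standardize the clustering features
features = ["age", "bmi", "charges"]
X = StandardScaler().fit_transform(customer_df[features])

# cluster and attach the predictions as a new column
model = AgglomerativeClustering(n_clusters=2)
customer_df["segment"] = model.fit_predict(X)

# profile each segment by its feature means
print(customer_df.groupby("segment")[features].mean())
```

<p>The final groupby is the segmentation payoff: each row of the profile table is a candidate customer segment, described by its average age, BMI, and charges.</p>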



<p>The full code is available in the GitHub repository linked below.</p>



<div class="wp-block-kadence-advancedbtn kb-buttons-wrap kb-btns_bada6f-73"><a class="kb-button kt-button button kb-btn_43f94b-af kt-btn-size-standard kt-btn-width-type-full kb-btn-global-inherit  kt-btn-has-text-true kt-btn-has-svg-true  wp-block-button__link wp-block-kadence-singlebtn" href="https://github.com/flo7up/relataly-public-python-tutorials/blob/master/03%20Clustering/043%20Customer%20Segmentation%20using%20Hierarchical%20Clustering%20with%20Python.ipynb" target="_blank" rel="noreferrer noopener"><span class="kb-svg-icon-wrap kb-svg-icon-fe_eye kt-btn-icon-side-left"><svg viewBox="0 0 24 24"  fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"  aria-hidden="true"><path d="M1 12s4-8 11-8 11 8 11 8-4 8-11 8-11-8-11-8z"/><circle cx="12" cy="12" r="3"/></svg></span><span class="kt-btn-inner-text">View on GitHub </span></a>

<a class="kb-button kt-button button kb-btn_17702b-41 kt-btn-size-standard kt-btn-width-type-full kb-btn-global-inherit  kt-btn-has-text-true kt-btn-has-svg-true  wp-block-button__link wp-block-kadence-singlebtn" href="https://github.com/flo7up/relataly-public-python-API-tutorials" target="_blank" rel="noreferrer noopener"><span class="kb-svg-icon-wrap kb-svg-icon-fa_github kt-btn-icon-side-left"><svg viewBox="0 0 496 512"  fill="currentColor" xmlns="http://www.w3.org/2000/svg"  aria-hidden="true"><path d="M165.9 397.4c0 2-2.3 3.6-5.2 3.6-3.3.3-5.6-1.3-5.6-3.6 0-2 2.3-3.6 5.2-3.6 3-.3 5.6 1.3 5.6 3.6zm-31.1-4.5c-.7 2 1.3 4.3 4.3 4.9 2.6 1 5.6 0 6.2-2s-1.3-4.3-4.3-5.2c-2.6-.7-5.5.3-6.2 2.3zm44.2-1.7c-2.9.7-4.9 2.6-4.6 4.9.3 2 2.9 3.3 5.9 2.6 2.9-.7 4.9-2.6 4.6-4.6-.3-1.9-3-3.2-5.9-2.9zM244.8 8C106.1 8 0 113.3 0 252c0 110.9 69.8 205.8 169.5 239.2 12.8 2.3 17.3-5.6 17.3-12.1 0-6.2-.3-40.4-.3-61.4 0 0-70 15-84.7-29.8 0 0-11.4-29.1-27.8-36.6 0 0-22.9-15.7 1.6-15.4 0 0 24.9 2 38.6 25.8 21.9 38.6 58.6 27.5 72.9 20.9 2.3-16 8.8-27.1 16-33.7-55.9-6.2-112.3-14.3-112.3-110.5 0-27.5 7.6-41.3 23.6-58.9-2.6-6.5-11.1-33.3 2.6-67.9 20.9-6.5 69 27 69 27 20-5.6 41.5-8.5 62.8-8.5s42.8 2.9 62.8 8.5c0 0 48.1-33.6 69-27 13.7 34.7 5.2 61.4 2.6 67.9 16 17.7 25.8 31.5 25.8 58.9 0 96.5-58.9 104.2-114.8 110.5 9.2 7.9 17 22.9 17 46.4 0 33.7-.3 75.4-.3 83.6 0 6.5 4.6 14.4 17.3 12.1C428.2 457.8 496 362.9 496 252 496 113.3 383.5 8 244.8 8zM97.2 352.9c-1.3 1-1 3.3.7 5.2 1.6 1.6 3.9 2.3 5.2 1 1.3-1 1-3.3-.7-5.2-1.6-1.6-3.9-2.3-5.2-1zm-10.8-8.1c-.7 1.3.3 2.9 2.3 3.9 1.6 1 3.6.7 4.3-.7.7-1.3-.3-2.9-2.3-3.9-2-.6-3.6-.3-4.3.7zm32.4 35.6c-1.6 1.3-1 4.3 1.3 6.2 2.3 2.3 5.2 2.6 6.5 1 1.3-1.3.7-4.3-1.3-6.2-2.2-2.3-5.2-2.6-6.5-1zm-11.4-14.7c-1.6 1-1.6 3.6 0 5.9 1.6 2.3 4.3 3.3 5.6 2.3 1.6-1.3 1.6-3.9 0-6.2-1.4-2.3-4-3.3-5.6-2z"/></svg></span><span class="kt-btn-inner-text">Relataly GitHub Repo </span></a></div>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%">
<figure class="wp-block-image size-full"><img decoding="async" width="512" height="513" data-attachment-id="12366" data-permalink="https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/the_future_of_the_healthcare_using_blockchain-min/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2022/12/the_future_of_the_healthcare_using_blockchain-min.png" data-orig-size="512,513" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="the_future_of_the_healthcare_using_blockchain-min" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2022/12/the_future_of_the_healthcare_using_blockchain-min.png" src="https://www.relataly.com/wp-content/uploads/2022/12/the_future_of_the_healthcare_using_blockchain-min.png" alt="In this machine learning tutorial, we will run a hierarchical clustering algorithm on health data." class="wp-image-12366" srcset="https://www.relataly.com/wp-content/uploads/2022/12/the_future_of_the_healthcare_using_blockchain-min.png 512w, https://www.relataly.com/wp-content/uploads/2022/12/the_future_of_the_healthcare_using_blockchain-min.png 300w, https://www.relataly.com/wp-content/uploads/2022/12/the_future_of_the_healthcare_using_blockchain-min.png 140w" sizes="(max-width: 512px) 100vw, 512px" /><figcaption class="wp-element-caption">The future of healthcare will see a tight collaboration between humans and AI. Image generated using&nbsp;Midjourney</figcaption></figure>
</div>
</div>



<h3 class="wp-block-heading">About the Customer Health Insurance Dataset</h3>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>In this tutorial, we will work with a public US health insurance customer dataset from kaggle.com. Download the <a href="https://www.kaggle.com/datasets/teertha/ushealthinsurancedataset" target="_blank" rel="noreferrer noopener">CSV file from Kaggle</a> and copy it into the following path, relative to the folder that contains your Python notebook: data/customer/</p>



<p>The dataset is relatively simple and contains 1338 rows of insured customers. It includes the insurance charges, as well as demographic and personal information such as Age, Sex, BMI, Number of Children, Smoker, and Region. The dataset does not have any undefined or missing values.</p>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%"></div>
</div>



<h3 class="wp-block-heading" id="h-prerequisites">Prerequisites</h3>



<p>Before we start the coding part, ensure that you have set up your Python 3 environment and the required packages. If you don’t have an environment, follow&nbsp;this tutorial&nbsp;to set up the&nbsp;<a href="https://www.anaconda.com/products/individual" target="_blank" rel="noreferrer noopener">Anaconda environment</a>. Also, make sure you install all required packages. In this tutorial, we will be working with the following standard packages:&nbsp;</p>



<ul class="wp-block-list">
<li>pandas</li>



<li>NumPy</li>



<li>matplotlib</li>



<li>scikit-learn</li>
</ul>



<p>You can install packages using console commands:</p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:false,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;null&quot;,&quot;mime&quot;:&quot;text/plain&quot;,&quot;theme&quot;:&quot;3024-day&quot;,&quot;lineNumbers&quot;:false,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:false,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Plain Text&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;text&quot;}">pip install &lt;package name&gt; 
conda install &lt;package name&gt; (if you are using the anaconda packet manager)</pre></div>



<h3 class="wp-block-heading">Step #1 Load the Data</h3>



<p>To begin, we need to load the required packages and the data we want to cluster. We will load the data by reading the CSV file via the pandas library. </p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># import necessary libraries
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import LabelEncoder
from pandas.api.types import is_string_dtype
import pandas as pd
import math
import seaborn as sns

# load customer data
customer_df = pd.read_csv(&quot;data/customer/customer_health_insurance.csv&quot;)
customer_df.head(3)</pre></div>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:false,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;null&quot;,&quot;mime&quot;:&quot;text/plain&quot;,&quot;theme&quot;:&quot;3024-day&quot;,&quot;lineNumbers&quot;:false,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:false,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Plain Text&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;text&quot;}">	age	sex		bmi		children	smoker	region		charges
0	19	female	27.90	0			yes		southwest	16884.9240
1	18	male	33.77	1			no		southeast	1725.5523
2	28	male	33.00	3			no		southeast	4449.4620</pre></div>



<h3 class="wp-block-heading">Step #2 Explore the Data</h3>



<p>Next, it is a good idea to explore the data and get a sense of its structure and content. This can be done using a variety of methods, such as examining the shape of the dataframe, checking for missing values, and plotting some basic statistics. For example, the following plots will explore the relationships between some of the variables. We won&#8217;t go into too much detail here.</p>
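<p>The shape, missing-value, and summary-statistics checks just mentioned take only a few pandas calls (sketched here on a tiny made-up stand-in for customer_df):</p>

```python
import pandas as pd

# tiny made-up stand-in for the customer_df loaded in Step #1
customer_df = pd.DataFrame({
    "age": [19, 18, 28],
    "sex": ["female", "male", "male"],
    "bmi": [27.90, 33.77, 33.00],
    "children": [0, 1, 3],
    "smoker": ["yes", "no", "no"],
    "region": ["southwest", "southeast", "southeast"],
    "charges": [16884.9240, 1725.5523, 4449.4620],
})

print(customer_df.shape)           # (rows, columns)
print(customer_df.isnull().sum())  # missing values per column: all zero here
print(customer_df.describe())      # basic statistics of the numeric columns
```

<p>Running the same three calls on the full dataset confirms its 1338 rows, seven columns, and absence of missing values.</p>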



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}">def make_kdeplot(df, column_name, target_name):
    fig, ax = plt.subplots(figsize=(10, 6))
    sns.kdeplot(data=df, hue=column_name, x=target_name, ax=ax, linewidth=2)
    ax.tick_params(axis=&quot;x&quot;, rotation=90, labelsize=10, length=0)
    ax.set_title(column_name)
    ax.set_xlim(0, df[target_name].quantile(0.99))
    plt.show()

# kde plot of charges by smoker status
make_kdeplot(customer_df, 'smoker', 'charges')</pre></div>



<figure class="wp-block-image size-full is-resized"><img decoding="async" data-attachment-id="11363" data-permalink="https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/image-17-3/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2022/12/image-17.png" data-orig-size="833,571" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-17" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2022/12/image-17.png" src="https://www.relataly.com/wp-content/uploads/2022/12/image-17.png" alt="" class="wp-image-11363" width="567" height="389" srcset="https://www.relataly.com/wp-content/uploads/2022/12/image-17.png 833w, https://www.relataly.com/wp-content/uploads/2022/12/image-17.png 300w, https://www.relataly.com/wp-content/uploads/2022/12/image-17.png 768w" sizes="(max-width: 567px) 100vw, 567px" /></figure>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># kde plot of charges by sex
make_kdeplot(customer_df, 'sex', 'charges')</pre></div>



<figure class="wp-block-image size-full is-resized"><img decoding="async" data-attachment-id="11364" data-permalink="https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/image-44-2/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2022/12/image-44.png" data-orig-size="846,571" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-44" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2022/12/image-44.png" src="https://www.relataly.com/wp-content/uploads/2022/12/image-44.png" alt="" class="wp-image-11364" width="572" height="386" srcset="https://www.relataly.com/wp-content/uploads/2022/12/image-44.png 846w, https://www.relataly.com/wp-content/uploads/2022/12/image-44.png 300w, https://www.relataly.com/wp-content/uploads/2022/12/image-44.png 768w" sizes="(max-width: 572px) 100vw, 572px" /></figure>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}">sns.lmplot(x=&quot;charges&quot;, y=&quot;age&quot;, hue=&quot;smoker&quot;, data=customer_df, aspect=2)
plt.show()</pre></div>



<figure class="wp-block-image size-large is-resized"><img decoding="async" data-attachment-id="11365" data-permalink="https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/image-45/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2022/12/image-45.png" data-orig-size="1067,489" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-45" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2022/12/image-45.png" src="https://www.relataly.com/wp-content/uploads/2022/12/image-45-1024x469.png" alt="" class="wp-image-11365" width="700" height="321" srcset="https://www.relataly.com/wp-content/uploads/2022/12/image-45.png 1024w, https://www.relataly.com/wp-content/uploads/2022/12/image-45.png 300w, https://www.relataly.com/wp-content/uploads/2022/12/image-45.png 768w, https://www.relataly.com/wp-content/uploads/2022/12/image-45.png 1067w" sizes="(max-width: 700px) 100vw, 700px" /></figure>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}">def make_boxplot(customer_df, x,y,h):
    fig, ax = plt.subplots(figsize=(10,4))
    box = sns.boxplot(x=x, y=y, hue=h, data=customer_df)
    box.set_xticklabels(box.get_xticklabels())
    fig.subplots_adjust(bottom=0.2)
    plt.tight_layout()

make_boxplot(customer_df, &quot;smoker&quot;, &quot;charges&quot;, &quot;sex&quot;)</pre></div>



<figure class="wp-block-image size-full is-resized"><img decoding="async" data-attachment-id="11366" data-permalink="https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/image-46/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2022/12/image-46.png" data-orig-size="989,390" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-46" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2022/12/image-46.png" src="https://www.relataly.com/wp-content/uploads/2022/12/image-46.png" alt="" class="wp-image-11366" width="675" height="266" srcset="https://www.relataly.com/wp-content/uploads/2022/12/image-46.png 989w, https://www.relataly.com/wp-content/uploads/2022/12/image-46.png 300w, https://www.relataly.com/wp-content/uploads/2022/12/image-46.png 768w" sizes="(max-width: 675px) 100vw, 675px" /></figure>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}">make_boxplot(customer_df, &quot;region&quot;, &quot;charges&quot;, &quot;sex&quot;)</pre></div>



<figure class="wp-block-image size-full is-resized"><img decoding="async" data-attachment-id="11367" data-permalink="https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/image-47-4/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2022/12/image-47.png" data-orig-size="989,390" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-47" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2022/12/image-47.png" src="https://www.relataly.com/wp-content/uploads/2022/12/image-47.png" alt="" class="wp-image-11367" width="693" height="273" srcset="https://www.relataly.com/wp-content/uploads/2022/12/image-47.png 989w, https://www.relataly.com/wp-content/uploads/2022/12/image-47.png 300w, https://www.relataly.com/wp-content/uploads/2022/12/image-47.png 768w" sizes="(max-width: 693px) 100vw, 693px" /></figure>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}">make_boxplot(customer_df, &quot;children&quot;, &quot;bmi&quot;, &quot;sex&quot;)</pre></div>



<figure class="wp-block-image size-full is-resized"><img decoding="async" data-attachment-id="11368" data-permalink="https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/image-48-4/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2022/12/image-48.png" data-orig-size="989,390" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-48" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2022/12/image-48.png" src="https://www.relataly.com/wp-content/uploads/2022/12/image-48.png" alt="" class="wp-image-11368" width="705" height="278" srcset="https://www.relataly.com/wp-content/uploads/2022/12/image-48.png 989w, https://www.relataly.com/wp-content/uploads/2022/12/image-48.png 300w, https://www.relataly.com/wp-content/uploads/2022/12/image-48.png 768w" sizes="(max-width: 705px) 100vw, 705px" /></figure>



<p>Next, let&#8217;s prepare the data for model training. </p>



<h3 class="wp-block-heading" id="h-step-3-prepare-the-data">Step #3 Prepare the Data</h3>



<p>Before we can train a model on the data, we must prepare it for modeling. This typically involves selecting the relevant features, handling missing values, and scaling the data. However, our dataset is simple and already has good data quality, so we can limit the preparation to encoding the categorical labels and scaling the numeric data. </p>



<p>To encode the categorical values, we will use the LabelEncoder class from the scikit-learn library.</p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># encode categorical features
label_encoder = LabelEncoder()

for col_name in customer_df.columns:
    if (is_string_dtype(customer_df[col_name])):
        customer_df[col_name] = label_encoder.fit_transform(customer_df[col_name])
customer_df.head(3)</pre></div>



<p>Next, we will scale the numeric variables. Hierarchical clustering is a distance-based method: when we use a distance measure such as the Euclidean distance, features with larger numeric ranges dominate the distance calculation. Standardizing the data ensures that all features contribute with comparable weight, so a variable like charges (in the thousands) does not drown out a variable like age.</p>
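<p>To see why this matters, consider a small numeric sketch with made-up values (not the tutorial&#8217;s dataset): two customers who differ by 1000 in charges appear far more distant than two who differ by 20 years of age, until the features are standardized.</p>

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# three hypothetical customers: [age, charges]
raw = np.array([[40.0, 9000.0],
                [41.0, 10000.0],   # differs from the first mainly in charges
                [60.0, 9000.0]])   # differs from the first mainly in age

# without scaling, the charges column dominates the Euclidean distance
d_unscaled_01 = np.linalg.norm(raw[0] - raw[1])  # ~1000
d_unscaled_02 = np.linalg.norm(raw[0] - raw[2])  # 20

# after standardization, both features contribute on comparable scales
scaled = StandardScaler().fit_transform(raw)
d_scaled_01 = np.linalg.norm(scaled[0] - scaled[1])
d_scaled_02 = np.linalg.norm(scaled[0] - scaled[2])

print(d_unscaled_01, d_unscaled_02)
print(d_scaled_01, d_scaled_02)  # now of similar magnitude
```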



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># select features
X = customer_df # we will select all features

# standardize the data
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
X_scaled  # X_scaled is a NumPy array, so .head() is not available</pre></div>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:false,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;null&quot;,&quot;mime&quot;:&quot;text/plain&quot;,&quot;theme&quot;:&quot;3024-day&quot;,&quot;lineNumbers&quot;:false,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:false,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Plain Text&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;text&quot;}">array([[-1.43876426, -1.0105187 , -0.45332   , ...,  1.34390459,
         0.2985838 ,  1.97058663],
       [-1.50996545,  0.98959079,  0.5096211 , ...,  0.43849455,
        -0.95368917, -0.5074631 ],
       [-0.79795355,  0.98959079,  0.38330685, ...,  0.43849455,
        -0.72867467, -0.5074631 ],
       ...,
       [-1.50996545, -1.0105187 ,  1.0148781 , ...,  0.43849455,
        -0.96159623, -0.5074631 ],
       [-1.29636188, -1.0105187 , -0.79781341, ...,  1.34390459,
        -0.93036151, -0.5074631 ],
       [ 1.55168573, -1.0105187 , -0.26138796, ..., -0.46691549,
         1.31105347,  1.97058663]])</pre></div>



<h3 class="wp-block-heading">Step #4 Train the Hierarchical Clustering Algorithm</h3>



<p>To train a hierarchical clustering model using scikit-learn, we can use the AgglomerativeClustering class; Ward clustering is available through its linkage parameter. The main parameters are:</p>



<ul class="wp-block-list">
<li><strong>n_clusters: </strong>The number of clusters to form. It defaults to 2 and must be set to <code>None</code> when <code>distance_threshold</code> is used.</li>



<li><strong>affinity: </strong>The distance measure used to calculate the similarity between pairs of samples, such as the Euclidean or cosine distance. Note that &#8220;ward&#8221; linkage only supports Euclidean distances, and that newer scikit-learn releases (1.2 and later) rename this parameter to <code>metric</code>.</li>



<li><strong>linkage: </strong>The method used to calculate the distance between clusters. This can be one of &#8220;ward,&#8221; &#8220;complete,&#8221; &#8220;average,&#8221; or &#8220;single.&#8221;</li>



<li><strong>distance_threshold:</strong> The linkage distance above which clusters will no longer be merged. When this parameter is set, <code>n_clusters</code> must be <code>None</code>.</li>
</ul>
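<p>The two ways of controlling the number of clusters can be sketched as follows. This is a minimal example on synthetic blob data (not the tutorial&#8217;s dataset): either fix <code>n_clusters</code> directly, or set it to <code>None</code> and let <code>distance_threshold</code> decide.</p>

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# two well-separated synthetic blobs of 20 points each
rng = np.random.RandomState(0)
X_demo = np.vstack([rng.normal(0, 0.3, size=(20, 2)),
                    rng.normal(5, 0.3, size=(20, 2))])

# option 1: fix the number of clusters ("ward" linkage requires Euclidean distances)
fixed_k = AgglomerativeClustering(n_clusters=2, linkage="ward").fit(X_demo)

# option 2: let a distance threshold decide; n_clusters must then be None
by_threshold = AgglomerativeClustering(n_clusters=None, distance_threshold=10.0,
                                       linkage="ward").fit(X_demo)

print(len(set(fixed_k.labels_)))   # 2
print(by_threshold.n_clusters_)    # number of clusters found below the threshold
```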



<p>To train the model, we specify the desired parameters and fit the model to the data using the fit_predict method. This method will fit the model to the data and generate predictions in one step.</p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># apply hierarchical clustering 
model = AgglomerativeClustering(affinity='euclidean', compute_distances=True)  # store merge distances for the dendrogram
predicted_segments = model.fit_predict(X_scaled)</pre></div>



<p>Now we have a trained clustering model and the predicted segments for our data.</p>
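<p>A quick sanity check at this point is to count how many customers fall into each segment. The snippet below is a self-contained sketch using a made-up label array in place of <code>predicted_segments</code>:</p>

```python
import numpy as np

# stand-in for the predicted_segments array returned by fit_predict
predicted_demo = np.array([0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0])

# count the samples assigned to each segment label
unique_labels, counts = np.unique(predicted_demo, return_counts=True)
for label, count in zip(unique_labels, counts):
    print(f"segment {label}: {count} customers")
```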



<h3 class="wp-block-heading">Step #5 Visualize the Results</h3>



<p>After the model is trained, we can visualize the results to get a better understanding of the clusters that were formed. There is a wide range of plots and tools to visualize clusters. In this tutorial, we will use a scatterplot and a dendrogram. </p>



<h4 class="wp-block-heading">5.1 Scatterplot</h4>



<p>For this, we can use the lmplot function from Seaborn. The lmplot creates a 2D scatterplot with an optional overlay of a linear regression model. It visualizes the relationship between two variables and fits a regression line per group that can highlight differences. In the following, we use it to show how the relationship between charges and age differs between our two cluster segments. </p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># add predictions to data as a new column
customer_df['segment'] = predicted_segments

# create a scatter plot of the first two features, colored by segment
sns.lmplot(x=&quot;charges&quot;, y=&quot;age&quot;, hue=&quot;segment&quot;, data=customer_df, aspect=2)
plt.show()</pre></div>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="470" data-attachment-id="11370" data-permalink="https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/image-49-2/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2022/12/image-49.png" data-orig-size="1065,489" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-49" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2022/12/image-49.png" src="https://www.relataly.com/wp-content/uploads/2022/12/image-49-1024x470.png" alt="" class="wp-image-11370" srcset="https://www.relataly.com/wp-content/uploads/2022/12/image-49.png 1024w, https://www.relataly.com/wp-content/uploads/2022/12/image-49.png 300w, https://www.relataly.com/wp-content/uploads/2022/12/image-49.png 768w, https://www.relataly.com/wp-content/uploads/2022/12/image-49.png 1065w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>We can see that our model has determined two clusters in our data. The clusters seem to correspond well with the smoker category, which indicates that this attribute is decisive in forming relevant groups.</p>
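<p>One way to verify this correspondence numerically is a cross-tabulation of the segment labels against the smoker flag. The sketch below uses a small made-up frame in place of <code>customer_df</code>; in the tutorial, the same call on <code>customer_df</code> would show how strongly the segments align with smoking status.</p>

```python
import pandas as pd

# illustrative stand-in for customer_df with an encoded smoker flag and predicted segment
demo_df = pd.DataFrame({
    "smoker":  [0, 0, 1, 1, 0, 1, 0, 1],
    "segment": [0, 0, 1, 1, 0, 1, 0, 0],
})

# rows: smoker status, columns: cluster segment
print(pd.crosstab(demo_df["smoker"], demo_df["segment"]))
```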



<h4 class="wp-block-heading" id="h-5-2-dendrogram">5.2 Dendrogram</h4>



<p>The hierarchical clustering approach lets us visualize the relationships between groups in our dataset with a dendrogram. A dendrogram is a tree-like diagram of a hierarchical structure. It is commonly used in biology to show the relationships between species or taxonomic groups, but it works for the hierarchy of any dataset. Each branch represents an object or group, the branch lengths encode the distances (dissimilarities) between them, and the arrangement is hierarchical: closely related items sit close together, while distantly related ones are placed farther apart.</p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># Visualize data similarity in a dendrogram
def plot_dendrogram(model, **kwargs):
    # create the counts of samples under each node
    counts = np.zeros(model.children_.shape[0])
    n_samples = len(model.labels_)
    for i, merge in enumerate(model.children_):
        current_count = 0
        for child_idx in merge:
            if child_idx &lt; n_samples:
                current_count += 1  # leaf node
            else:
                current_count += counts[child_idx - n_samples]
        counts[i] = current_count

    linkage_matrix = np.column_stack(
        [model.children_, model.distances_, counts]
    ).astype(float)

    # Plot the corresponding dendrogram
    dendrogram(linkage_matrix, orientation='right',**kwargs)


plt.title(&quot;Hierarchical Clustering Dendrogram&quot;)
# plot the top three levels of the dendrogram
plot_dendrogram(model, truncate_mode=&quot;level&quot;, p=4)  # 'model' is the fitted clustering model from step #4
plt.xlabel(&quot;Euclidean Distance&quot;)
plt.ylabel(&quot;Number of points in node (or index of point if no parenthesis).&quot;)
plt.show()</pre></div>



<p>Source: This code block is based on an example <a href="https://scikit-learn.org/stable/auto_examples/cluster/plot_agglomerative_dendrogram.html" target="_blank" rel="noreferrer noopener">from the scikit-learn documentation</a></p>



<figure class="wp-block-image size-full"><img decoding="async" width="575" height="453" data-attachment-id="11396" data-permalink="https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/image-53-4/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2022/12/image-53.png" data-orig-size="575,453" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-53" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2022/12/image-53.png" src="https://www.relataly.com/wp-content/uploads/2022/12/image-53.png" alt="" class="wp-image-11396" srcset="https://www.relataly.com/wp-content/uploads/2022/12/image-53.png 575w, https://www.relataly.com/wp-content/uploads/2022/12/image-53.png 300w" sizes="(max-width: 575px) 100vw, 575px" /></figure>



<h2 class="wp-block-heading">Summary</h2>



<p>In conclusion, hierarchical clustering is a powerful tool for customer segmentation that can help businesses better understand their customer base and target their marketing efforts more effectively. By grouping customers into clusters based on their characteristics and behaviors, companies can create targeted campaigns and personalize their marketing efforts to better meet the needs of each group. Using Python and the scikit-learn library, we were able to apply an agglomerative clustering approach to a dataset of customer data and identify two distinct segments. We can then use these segments to inform our marketing strategies and get a better understanding of our customers.</p>



<p>By the way, customer segmentation is an area where real-world data can be prone to bias and unfairness. If you&#8217;re concerned about this, check out our latest article on <a href="https://www.relataly.com/building-fair-machine-machine-learning-models-with-fairlearn/12804/" target="_blank" rel="noreferrer noopener">addressing fairness in machine learning with fairlearn</a>.</p>



<p>I hope this article was useful. If you have any feedback, please write your thoughts in the comments. </p>



<h2 class="wp-block-heading">Sources and Further Reading</h2>



<p>Articles</p>



<ul class="wp-block-list">
<li><a href="https://scikit-learn.org/stable/auto_examples/cluster/plot_agglomerative_dendrogram.html" target="_blank" rel="noreferrer noopener">https://scikit-learn.org/stable/auto_examples/cluster/plot_agglomerative_dendrogram.html</a></li>



<li>Images generated with OpenAI Dall-E and Midjourney.</li>
</ul>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">
<div class="wp-block-group"><div class="wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained">
<h4 class="wp-block-heading"><strong>Books on Clustering</strong></h4>



<ul class="wp-block-list">
<li><a href="https://amzn.to/3Gb5kfj" target="_blank" rel="noreferrer noopener">&#8220;Data Clustering: Algorithms and Applications&#8221; by Charu C. Aggarwal</a>: This book covers a wide range of clustering algorithms, including hierarchical clustering, and discusses their applications in various fields.</li>



<li><a href="https://amzn.to/3WmhGXB" target="_blank" rel="noreferrer noopener">&#8220;Data Mining: Practical Machine Learning Tools and Techniques&#8221; by Ian H. Witten and Eibe Frank</a>: This book is a comprehensive introduction to data mining and machine learning, including a chapter on hierarchical clustering.</li>
</ul>



<div style="display: inline-block;">
<iframe sandbox="allow-popups allow-scripts allow-modals allow-forms allow-same-origin" style="width:120px;height:240px;" marginwidth="0" marginheight="0" scrolling="no" frameborder="0" src="//ws-eu.amazon-adsystem.com/widgets/q?ServiceVersion=20070822&amp;OneJS=1&amp;Operation=GetAdHtml&amp;MarketPlace=DE&amp;source=ss&amp;ref=as_ss_li_til&amp;ad_type=product_link&amp;tracking_id=flo7up-21&amp;language=de_DE&amp;marketplace=amazon&amp;region=DE&amp;placement=0128042915&amp;asins=0128042915&amp;linkId=1e9fe160a76f7255e3eea8e0119ca74f&amp;show_border=true&amp;link_opens_in_new_window=true"></iframe>

<iframe sandbox="allow-popups allow-scripts allow-modals allow-forms allow-same-origin" style="width:120px;height:240px;" marginwidth="0" marginheight="0" scrolling="no" frameborder="0" src="//ws-eu.amazon-adsystem.com/widgets/q?ServiceVersion=20070822&amp;OneJS=1&amp;Operation=GetAdHtml&amp;MarketPlace=DE&amp;source=ss&amp;ref=as_ss_li_til&amp;ad_type=product_link&amp;tracking_id=flo7up-21&amp;language=de_DE&amp;marketplace=amazon&amp;region=DE&amp;placement=B00EYROAQU&amp;asins=B00EYROAQU&amp;linkId=ba1fcb8a59417e729afe33f6eceb2a9f&amp;show_border=true&amp;link_opens_in_new_window=true"></iframe>
</div>
</div></div>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">
<div class="wp-block-group"><div class="wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained">
<h4 class="wp-block-heading"><strong>Books on Machine Learning</strong></h4>



<ul class="wp-block-list">
<li><a href="https://amzn.to/3S9Nfkl" target="_blank" rel="noreferrer noopener">Aurélien Géron (2019) Hands-On Machine Learning</a></li>



<li><a href="https://amzn.to/3EKidwE" target="_blank" rel="noreferrer noopener">David Forsyth (2019) Applied Machine Learning Springer</a></li>
</ul>



<div style="display: inline-block;">

  <iframe sandbox="allow-popups allow-scripts allow-modals allow-forms allow-same-origin" style="width:120px;height:240px;" marginwidth="0" marginheight="0" scrolling="no" frameborder="0" src="//ws-eu.amazon-adsystem.com/widgets/q?ServiceVersion=20070822&amp;OneJS=1&amp;Operation=GetAdHtml&amp;MarketPlace=DE&amp;source=ss&amp;ref=as_ss_li_til&amp;ad_type=product_link&amp;tracking_id=flo7up-21&amp;language=de_DE&amp;marketplace=amazon&amp;region=DE&amp;placement=3030181162&amp;asins=3030181162&amp;linkId=669e46025028259138fbb5ccec12dfbe&amp;show_border=true&amp;link_opens_in_new_window=true"></iframe>

<iframe sandbox="allow-popups allow-scripts allow-modals allow-forms allow-same-origin" style="width:120px;height:240px;" marginwidth="0" marginheight="0" scrolling="no" frameborder="0" src="//ws-eu.amazon-adsystem.com/widgets/q?ServiceVersion=20070822&amp;OneJS=1&amp;Operation=GetAdHtml&amp;MarketPlace=DE&amp;source=ss&amp;ref=as_ss_li_til&amp;ad_type=product_link&amp;tracking_id=flo7up-21&amp;language=de_DE&amp;marketplace=amazon&amp;region=DE&amp;placement=1492032646&amp;asins=1492032646&amp;linkId=2214804dd039e7103577abd08722abac&amp;show_border=true&amp;link_opens_in_new_window=true"></iframe>
</div>
</div></div>
</div>
</div>



<p class="has-contrast-2-color has-base-3-background-color has-text-color has-background"><em>The links above to Amazon are affiliate links. By buying through these links, you support the Relataly.com blog and help to cover the hosting costs. Using the links does not affect the price.</em></p>



<p><strong>Relataly articles on clustering and machine learning</strong></p>



<ul class="wp-block-list">
<li><a href="https://www.relataly.com/simple-cluster-analysis-with-k-means-with-python/5070/" target="_blank" rel="noreferrer noopener">Simple Clustering using K-means in Python</a>: This article gives an overview of cluster analysis with k-means.</li>



<li><a href="https://www.relataly.com/crypto-market-cluster-analysis-using-affinity-propagation-python/8114/" target="_blank" rel="noreferrer noopener">Clustering crypto markets using affinity propagation in Python</a>: This article applies cluster analysis to crypto markets and creates a market map for various cryptocurrencies.</li>



<li><a href="https://www.relataly.com/building-fair-machine-machine-learning-models-with-fairlearn/12804/" target="_blank" rel="noreferrer noopener">Addressing fairness in machine learning with the fairlearn library</a></li>
</ul>



<p></p>
<p>The post <a href="https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/">How to Use Hierarchical Clustering For Customer Segmentation in Python</a> appeared first on <a href="https://www.relataly.com">relataly.com</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">11335</post-id>	</item>
		<item>
		<title>Unlocking the Potential of Machine Learning in the Insurance Industry: Five Use Cases with High Business Value</title>
		<link>https://www.relataly.com/top-5-machine-learning-use-cases-in-insurance/10489/</link>
					<comments>https://www.relataly.com/top-5-machine-learning-use-cases-in-insurance/10489/#respond</comments>
		
		<dc:creator><![CDATA[Florian Follonier]]></dc:creator>
		<pubDate>Sat, 10 Dec 2022 17:18:32 +0000</pubDate>
				<category><![CDATA[Insurance]]></category>
		<category><![CDATA[Use Cases]]></category>
		<category><![CDATA[AI in Business]]></category>
		<category><![CDATA[AI in Insurance]]></category>
		<category><![CDATA[Classic Machine Learning]]></category>
		<category><![CDATA[Digital Transformation]]></category>
		<guid isPermaLink="false">https://www.relataly.com/?p=10489</guid>

					<description><![CDATA[<p>The insurance industry has long harnessed technology&#8217;s transformative power. From online policy applications to modernized claims processing systems, the tech revolution in insurance has been in motion for years. However, machine learning promises to be one of the most influential and disruptive advancements in the sector. Machine learning empowers insurers to analyze vast data volumes, ... <a title="Unlocking the Potential of Machine Learning in the Insurance Industry: Five Use Cases with High Business Value" class="read-more" href="https://www.relataly.com/top-5-machine-learning-use-cases-in-insurance/10489/" aria-label="Read more about Unlocking the Potential of Machine Learning in the Insurance Industry: Five Use Cases with High Business Value">Read more</a></p>
<p>The post <a href="https://www.relataly.com/top-5-machine-learning-use-cases-in-insurance/10489/">Unlocking the Potential of Machine Learning in the Insurance Industry: Five Use Cases with High Business Value</a> appeared first on <a href="https://www.relataly.com">relataly.com</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>The <a href="https://www.relataly.com/category/industries/insurance/" target="_blank" rel="noreferrer noopener">insurance industry</a> has long harnessed technology&#8217;s transformative power. From online policy applications to modernized claims processing systems, the tech revolution in insurance has been in motion for years. However, machine learning promises to be one of the most influential and disruptive advancements in the sector.</p>



<p>Machine learning empowers insurers to analyze vast data volumes, unveiling hidden patterns, delivering key insights, and enabling better-informed business decisions. It holds the potential to redefine operations, from customer segmentation and fraud detection to underwriting and claims processing, thereby enhancing customer service.</p>



<p>This article delves into five specific instances where machine learning is driving significant business value in insurance. It&#8217;s part of a new blog series exploring machine learning&#8217;s role across various sectors, beginning with insurance. By examining its real-world applications, insurance professionals can understand how to leverage this technology to streamline operations, manage risks more effectively, and enrich customer experiences.</p>



<p>Gear up to uncover how machine learning can help your insurance business stay competitive in the digital era!</p>



<p>Also: <a href="https://www.relataly.com/eliminating-friction-how-openais-gpt-streamlines-online-experiences-and-reduces-the-need-for-google-searches/13171/" target="_blank" rel="noreferrer noopener">Eliminating Friction: How OpenAI’s GPT Streamlines Online Experiences and Reduces the Need for Traditional Search</a> </p>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:100%">
<h2 class="wp-block-heading" id="h-five-machine-learning-use-cases-in-insurance-with-high-business-value">Five Machine Learning Use Cases in Insurance with High Business Value</h2>



<p>There are many potential machine learning use cases in insurance. Which use cases matter most will depend on each insurer&#8217;s needs and goals. However, some common use cases include the following:</p>



<ol class="wp-block-list">
<li>Underwriting: Machine learning can improve underwriting by automating the process and reducing the risk of human error.</li>



<li>Fraud Detection: Machine learning can help detect fraudulent claims by identifying patterns and anomalies in data.</li>



<li>Customer Segmentation: Machine learning can help insurers segment their customer base and develop targeted marketing strategies.</li>



<li>Claim Processing: Machine learning can automate and accelerate the claims process, making it more efficient and accurate.</li>



<li>Risk Modeling: Machine learning can help insurers model and predict risks more accurately, allowing them to make better-informed decisions.</li>
</ol>
</div>
</div>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%"></div>
</div>



<div style="height:12px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">1. Underwriting</h3>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>Underwriting plays a crucial role in the insurance industry, involving the assessment of risk and determination of suitable premiums for coverage. The objective is to ensure profitability by accurately evaluating and pricing risk. Underwriters evaluate applicant information, such as age, health, and financial history, to predict the likelihood and cost of potential claims. This information, alongside other factors, guides the calculation of policy premiums.</p>



<p>Machine learning enables insurers to automate and enhance the underwriting process, making it more precise and efficient. By training models on historical data, patterns and trends associated with varying levels of risk can be identified. For instance, algorithms can recognize that individuals with specific medical conditions or occupations are more likely to make claims on their policies.</p>



<p>With advancements in Natural Language Processing (NLP), algorithms become proficient in understanding textual information. They can discern policy terms that correlate with higher or lower levels of risk. Once trained, these algorithms can evaluate new applicants and predict their risk levels, aiding insurers in making informed decisions regarding acceptance, rejection, and appropriate premium rates.</p>



<p>Machine learning empowers insurers to streamline underwriting procedures, ensuring accuracy, consistency, and improved risk assessment for sustainable business practices.</p>
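<p>As a minimal sketch of how such a risk model might look (all applicant features, coefficients, and data below are synthetic and purely illustrative), a logistic regression trained on historical policies can score new applicants:</p>

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 1000

# Synthetic applicant features: age, BMI, smoker flag (0/1)
X = np.column_stack([
    rng.integers(18, 75, n),
    rng.normal(26, 4, n),
    rng.integers(0, 2, n),
])
# Toy ground truth: claim probability rises with age, BMI, and smoking
logits = 0.04 * (X[:, 0] - 45) + 0.08 * (X[:, 1] - 26) + 1.2 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Train a simple risk model on the "historical" policies
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Score a new applicant: a 62-year-old smoker with BMI 31
risk = model.predict_proba([[62, 31.0, 1]])[0, 1]
print(f"Predicted claim probability: {risk:.2f}")
```

<p>In practice, insurers would train on far richer feature sets and would need to validate such models carefully for fairness and regulatory compliance.</p>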
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%">
<figure class="wp-block-image size-full"><img decoding="async" width="491" height="504" data-attachment-id="12870" data-permalink="https://www.relataly.com/top-5-machine-learning-use-cases-in-insurance/10489/ui-app-design-machine-learning-underwriting-in-surance-midjourney-min/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2023/03/ui-app-design-machine-learning-underwriting-in-surance-midjourney-min.png" data-orig-size="491,504" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="ui-app-design-machine-learning-underwriting-in-surance-midjourney-min" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2023/03/ui-app-design-machine-learning-underwriting-in-surance-midjourney-min.png" src="https://www.relataly.com/wp-content/uploads/2023/03/ui-app-design-machine-learning-underwriting-in-surance-midjourney-min.png" alt="" class="wp-image-12870" srcset="https://www.relataly.com/wp-content/uploads/2023/03/ui-app-design-machine-learning-underwriting-in-surance-midjourney-min.png 491w, https://www.relataly.com/wp-content/uploads/2023/03/ui-app-design-machine-learning-underwriting-in-surance-midjourney-min.png 292w" sizes="(max-width: 491px) 100vw, 491px" /><figcaption class="wp-element-caption">Underwriting processes can benefit from recent improvements in natural language processing models à la GPT-3. Image created with <a href="http://www.midjourney.com" target="_blank" rel="noreferrer noopener">midjourney</a>.</figcaption></figure>
</div>
</div>



<div style="height:9px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">2. Fraud Detection</h3>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>Insurance fraud is a serious problem that costs the insurance industry billions of dollars annually. It refers to the act of intentionally providing false or misleading information to an insurer, and it can take many forms. For example, customers may provide false information on a policy application, exaggerate the value or extent of a claim, or stage an accident or theft to make a claim. Service providers such as hospitals may also engage in fraud by exaggerating the costs of treating a patient.</p>



<p>Fraud is a significant issue for insurers, as it can lead to higher insurance premiums for all policyholders. It is estimated that insurance fraud costs the industry approximately $80 billion each year. To combat fraud, insurers are turning to machine learning to analyze large datasets of claims data and identify patterns and anomalies that may indicate fraudulent activity.</p>



<p>Machine learning algorithms can analyze vast amounts of data to identify suspicious behavior that may be indicative of fraud. For example, machine learning can be used to detect patterns of behavior that are inconsistent with normal claim activity, such as a sudden increase in claims activity from a particular location or a sudden change in the type of claims being submitted. By identifying these patterns, insurers can more effectively detect and prevent fraudulent activity.</p>
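<p>A toy sketch of this anomaly-detection idea, using an Isolation Forest on synthetic claims data (the features and numbers below are invented for illustration):</p>

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Typical claims: moderate amounts, filed months into the policy term
normal_claims = np.column_stack([
    rng.normal(2_000, 500, 500),   # claim amount
    rng.normal(180, 60, 500),      # days since policy start
])
# A few suspicious claims: very large, filed right after policy start
suspicious = np.array([[25_000.0, 3], [18_000, 5], [30_000, 1]])
claims = np.vstack([normal_claims, suspicious])

# Fit an Isolation Forest; -1 marks claims isolated as anomalous
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(claims)

flagged = claims[labels == -1]
print(f"Flagged {len(flagged)} of {len(claims)} claims for manual review")
```

<p>Flagged claims would then be routed to human investigators rather than being denied automatically.</p>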



<p>Another way in which machine learning is helping insurers combat fraud is by identifying relationships between claimants. For example, machine learning algorithms can analyze social media data to identify connections between individuals who have submitted claims. This can help insurers detect cases of fraud in which multiple individuals collude to submit false claims. </p>



<p>Also: <a href="https://www.relataly.com/multivariate-outlier-detection-using-isolation-forests-in-python-detecting-credit-card-fraud/4233/" target="_blank" rel="noreferrer noopener">Multivariate Anomaly Detection on Time-Series Data in Python: Using Isolation Forests to Detect Credit Card Fraud</a> </p>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%">
<figure class="wp-block-image size-large"><img decoding="async" width="512" height="295" data-attachment-id="12861" data-permalink="https://www.relataly.com/top-5-machine-learning-use-cases-in-insurance/10489/fraud-detection-machine-learning-in-insurance-min/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2023/03/fraud-detection-machine-learning-in-insurance-min.png" data-orig-size="891,513" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="fraud-detection-machine-learning-in-insurance-min" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2023/03/fraud-detection-machine-learning-in-insurance-min.png" src="https://www.relataly.com/wp-content/uploads/2023/03/fraud-detection-machine-learning-in-insurance-min-512x295.png" alt="" class="wp-image-12861" srcset="https://www.relataly.com/wp-content/uploads/2023/03/fraud-detection-machine-learning-in-insurance-min.png 512w, https://www.relataly.com/wp-content/uploads/2023/03/fraud-detection-machine-learning-in-insurance-min.png 300w, https://www.relataly.com/wp-content/uploads/2023/03/fraud-detection-machine-learning-in-insurance-min.png 768w, https://www.relataly.com/wp-content/uploads/2023/03/fraud-detection-machine-learning-in-insurance-min.png 891w" sizes="(max-width: 512px) 100vw, 512px" /><figcaption class="wp-element-caption">Image generated with <a href="http://www.midjourney.com/" target="_blank" rel="noreferrer noopener">Midjourney ai</a></figcaption></figure>
</div>
</div>



<div style="height:9px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">3. Customer Segmentation</h3>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>Customer segmentation is the process of dividing a customer base into smaller groups with similar characteristics. Businesses do this to tailor their products or services to the needs of each group and to target marketing efforts more effectively. For example, a clothing retailer might segment its customers by age, gender, income level, and location, and then offer different promotions or discounts to each segment to maximize sales. However, collecting and using personal data to segment customers raises concerns about privacy and data protection. Insurers must ensure that they collect and use customer data responsibly and ethically, and that they comply with all relevant regulations and laws.</p>



<p>Also: <a href="https://www.relataly.com/predicting-the-customer-churn-of-a-telecommunications-provider/2378/" target="_blank" rel="noreferrer noopener">Customer Churn prediction using Python</a> </p>



<p>Machine learning can help address the challenges of customer segmentation in several ways. First, it can identify more relevant and accurate segments by analyzing large amounts of data and surfacing patterns and correlations that may not be obvious to human analysts. This leads to more precise segmentation and a better understanding of customer behavior. Second, machine learning algorithms can automate the segmentation process itself: by feeding data such as demographic information, purchasing history, and online behavior into a clustering algorithm, insurers can quickly and accurately identify the most relevant segments for their business. This approach also helps insurers personalize their interactions with customers within each segment. </p>
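<p>A compact sketch of automated segmentation with hierarchical (agglomerative) clustering, using synthetic customer data (the group structure and features below are invented for illustration):</p>

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Three synthetic customer groups: columns are (age, annual premium spend)
young_budget = rng.normal([25, 600], [3, 80], (100, 2))
mid_family = rng.normal([42, 1_800], [4, 200], (100, 2))
senior_high = rng.normal([63, 3_200], [5, 300], (100, 2))
customers = np.vstack([young_budget, mid_family, senior_high])

# Scale features so age and spend contribute comparably, then cluster
X = StandardScaler().fit_transform(customers)
segments = AgglomerativeClustering(n_clusters=3).fit_predict(X)

# Each segment can now receive tailored products or campaigns
for s in range(3):
    grp = customers[segments == s]
    print(f"Segment {s}: n={len(grp)}, mean age={grp[:, 0].mean():.0f}")
```

<p>Real customer data is higher-dimensional and messier, but the workflow — scale, cluster, profile each segment — stays the same.</p>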



<p>A recent relataly article describes how <a href="https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/" target="_blank" rel="noreferrer noopener">to implement automated customer segmentation in Python</a>.</p>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%">
<figure class="wp-block-image size-full"><img decoding="async" width="510" height="509" data-attachment-id="12860" data-permalink="https://www.relataly.com/top-5-machine-learning-use-cases-in-insurance/10489/customer-segmentation-insurance-machine-learning-python-tutorial-relataly-midjourney-min/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2023/03/customer-segmentation-insurance-machine-learning-python-tutorial-relataly-midjourney-min.png" data-orig-size="510,509" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="customer-segmentation-insurance-machine-learning-python-tutorial-relataly-midjourney-min" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2023/03/customer-segmentation-insurance-machine-learning-python-tutorial-relataly-midjourney-min.png" src="https://www.relataly.com/wp-content/uploads/2023/03/customer-segmentation-insurance-machine-learning-python-tutorial-relataly-midjourney-min.png" alt="" class="wp-image-12860" srcset="https://www.relataly.com/wp-content/uploads/2023/03/customer-segmentation-insurance-machine-learning-python-tutorial-relataly-midjourney-min.png 510w, https://www.relataly.com/wp-content/uploads/2023/03/customer-segmentation-insurance-machine-learning-python-tutorial-relataly-midjourney-min.png 300w, https://www.relataly.com/wp-content/uploads/2023/03/customer-segmentation-insurance-machine-learning-python-tutorial-relataly-midjourney-min.png 140w" sizes="(max-width: 510px) 100vw, 510px" /><figcaption class="wp-element-caption">Image generated with <a 
href="http://www.midjourney.com/" target="_blank" rel="noreferrer noopener">Midjourney ai</a></figcaption></figure>
</div>
</div>



<div style="height:9px" aria-hidden="true" class="wp-block-spacer"></div>



<h3 class="wp-block-heading">4. Claim Processing</h3>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>Claim processing is the evaluation, investigation, and resolution of insurance claims. It typically involves verifying that a claim is valid and covered under the terms of the policy, determining the amount of the payout, and issuing payment to the insured party. Claim processing can be done manually or with the aid of specialized software. The goal is to pay valid claims quickly and accurately while detecting and denying fraudulent ones.</p>

<p>Machine learning can help insurers identify patterns and trends in claims data, which can be used to detect potential fraud or other anomalies. It can also be used to process claims automatically, reducing the need for manual intervention and speeding up the process.</p>



<p>In addition, machine learning can help insurers make more accurate decisions about the payout amount for a claim. By analyzing data such as the type of claim, the severity of the damage, and the policyholder&#8217;s history, machine learning algorithms can predict the expected payout and ensure that it is fair and accurate. However, there are potential challenges in using machine learning for claim processing. One challenge is ensuring that the algorithms are fair and unbiased, as biased algorithms can result in discrimination against certain groups or individuals. To address this, insurers must take proactive measures to ensure that their machine learning algorithms are fair and unbiased.</p>
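<p>A hypothetical sketch of payout estimation with a gradient-boosted regression model on synthetic claims data (all features and the toy payout rule below are invented for illustration):</p>

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000

# Synthetic claim features: severity score (1-10), vehicle age, prior claims
severity = rng.integers(1, 11, n)
vehicle_age = rng.integers(0, 20, n)
prior_claims = rng.integers(0, 5, n)
X = np.column_stack([severity, vehicle_age, prior_claims])

# Toy ground truth: payout grows with severity, shrinks with vehicle age
payout = 900 * severity - 40 * vehicle_age + rng.normal(0, 300, n)

# Hold out a test set to check the payout estimates generalize
X_train, X_test, y_train, y_test = train_test_split(X, payout, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

print(f"R^2 on held-out claims: {model.score(X_test, y_test):.2f}")
```

<p>Alongside accuracy, such a model would need to be audited for the fairness concerns described above before its estimates feed into actual payout decisions.</p>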
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%">
<figure class="wp-block-image size-full"><img decoding="async" width="502" height="510" data-attachment-id="12496" data-permalink="https://www.relataly.com/business-use-cases-for-openai-gpt-models-chatgpt-davinci/12200/robot_saying_simplemail_as_a_word-min/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2023/02/Robot_saying_simplemail_as_a_word-min.png" data-orig-size="502,510" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="claim processing with machine learning in the insurance industry relataly midjourney" data-image-description="&lt;p&gt;claim processing with machine learning in the insurance industry relataly midjourney&lt;/p&gt;
" data-image-caption="&lt;p&gt;claim processing with machine learning in the insurance industry relataly midjourney&lt;/p&gt;
" data-large-file="https://www.relataly.com/wp-content/uploads/2023/02/Robot_saying_simplemail_as_a_word-min.png" src="https://www.relataly.com/wp-content/uploads/2023/02/Robot_saying_simplemail_as_a_word-min.png" alt="claim processing with machine learning in the insurance industry relataly midjourney" class="wp-image-12496" srcset="https://www.relataly.com/wp-content/uploads/2023/02/Robot_saying_simplemail_as_a_word-min.png 502w, https://www.relataly.com/wp-content/uploads/2023/02/Robot_saying_simplemail_as_a_word-min.png 295w" sizes="(max-width: 502px) 100vw, 502px" /><figcaption class="wp-element-caption">Machine learning can significantly speed up claim processing. Image generated using <a href="https://openai.com/dall-e-2/" target="_blank" rel="noreferrer noopener">Midjourney</a>.</figcaption></figure>
</div>
</div>



<h3 class="wp-block-heading">5. Risk Modeling</h3>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>Insurers can use machine learning to develop more accurate and sophisticated models of their risks. For example, models can assess the risk associated with insuring a particular individual or property, and insurers can use them to make more informed decisions about the risks they are willing to take. One specific use case is crime prediction, which we have <a href="https://www.relataly.com/category/use-case/risk-assessment/" target="_blank" rel="noreferrer noopener">recently covered in a separate article</a>: insurers can estimate the likelihood that a person or property will become the victim of a specific crime and adjust their offerings accordingly.</p>



<p>Risk modeling can also use satellite data. This data can include information on weather patterns, topography, land use, and other factors that can affect the risk of natural disasters, such as floods and hurricanes.</p>



<p>Insurers can use satellite data to create detailed maps of areas at risk of natural disasters. These maps can include information on the type of terrain, the density of vegetation, and the location of buildings and infrastructure. This information can be used to create models that predict the likelihood of damage from natural disasters in a particular area.</p>



<p>Insurers can also use the data to create more accurate and detailed flood and wind hazard maps. These maps help insurers determine the risk of insuring a particular property and support more accurate policy pricing. In addition, by using satellite data to monitor changes in a given area, insurers can detect whether new construction or development affects the risk level of a particular property.</p>
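<p>A toy sketch of such a hazard model, trained on satellite-derived terrain features (all data, features, and thresholds below are synthetic and invented for illustration):</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 1500

# Synthetic satellite-derived features per land parcel
elevation = rng.uniform(0, 100, n)      # meters above river level
dist_river = rng.uniform(0, 5_000, n)   # meters to nearest river
ndvi = rng.uniform(0, 1, n)             # vegetation index
X = np.column_stack([elevation, dist_river, ndvi])

# Toy rule: low-lying parcels near the river flood more often
flood_prob = 1 / (1 + np.exp(0.1 * elevation + 0.002 * dist_river - 5))
y = (rng.random(n) < flood_prob).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Risk score for a low parcel 50 m from the river vs. a high one 3 km away
low, high = [[2, 50, 0.3]], [[80, 3_000, 0.6]]
print(f"Low parcel flood risk:  {model.predict_proba(low)[0, 1]:.2f}")
print(f"High parcel flood risk: {model.predict_proba(high)[0, 1]:.2f}")
```

<p>Real hazard models combine many more layers — rainfall history, soil type, building footprints — but the principle of turning geospatial features into a per-property risk score is the same.</p>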
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%">
<figure class="wp-block-image size-full"><img decoding="async" width="498" height="502" data-attachment-id="12660" data-permalink="https://www.relataly.com/volcano-erupting-in-water-machine-learning-python-tutorial-relataly-midjourney-min/" data-orig-file="https://www.relataly.com/wp-content/uploads/2023/03/volcano-erupting-in-water-machine-learning-python-tutorial-relataly-midjourney-min.png" data-orig-size="498,502" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="volcano erupting in water machine learning python tutorial relataly midjourney-min" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2023/03/volcano-erupting-in-water-machine-learning-python-tutorial-relataly-midjourney-min.png" src="https://www.relataly.com/wp-content/uploads/2023/03/volcano-erupting-in-water-machine-learning-python-tutorial-relataly-midjourney-min.png" alt="" class="wp-image-12660" srcset="https://www.relataly.com/wp-content/uploads/2023/03/volcano-erupting-in-water-machine-learning-python-tutorial-relataly-midjourney-min.png 498w, https://www.relataly.com/wp-content/uploads/2023/03/volcano-erupting-in-water-machine-learning-python-tutorial-relataly-midjourney-min.png 298w, https://www.relataly.com/wp-content/uploads/2023/03/volcano-erupting-in-water-machine-learning-python-tutorial-relataly-midjourney-min.png 140w" sizes="(max-width: 498px) 100vw, 498px" /><figcaption class="wp-element-caption">In combination with satellite data, machine learning allows a new level of risk modeling. 
Image generated using <a href="https://openai.com/dall-e-2/" target="_blank" rel="noreferrer noopener">Midjourney</a>.</figcaption></figure>
</div>
</div>



<div style="height:9px" aria-hidden="true" class="wp-block-spacer"></div>



<h2 class="wp-block-heading">Why don&#8217;t we see more Adoption?</h2>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>The insurance industry often lags behind in adopting machine learning technologies. Several insurers have initiated machine learning implementations, but many continue to grapple with the challenge. Insurance companies may encounter several stumbling blocks when deploying machine learning:</p>



<ul class="wp-block-list">
<li><strong>Data Accessibility</strong>: Machine learning algorithms rely on extensive data to learn and make precise predictions. However, insurance companies often struggle with insufficient data organization and IT infrastructures. Data is frequently scattered across disparate systems and formats, complicating the efficient training and use of machine learning algorithms.</li>



<li><strong>Regulatory Hurdles</strong>: Insurance is a heavily regulated sector, laden with rules about data collection, usage, and sharing. These stipulations can inhibit insurance companies&#8217; use of machine learning, as they may require customer consent or other specific protocols to use certain data types.</li>



<li><strong>Expertise Shortage</strong>: Machine learning is a rapidly evolving, complex field requiring specialized skills for successful implementation. Many insurers lack in-house machine learning expertise and might need to either recruit or upskill existing employees.</li>



<li><strong>Change Resistance</strong>: Like any emergent technology, machine learning can face resistance within insurance companies. Employees might question its benefits or fear potential job loss due to automation. Overcoming such resistance is a significant challenge for insurers keen on deploying machine learning.</li>
</ul>



<p>By understanding these hurdles, insurance companies can devise strategies to integrate machine learning effectively, enhancing their operational efficiency and decision-making processes.</p>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%"></div>
</div>



<div style="height:9px" aria-hidden="true" class="wp-block-spacer"></div>



<h2 class="wp-block-heading">Outlook</h2>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>The potential benefits of machine learning are manifold, and the insurance industry is becoming increasingly aware of its transformative power. In the coming years, we can expect to see even more widespread adoption of this technology.</p>



<p>One key factor driving this trend is the increasing accuracy and accessibility of machine learning algorithms. As technology continues to advance, insurers are finding it easier to implement these algorithms and make more informed decisions. With more insurance companies migrating to the cloud, the scalability and efficiency of machine learning solutions are also improving.</p>



<p>Another key factor is the growing availability of data. Insurers are now able to collect and store vast amounts of data, which can be used to train and refine machine learning algorithms. With more data, insurers can gain deeper insights into customer behavior and preferences, identify patterns and trends, and make more accurate risk assessments.</p>



<p>Finally, there is a growing recognition of the potential benefits of machine learning, both among insurance companies and regulators. As insurers continue to see the benefits of this technology, they are likely to drive further adoption and investment. At the same time, regulators are becoming increasingly aware of the potential for machine learning to improve efficiency and reduce costs in the insurance industry.</p>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%">
<figure class="wp-block-image size-full"><img decoding="async" width="510" height="510" data-attachment-id="12863" data-permalink="https://www.relataly.com/top-5-machine-learning-use-cases-in-insurance/10489/umbrella-machine-learning-in-insurance-python-tutorial-min-1/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2023/03/umbrella-machine-learning-in-insurance-python-tutorial-min-1.png" data-orig-size="510,510" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="umbrella-machine-learning-in-insurance-python-tutorial-min-1" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2023/03/umbrella-machine-learning-in-insurance-python-tutorial-min-1.png" src="https://www.relataly.com/wp-content/uploads/2023/03/umbrella-machine-learning-in-insurance-python-tutorial-min-1.png" alt="" class="wp-image-12863" srcset="https://www.relataly.com/wp-content/uploads/2023/03/umbrella-machine-learning-in-insurance-python-tutorial-min-1.png 510w, https://www.relataly.com/wp-content/uploads/2023/03/umbrella-machine-learning-in-insurance-python-tutorial-min-1.png 300w, https://www.relataly.com/wp-content/uploads/2023/03/umbrella-machine-learning-in-insurance-python-tutorial-min-1.png 140w" sizes="(max-width: 510px) 100vw, 510px" /><figcaption class="wp-element-caption">The insurance industry is likely to see more adoption of machine learning in the coming years. Image generated with <a href="http://www.midjourney.com/" target="_blank" rel="noreferrer noopener">Midjourney ai</a></figcaption></figure>
</div>
</div>



<h2 class="wp-block-heading">Sources and Further Reading</h2>



<ol class="wp-block-list">
<li><a href="https://amzn.to/3Pii0DV" target="_blank" rel="noreferrer noopener">C Hull (2021) Machine Learning in Business: An Introduction to the World of Data Science</a></li>



<li><a href="https://amzn.to/3Y9XxW2" target="_blank" rel="noreferrer noopener">Dixon et al.&nbsp;(2020) Machine Learning in Finance: From Theory to Practice</a></li>
</ol>



<p class="has-contrast-2-color has-base-3-background-color has-text-color has-background"><em>The links above to Amazon are affiliate links. By buying through these links, you support the Relataly.com blog and help to cover the hosting costs. Using the links does not affect the price.</em></p>
<p>The post <a href="https://www.relataly.com/top-5-machine-learning-use-cases-in-insurance/10489/">Unlocking the Potential of Machine Learning in the Insurance Industry: Five Use Cases with High Business Value</a> appeared first on <a href="https://www.relataly.com">relataly.com</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.relataly.com/top-5-machine-learning-use-cases-in-insurance/10489/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">10489</post-id>	</item>
		<item>
		<title>Customer Churn Prediction &#8211; Understanding Models with Feature Permutation Importance using Python</title>
		<link>https://www.relataly.com/predicting-the-customer-churn-of-a-telecommunications-provider/2378/</link>
					<comments>https://www.relataly.com/predicting-the-customer-churn-of-a-telecommunications-provider/2378/#respond</comments>
		
		<dc:creator><![CDATA[Florian Follonier]]></dc:creator>
		<pubDate>Sun, 02 Aug 2020 13:24:28 +0000</pubDate>
				<category><![CDATA[Churn Prediction]]></category>
		<category><![CDATA[Classification (two-class)]]></category>
		<category><![CDATA[Data Science]]></category>
		<category><![CDATA[Data Sources]]></category>
		<category><![CDATA[Feature Permutation Importance]]></category>
		<category><![CDATA[Hyperparameter Tuning]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[Random Decision Forests]]></category>
		<category><![CDATA[Retail]]></category>
		<category><![CDATA[Scikit-Learn]]></category>
		<category><![CDATA[Seaborn]]></category>
		<category><![CDATA[Use Cases]]></category>
		<category><![CDATA[AI in E-Commerce]]></category>
		<category><![CDATA[AI in Marketing]]></category>
		<category><![CDATA[Digital Transformation]]></category>
		<category><![CDATA[Intermediate Tutorials]]></category>
		<category><![CDATA[Model Interpretation]]></category>
		<category><![CDATA[Multivariate Models]]></category>
		<category><![CDATA[Permutation Feature Importance]]></category>
		<category><![CDATA[Supervised Learning]]></category>
		<category><![CDATA[Two-Label Classification]]></category>
		<guid isPermaLink="false">https://www.relataly.com/?p=2378</guid>

					<description><![CDATA[<p>Customer retention is a prime objective for service companies, and understanding the patterns that lead to customer churn can be the key to maintaining long-lasting client relationships. Businesses incur significant costs when customers discontinue their services, hence it&#8217;s vital to identify potential churn risks and take preemptive actions to retain these customers. Machine Learning models ... <a title="Customer Churn Prediction &#8211; Understanding Models with Feature Permutation Importance using Python" class="read-more" href="https://www.relataly.com/predicting-the-customer-churn-of-a-telecommunications-provider/2378/" aria-label="Read more about Customer Churn Prediction &#8211; Understanding Models with Feature Permutation Importance using Python">Read more</a></p>
<p>The post <a href="https://www.relataly.com/predicting-the-customer-churn-of-a-telecommunications-provider/2378/">Customer Churn Prediction &#8211; Understanding Models with Feature Permutation Importance using Python</a> appeared first on <a href="https://www.relataly.com">relataly.com</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>Customer retention is a prime objective for service companies, and understanding the patterns that lead to customer churn can be the key to maintaining long-lasting client relationships. Businesses incur significant costs when customers discontinue their services, so it is vital to identify potential churn risks and take preemptive action to retain these customers. Machine learning models can be instrumental in identifying these patterns and providing valuable insights into customer behavior.</p>



<p>An intriguing technique, Permutation Feature Importance, allows us to discern the significance of different features of our machine learning model, thereby shedding light on their influence on customer churn. This tutorial guides you through the intricacies of this technique and its implementation.</p>



<p>The structure of this tutorial is as follows:</p>



<ul class="wp-block-list">
<li>We begin by discussing the business problem of customer churn and its implications.</li>



<li>We introduce the concept of Permutation Feature Importance, a powerful tool to identify essential features in our machine learning model.</li>



<li>We transition into the hands-on coding segment, where we build a churn prediction model using Python.</li>



<li>Our model undergoes a classification process and hyperparameter tuning to select the most effective parameters.</li>



<li>Utilizing the trained model, we predict the churn probabilities for a test set of customers.</li>



<li>Finally, we create a feature ranking based on their impact on the model&#8217;s performance.</li>
</ul>



<p>By employing permutation feature importance, this tutorial offers a deep-dive into the correlation between input variables and model predictions, providing actionable insights for effective customer churn management.</p>



<p>Also: </p>



<ul class="wp-block-list">
<li><a href="https://www.relataly.com/building-fair-machine-machine-learning-models-with-fairlearn/12804/" target="_blank" rel="noreferrer noopener">Using Fairlearn to Build Fair Machine Learning Models with Python: Step-by-Step Towards More Responsible AI</a></li>



<li><a href="https://www.relataly.com/customer-segmentation-using-hierarchical-clustering-in-python/11335/" target="_blank" rel="noreferrer noopener">How to Use Hierarchical Clustering For Customer Segmentation in Python</a></li>
</ul>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%"><div class="wp-block-image">
<figure class="aligncenter size-large is-resized"><img decoding="async" data-attachment-id="2402" data-permalink="https://www.relataly.com/predicting-the-customer-churn-of-a-telecommunications-provider/2378/image-47/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2020/08/image.png" data-orig-size="448,173" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2020/08/image.png" src="https://www.relataly.com/wp-content/uploads/2020/08/image.png" alt="machine learning. It is particularly effective when combined with feature permutation importance" class="wp-image-2402" width="324" height="127"/><figcaption class="wp-element-caption">Customer churn prediction is a compelling use case for machine learning. It is particularly effective when combined with feature permutation importance.</figcaption></figure>
</div></div>
</div>



<div style="height:26px" aria-hidden="true" class="wp-block-spacer"></div>



<h2 class="wp-block-heading" id="h-what-is-churn-prediction">What is Churn Prediction?</h2>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>The cost of persuading a new customer to sign a contract is many times higher than the cost of retaining an existing one. According to industry experts, winning a new customer is about four times more expensive than keeping an existing one. Providers that can identify churn candidates and manage to retain them can therefore significantly reduce costs. </p>



<p>A crucial question is whether the provider succeeds in getting churn candidates to stay. Sometimes it may be enough to contact a churn candidate and inquire about customer satisfaction. In other cases, this will not suffice, and the provider needs to increase the service value, for example, by offering free services or a discount. However, such actions should be well thought out, as they can also backfire. For instance, if a customer hardly ever uses their contract, a call from the provider may even increase the desire to cancel it. Machine learning can help assess cases individually and identify the optimal retention action. </p>



<h2 class="wp-block-heading" id="h-about-permutation-feature-importance">About Permutation Feature Importance</h2>



<p>Feature importance is a helpful technique for understanding the contribution of input variables (features) to a predictive model. The results of this technique can be as valuable as the predictions themselves, as they help us understand the business context better. For example, let&#8217;s say we have trained a model that predicts which of our customers are likely to churn. Wouldn&#8217;t it be interesting to know why specific customers are more likely to churn than others? Permutation feature importance can help us answer this question by ranking the input variables of our model by their usefulness. The ranking can validate assumptions about the business context and point to relationships in the data that are worth investigating further.</p>



<p>Compared to neural networks, one of the most significant advantages of traditional prediction models, such as decision trees, is their interpretability. Neural networks are black boxes because it is hard to trace the relationship between inputs and model predictions. With traditional models, on the other hand, we can calculate the importance of the features and use it to interpret the model and optimize its performance, for example, by removing features that contribute little. We therefore start with a simple model first and move on to more complex models once we understand the data.</p>
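<p>The idea behind permutation feature importance can be sketched in a few lines: fit a model, then shuffle one feature column at a time and measure how much the test score drops. The snippet below is a minimal illustration on synthetic data (the array shapes and the decision rule are made up for the example and are not part of the tutorial's dataset):</p>

```python
# Minimal sketch of permutation feature importance on synthetic data
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))        # three candidate features
y = (X[:, 0] > 0).astype(int)         # only feature 0 drives the label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# Permute one column at a time and record the drop in accuracy
drops = []
for col in range(X.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, col] = rng.permutation(X_perm[:, col])
    drops.append(baseline - model.score(X_perm, y_test))

print(drops)
```

<p>The feature that actually drives the label shows by far the largest drop, while uninformative features barely move the score. Scikit-learn packages this procedure as <em>permutation_importance</em>, which the tutorial uses later.</p>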
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%"></div>
</div>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<h2 class="wp-block-heading" id="h-implementing-a-customer-churn-prediction-model-in-python">Implementing a Customer Churn Prediction Model in Python</h2>



<p>In the following, we will implement a customer churn prediction model. We will train a random decision forest model on a dataset from Kaggle and optimize it using <a aria-label="undefined (opens in a new tab)" href="https://www.relataly.com/hyperparameter-tuning-with-grid-search/2261/" target="_blank" rel="noreferrer noopener">grid search</a>. The data contains customer-level information for a telecom provider and a binary prediction label that indicates which customers canceled their contracts and which did not. Finally, we will calculate the feature importance to understand how the model works. </p>



<p>The code is available in the GitHub repository linked below.</p>



<div class="wp-block-kadence-advancedbtn kb-buttons-wrap kb-btns_bddeda-14"><a class="kb-button kt-button button kb-btn_b5bf96-e2 kt-btn-size-standard kt-btn-width-type-full kb-btn-global-inherit  kt-btn-has-text-true kt-btn-has-svg-true  wp-block-button__link wp-block-kadence-singlebtn" href="https://github.com/flo7up/relataly-public-python-tutorials/blob/master/02%20Classification/017%20Permutation%20Feature%20Importance%20-%20Customer%20Churn%20Prediction%20using%20Random%20Decision%20Forest.ipynb" target="_blank" rel="noreferrer noopener"><span class="kb-svg-icon-wrap kb-svg-icon-fe_eye kt-btn-icon-side-left"><svg viewBox="0 0 24 24"  fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"  aria-hidden="true"><path d="M1 12s4-8 11-8 11 8 11 8-4 8-11 8-11-8-11-8z"/><circle cx="12" cy="12" r="3"/></svg></span><span class="kt-btn-inner-text">View on GitHub </span></a>

<a class="kb-button kt-button button kb-btn_8e2f54-ca kt-btn-size-standard kt-btn-width-type-full kb-btn-global-inherit  kt-btn-has-text-true kt-btn-has-svg-true  wp-block-button__link wp-block-kadence-singlebtn" href="https://github.com/flo7up/relataly-public-python-API-tutorials" target="_blank" rel="noreferrer noopener"><span class="kb-svg-icon-wrap kb-svg-icon-fa_github kt-btn-icon-side-left"><svg viewBox="0 0 496 512"  fill="currentColor" xmlns="http://www.w3.org/2000/svg"  aria-hidden="true"><path d="M165.9 397.4c0 2-2.3 3.6-5.2 3.6-3.3.3-5.6-1.3-5.6-3.6 0-2 2.3-3.6 5.2-3.6 3-.3 5.6 1.3 5.6 3.6zm-31.1-4.5c-.7 2 1.3 4.3 4.3 4.9 2.6 1 5.6 0 6.2-2s-1.3-4.3-4.3-5.2c-2.6-.7-5.5.3-6.2 2.3zm44.2-1.7c-2.9.7-4.9 2.6-4.6 4.9.3 2 2.9 3.3 5.9 2.6 2.9-.7 4.9-2.6 4.6-4.6-.3-1.9-3-3.2-5.9-2.9zM244.8 8C106.1 8 0 113.3 0 252c0 110.9 69.8 205.8 169.5 239.2 12.8 2.3 17.3-5.6 17.3-12.1 0-6.2-.3-40.4-.3-61.4 0 0-70 15-84.7-29.8 0 0-11.4-29.1-27.8-36.6 0 0-22.9-15.7 1.6-15.4 0 0 24.9 2 38.6 25.8 21.9 38.6 58.6 27.5 72.9 20.9 2.3-16 8.8-27.1 16-33.7-55.9-6.2-112.3-14.3-112.3-110.5 0-27.5 7.6-41.3 23.6-58.9-2.6-6.5-11.1-33.3 2.6-67.9 20.9-6.5 69 27 69 27 20-5.6 41.5-8.5 62.8-8.5s42.8 2.9 62.8 8.5c0 0 48.1-33.6 69-27 13.7 34.7 5.2 61.4 2.6 67.9 16 17.7 25.8 31.5 25.8 58.9 0 96.5-58.9 104.2-114.8 110.5 9.2 7.9 17 22.9 17 46.4 0 33.7-.3 75.4-.3 83.6 0 6.5 4.6 14.4 17.3 12.1C428.2 457.8 496 362.9 496 252 496 113.3 383.5 8 244.8 8zM97.2 352.9c-1.3 1-1 3.3.7 5.2 1.6 1.6 3.9 2.3 5.2 1 1.3-1 1-3.3-.7-5.2-1.6-1.6-3.9-2.3-5.2-1zm-10.8-8.1c-.7 1.3.3 2.9 2.3 3.9 1.6 1 3.6.7 4.3-.7.7-1.3-.3-2.9-2.3-3.9-2-.6-3.6-.3-4.3.7zm32.4 35.6c-1.6 1.3-1 4.3 1.3 6.2 2.3 2.3 5.2 2.6 6.5 1 1.3-1.3.7-4.3-1.3-6.2-2.2-2.3-5.2-2.6-6.5-1zm-11.4-14.7c-1.6 1-1.6 3.6 0 5.9 1.6 2.3 4.3 3.3 5.6 2.3 1.6-1.3 1.6-3.9 0-6.2-1.4-2.3-4-3.3-5.6-2z"/></svg></span><span class="kt-btn-inner-text">Relataly GitHub Repo </span></a></div>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%"></div>
</div>



<h3 class="wp-block-heading">Prerequisites</h3>



<p>Before starting the coding part, make sure that you have set up your <a href="https://www.python.org/downloads/" target="_blank" rel="noreferrer noopener">Python 3</a> environment and required packages. If you don&#8217;t have an environment, you can follow&nbsp;<a href="https://www.relataly.com/anaconda-python-environment-machine-learning/1663/" target="_blank" rel="noreferrer noopener">this tutorial</a>&nbsp;to set up the&nbsp;<a href="https://www.anaconda.com/products/individual" target="_blank" rel="noreferrer noopener">Anaconda environment</a>.</p>



<p>Make sure you install all required packages. In this tutorial, we will be working with the following packages:&nbsp;</p>



<ul class="wp-block-list">
<li>Pandas</li>



<li>NumPy</li>



<li>Matplotlib</li>



<li>Seaborn</li>
</ul>



<p>In addition, we will be using the machine learning library <strong><em>Scikit-learn</em></strong>.</p>



<p>You can install packages using console commands:</p>



<ul class="wp-block-list">
<li><em>pip install &lt;package name&gt;</em></li>



<li><em>conda install &lt;package name&gt;</em>&nbsp;(if you are using the anaconda packet manager)</li>
</ul>
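<p>If you are unsure whether everything is installed, a quick sanity check like the following will list what is importable (a convenience sketch; the version numbers on your machine will differ):</p>

```python
# Check that the packages used in this tutorial are importable
import importlib

versions = {}
for pkg in ["pandas", "numpy", "matplotlib", "seaborn", "sklearn"]:
    try:
        mod = importlib.import_module(pkg)
        versions[pkg] = getattr(mod, "__version__", "unknown")
    except ImportError:
        versions[pkg] = "NOT INSTALLED"

for pkg, version in versions.items():
    print(f"{pkg}: {version}")
```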



<h3 class="wp-block-heading" id="h-step-1-loading-the-customer-churn-data">Step #1 Loading the Customer Churn Data</h3>



<p>We begin by loading a customer churn <a href="https://www.kaggle.com/barun2104/telecom-churn" target="_blank" rel="noreferrer noopener">dataset from Kaggle</a>. If you work in the Kaggle Python environment, you can load the dataset directly into your Kaggle project. Otherwise, download it, place it under a file path of your choice, and adjust the file path variable in the code accordingly. </p>



<p>The dataset contains 3333 records and the following attributes.</p>



<ul class="wp-block-list">
<li><strong>Churn</strong>: The prediction label: 1 if the customer canceled service, 0 if not.</li>



<li><strong>AccountWeeks</strong>: number of weeks the customer has had an active account</li>



<li><strong>ContractRenewal</strong>: 1 if customer recently renewed contract, 0 if not</li>



<li><strong>DataPlan</strong>: 1 if the customer has a data plan, 0 if not</li>



<li><strong>DataUsage</strong>: gigabytes of monthly data usage</li>



<li><strong>CustServCalls</strong>: number of calls into customer service</li>



<li><strong>DayMins</strong>: average daytime minutes per month</li>



<li><strong>DayCalls</strong>: average number of daytime calls</li>



<li><strong>MonthlyCharge</strong>: average monthly bill</li>



<li><strong>OverageFee</strong>: the largest overage fee in the last 12 months</li>



<li><strong>RoamMins</strong>: average number of roaming minutes</li>
</ul>



<p>The following code loads the data from your local folder into your Python project:</p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}">import numpy as np 
import pandas as pd 
import math
from pandas.plotting import register_matplotlib_converters
import matplotlib.pyplot as plt 
import matplotlib.colors as mcolors
import matplotlib.dates as mdates 

from sklearn.metrics import confusion_matrix, classification_report
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.inspection import permutation_importance
import seaborn as sns


# set file path
filepath = &quot;data/Churn-prediction/&quot;

# Load train and test datasets
train_df = pd.read_csv(filepath + 'telecom_churn.csv')
train_df.head()</pre></div>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:false,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;null&quot;,&quot;mime&quot;:&quot;text/plain&quot;,&quot;theme&quot;:&quot;3024-day&quot;,&quot;lineNumbers&quot;:false,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:false,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Plain Text&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;text&quot;}">	Churn	AccountWeeks	ContractRenewal	DataPlan	DataUsage	CustServCalls	DayMins	DayCalls	MonthlyCharge	OverageFee	RoamMins
0	0		128				1				1			2.7			1				265.1		110		89.0			9.87		10.0
1	0		107				1				1			3.7			1				161.6		123		82.0			9.78		13.7
2	0		137				1				0			0.0			0				243.4		114		52.0			6.06		12.2
3	0		84				0				0			0.0			2				299.4		71		57.0			3.10		6.6
4	0		75				0				0			0.0			3				166.7		113		41.0			7.42		10.1</pre></div>



<h3 class="wp-block-heading" id="h-step-2-exploring-the-data">Step #2 Exploring the Data</h3>



<p>Before we begin with the preprocessing, we will quickly explore the data. For this purpose, we will create histograms for the different attributes in our data.</p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># Create pairwise plots for the feature columns, separated by prediction label value
df_plot = train_df.copy()

# Color the plots by the label column 'Churn'
sns.pairplot(df_plot, hue=&quot;Churn&quot;, height=2.5, palette='muted')</pre></div>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="990" data-attachment-id="6808" data-permalink="https://www.relataly.com/predicting-the-customer-churn-of-a-telecommunications-provider/2378/pairplots-churn-prediction/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2022/04/pairplots-churn-prediction.png" data-orig-size="1828,1768" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="pairplots-churn-prediction" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2022/04/pairplots-churn-prediction.png" src="https://www.relataly.com/wp-content/uploads/2022/04/pairplots-churn-prediction-1024x990.png" alt="" class="wp-image-6808" srcset="https://www.relataly.com/wp-content/uploads/2022/04/pairplots-churn-prediction.png 1024w, https://www.relataly.com/wp-content/uploads/2022/04/pairplots-churn-prediction.png 300w, https://www.relataly.com/wp-content/uploads/2022/04/pairplots-churn-prediction.png 768w, https://www.relataly.com/wp-content/uploads/2022/04/pairplots-churn-prediction.png 1536w, https://www.relataly.com/wp-content/uploads/2022/04/pairplots-churn-prediction.png 1828w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Histograms of the churn prediction dataset separated by prediction label (red=churn, blue= no churn)</figcaption></figure>



<p>We can see that the distributions of several attributes look reasonable and resemble a normal distribution, for example, OverageFee, DayMins, and DayCalls. However, the distribution of the prediction label is unbalanced: more customers stay with their contract (label class = 0) than cancel it (label class = 1). </p>
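<p>The class imbalance can be quantified directly with <em>value_counts</em>. The snippet below is a sketch that recreates a label column with the same proportions as the Kaggle dataset (483 churners out of 3333 customers, about 14.5&#160;%); with the real data you would call <em>train_df['Churn'].value_counts(normalize=True)</em> directly:</p>

```python
# Quantify the class imbalance of the churn label
import pandas as pd

# Recreate a label column with the dataset's proportions (483 churners, 2850 non-churners)
train_df = pd.DataFrame({"Churn": [1] * 483 + [0] * 2850})

counts = train_df["Churn"].value_counts()
shares = train_df["Churn"].value_counts(normalize=True)
print(counts)
print(shares.round(3))
```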



<h3 class="wp-block-heading" id="h-step-3-data-preprocessing">Step #3 Data Preprocessing</h3>



<p>The next step is to preprocess the data. I have reduced this part to a minimum to keep this tutorial simple. For example, I do not treat the unbalanced label classes. However, this would be appropriate to improve the model performance in a real business context. The imbalanced data is also why I chose a decision forest as a model type. Decision forests can handle unbalanced data relatively well compared to traditional models such as logistic regression. </p>



<p>The following code splits the data into the train (x_train) and test data (x_test) and creates the respective datasets, which only contain the label class (y_train, y_test). The ratio is 0.7, resulting in 2333 records in the training dataset and 1000 in the test dataset.</p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># Create Training Dataset
x_df = train_df[train_df.columns[train_df.columns.isin(['AccountWeeks', 'ContractRenewal', 'DataPlan','DataUsage', 'CustServCalls', 'DayCalls', 'MonthlyCharge', 'OverageFee', 'RoamMins'])]].copy()
y_df = train_df['Churn'].copy()

# Split the data into x_train and y_train data sets
x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, train_size=0.7, random_state=0)
x_train</pre></div>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:false,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;null&quot;,&quot;mime&quot;:&quot;text/plain&quot;,&quot;theme&quot;:&quot;3024-day&quot;,&quot;lineNumbers&quot;:false,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:false,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Plain Text&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;text&quot;}">		AccountWeeks	ContractRenewal	DataPlan	DataUsage	CustServCalls	DayCalls	MonthlyCharge	OverageFee	RoamMins
2918	58				1				0			0.00		4				112			53.0			13.29		0.0
1884	51				0				1			3.32		2				60			74.2			10.03		12.3
2823	87				1				0			0.00		2				80			50.0			9.35		16.6
2319	83				1				1			2.35		3				105			91.5			12.65		8.7
2980	84				1				0			0.00		3				86			62.0			13.78		14.3
...		...				...				...			...			...				...			...				...			...
835	27				1				0			0.00		1				75			31.0			10.43		9.9
3264	89				1				1			1.59		0				98			50.9			10.36		5.9
1653	93				0				0			0.00		1				78			42.0			10.99		11.1
2607	91				1				0			0.00		3				100			53.0			11.97		9.9
2732	130				0				0			0.00		5				106			68.0			18.19		16.9</pre></div>



<h3 class="wp-block-heading" id="h-step-4-fit-an-optimized-decision-forest-model-for-churn-prediction-using-grid-search">Step #4 Fit an Optimized Decision Forest Model for Churn Prediction using Grid Search</h3>



<p>Now comes the exciting part. We will train a series of 36 decision forests and then choose the best-performing model. The technique used in this process is called hyperparameter tuning (more specifically, grid search), and I have recently published <a aria-label="undefined (opens in a new tab)" href="https://www.relataly.com/hyperparameter-tuning-with-grid-search/2261/" target="_blank" rel="noreferrer noopener">a separate article on this topic</a>.</p>



<p>The following code defines the parameters the grid search will test (max_depth, n_estimators, and min_samples_split). Then the code runs the grid search and trains the decision forests. Finally, we print out the model ranking along with model parameters. </p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># Define parameters
max_depth=[2, 4, 8, 16]
n_estimators = [64, 128, 256]
min_samples_split = [5, 20, 30]

param_grid = dict(max_depth=max_depth, n_estimators=n_estimators, min_samples_split=min_samples_split)

# Build the gridsearch
dfrst = RandomForestClassifier(class_weight='balanced')  # grid search supplies the tuned parameters
grid = GridSearchCV(estimator=dfrst, param_grid=param_grid, cv = 5)
grid_results = grid.fit(x_train, y_train)

# Summarize the results in a readable format
results_df = pd.DataFrame(grid_results.cv_results_)
results_df.sort_values(by=['rank_test_score'], ascending=True, inplace=True)

# Reduce the results to selected columns
results_filtered = results_df[results_df.columns[results_df.columns.isin(['param_max_depth', 'param_min_samples_split', 'param_n_estimators','std_fit_time', 'rank_test_score', 'std_test_score', 'mean_test_score'])]].copy()
results_filtered</pre></div>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:false,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;null&quot;,&quot;mime&quot;:&quot;text/plain&quot;,&quot;theme&quot;:&quot;3024-day&quot;,&quot;lineNumbers&quot;:false,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:false,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Plain Text&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;text&quot;}">std_fit_time	param_max_depth	param_min_samples_split	param_n_estimators	mean_test_score	std_test_score	rank_test_score
28				0.004742		16						5					128	0.931415	0.006950		1
27				0.002620		16						5					64	0.925848	0.008177		2
29				0.015711		16						5					256	0.925846	0.006156		3
20				0.006258		8						5					256	0.923704	0.007961		4
19				0.001816		8						5					128	0.921988	0.006458		5
18				0.002161		8						5					64	0.919847	0.007716		6
31				0.003728		16						20					128	0.902690	0.011642		7
30				0.002057		16						20					64	0.901836	0.009789		8
32				0.004940		16						20					256	0.899691	0.009813		9
21				0.001994		8						20					64	0.898408	0.008710		10
22				0.003761		8						20					128	0.897121	0.007529		11
23				0.003828		8						20					256	0.895833	0.009159		12
33				0.003798		16						30					64	0.885546	0.010394		13
26				0.005560		8						30					256	0.885541	0.014937		14
...</pre></div>



<p>The best-performing model is the model with index 28, which scores 93.1%. Its hyperparameters are as follows:</p>



<ul class="wp-block-list">
<li>max_depth = 16</li>



<li>min_samples_split = 5</li>



<li>n_estimators = 128</li>
</ul>



<p>We will proceed with this model. So what does this model tell us?</p>
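<p>Instead of scanning the result table, you can also read the winning configuration off programmatically: a fitted grid search exposes it via <em>best_params_</em> and <em>best_score_</em>. The sketch below demonstrates this on a small synthetic dataset; with the tutorial's objects you would query <em>grid_results</em> directly:</p>

```python
# Read the winning hyperparameters from a fitted GridSearchCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Small synthetic dataset so the grid search runs quickly
X, y = make_classification(n_samples=300, random_state=0)
param_grid = {"max_depth": [2, 8], "n_estimators": [16, 32]}

grid = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=3)
grid_results = grid.fit(X, y)

print(grid_results.best_params_)   # the winning hyperparameter combination
print(grid_results.best_score_)    # its mean cross-validated accuracy
```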



<p>We can gain an overview of the distributions of our customers according to their churn probability. Just use the following code:</p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># Extract the best model from the grid search
best_clf = grid_results.best_estimator_

# Predict churn probabilities for the test set
y_pred_prob = best_clf.predict_proba(x_test) 
churnproba = y_pred_prob[:,1]

# Plot the distribution of predicted churn probabilities
sns.histplot(data=churnproba)</pre></div>



<figure class="wp-block-image size-full"><img decoding="async" width="517" height="324" data-attachment-id="6810" data-permalink="https://www.relataly.com/predicting-the-customer-churn-of-a-telecommunications-provider/2378/image-12-12/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2022/04/image-12.png" data-orig-size="517,324" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-12" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2022/04/image-12.png" src="https://www.relataly.com/wp-content/uploads/2022/04/image-12.png" alt="Customer Base According to their Churn Rate" class="wp-image-6810" srcset="https://www.relataly.com/wp-content/uploads/2022/04/image-12.png 517w, https://www.relataly.com/wp-content/uploads/2022/04/image-12.png 300w" sizes="(max-width: 517px) 100vw, 517px" /><figcaption class="wp-element-caption">Customer Base According to their Churn Rate</figcaption></figure>



<p>Customers who are likely to churn have a churn probability greater than 0.5 and appear further to the right in the diagram. We therefore don&#8217;t have to worry about the customers on the far left (&lt;0.5).</p>
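<p>Applying the 0.5 threshold to the probability vector is a one-liner. The sketch below uses a handful of made-up probabilities in place of the <code>churnproba</code> array from the previous step:</p>

```python
# Hypothetical churn probabilities standing in for the churnproba array above
churnproba = [0.1, 0.3, 0.45, 0.55, 0.7, 0.9]

# Customers at or above the 0.5 threshold are treated as churn candidates
churn_candidates = [p for p in churnproba if p >= 0.5]
print(len(churn_candidates))  # → 3
```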



<h3 class="wp-block-heading" id="h-step-5-best-model-performance-insights">Step #5 Best Model Performance Insights</h3>



<p>Let&#8217;s take a more detailed look at the performance of the best model. We do this by calculating the confusion matrix. </p>



<p>If you want to learn more about measuring the performance of classification models, check out<a href="https://www.relataly.com/measuring-classification-performance-with-python-and-scikit-learn/846/" target="_blank" rel="noreferrer noopener"> this tutorial</a>.</p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># Extract the best decision forest 
best_clf = grid_results.best_estimator_
y_pred = best_clf.predict(x_test)

# Create a confusion matrix
cnf_matrix = confusion_matrix(y_test, y_pred)

# Create heatmap from the confusion matrix
class_names=[False, True] 
tick_marks = [0.5, 1.5]
fig, ax = plt.subplots(figsize=(7, 6))
sns.heatmap(pd.DataFrame(cnf_matrix), annot=True, cmap=&quot;Blues&quot;, fmt='g')
ax.xaxis.set_label_position(&quot;top&quot;)
plt.tight_layout()
plt.title('Confusion matrix')
plt.ylabel('Actual label'); plt.xlabel('Predicted label')
plt.yticks(tick_marks, class_names); plt.xticks(tick_marks, class_names)</pre></div>



<figure class="wp-block-image size-large"><img decoding="async" width="486" height="452" data-attachment-id="2387" data-permalink="https://www.relataly.com/predicting-the-customer-churn-of-a-telecommunications-provider/2378/image-14-4/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2020/07/image-14.png" data-orig-size="486,452" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-14" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2020/07/image-14.png" src="https://www.relataly.com/wp-content/uploads/2020/07/image-14.png" alt="Confusion matrix on churn probabilities calculated with feature permutation importance" class="wp-image-2387" srcset="https://www.relataly.com/wp-content/uploads/2020/07/image-14.png 486w, https://www.relataly.com/wp-content/uploads/2020/07/image-14.png 300w" sizes="(max-width: 486px) 100vw, 486px" /></figure>



<p>Of the 1000 customers in the test dataset, our model correctly classified 100 as churn candidates (true positives) and 832 as unlikely to churn (true negatives). In 30 cases, the model falsely flagged customers as churn candidates (false positives), and 38 churn candidates were missed and classified as non-churners (false negatives). The result is a model accuracy of 93.2% (based on a 0.5 threshold). </p>
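<p>The accuracy figure can be verified directly from the four confusion-matrix counts stated above:</p>

```python
# Confusion-matrix counts from the test set of 1000 customers
tn, fp = 832, 30   # actual non-churners: correctly kept vs. falsely flagged
fn, tp = 38, 100   # actual churners: missed vs. correctly flagged

# Accuracy = correct predictions / all predictions
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"{accuracy:.1%}")  # → 93.2%
```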



<h3 class="wp-block-heading" id="h-step-6-permutation-feature-importance">Step #6 Permutation Feature Importance</h3>



<p>Now that we have trained a model that gives good results, we want to understand the importance of the model&#8217;s features. With the following code, we calculate the Feature Importance score. Then we visualize the results in a barplot.</p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># Compute the permutation feature importance on the test set
r = permutation_importance(best_clf, x_test, y_test, n_repeats=30, random_state=0)

# Collect the mean importance score of each feature and sort in descending order
data_im = pd.DataFrame(r.importances_mean, columns=['feature_permutation_score'])
data_im['feature_names'] = x_test.columns
data_im = data_im.sort_values('feature_permutation_score', ascending=False)

# Plot the bar chart
fig, ax = plt.subplots(figsize=(16, 5))
sns.barplot(y=data_im['feature_names'], x=&quot;feature_permutation_score&quot;, data=data_im, palette='nipy_spectral')
ax.set_title(&quot;Random Forest Feature Importances&quot;)</pre></div>



<figure class="wp-block-image size-full"><img decoding="async" width="1013" height="334" data-attachment-id="6801" data-permalink="https://www.relataly.com/predicting-the-customer-churn-of-a-telecommunications-provider/2378/output-2-2/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2022/04/output-2.png" data-orig-size="1013,334" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="output-2" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2022/04/output-2.png" src="https://www.relataly.com/wp-content/uploads/2022/04/output-2.png" alt="" class="wp-image-6801" srcset="https://www.relataly.com/wp-content/uploads/2022/04/output-2.png 1013w, https://www.relataly.com/wp-content/uploads/2022/04/output-2.png 300w, https://www.relataly.com/wp-content/uploads/2022/04/output-2.png 768w" sizes="(max-width: 1013px) 100vw, 1013px" /></figure>



<p>The feature ranking can provide a starting point for deeper analysis. As we can see, the most important features are the monthly fee, data usage, and customer service calls (CustServCalls). The importance of customer service calls is of particular interest, as it could indicate that customers who contact customer service tend to have negative experiences.</p>



<h2 class="wp-block-heading" id="h-summary">Summary</h2>



<p>This article has shown how to implement a churn prediction model with Python and scikit-learn. We calculated the permutation feature importance to analyze which features contribute to the performance of our model. You have learned that permutation feature importance can provide data scientists with new insights into the context of a prediction model. The technique is therefore often a good starting point for further investigation. </p>



<p>I am always interested in improving my articles and learning from my audience. If you liked this article, show your appreciation by leaving a comment. And if you didn&#8217;t, let me know too. Cheers </p>



<h2 class="wp-block-heading">Sources and Further Reading</h2>



<ol class="wp-block-list">
<li><a href="https://amzn.to/3MAy8j5" target="_blank" rel="noreferrer noopener">Andriy Burkov (2020) Machine Learning Engineering</a></li>



<li><a href="https://amzn.to/3D0gB0e" target="_blank" rel="noreferrer noopener">Oliver Theobald (2020) Machine Learning For Absolute Beginners: A Plain English Introduction</a></li>



<li><a href="https://amzn.to/3S9Nfkl" target="_blank" rel="noreferrer noopener">Aurélien Géron (2019) Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems </a></li>



<li><a href="https://amzn.to/3EKidwE" target="_blank" rel="noreferrer noopener">David Forsyth (2019) Applied Machine Learning Springer</a></li>
</ol>



<p class="has-contrast-2-color has-base-3-background-color has-text-color has-background"><em>The links above to Amazon are affiliate links. By buying through these links, you support the Relataly.com blog and help to cover the hosting costs. Using the links does not affect the price.</em></p>



<p>And if you are interested in text mining and customer satisfaction, consider taking a look at my recent blog about sentiment analysis:</p>



<figure class="wp-block-embed is-type-wp-embed is-provider-relataly-com wp-block-embed-relataly-com"><div class="wp-block-embed__wrapper">
<blockquote class="wp-embedded-content" data-secret="HQ0lUMzbZR"><a href="https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/">Sentiment Analysis with Naive Bayes and Logistic Regression in Python</a></blockquote><iframe loading="lazy" class="wp-embedded-content" sandbox="allow-scripts" security="restricted"  title="&#8220;Sentiment Analysis with Naive Bayes and Logistic Regression in Python&#8221; &#8212; relataly.com" src="https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/embed/#?secret=WMWtohaT3c#?secret=HQ0lUMzbZR" data-secret="HQ0lUMzbZR" width="600" height="338" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe>
</div></figure>
<p>The post <a href="https://www.relataly.com/predicting-the-customer-churn-of-a-telecommunications-provider/2378/">Customer Churn Prediction &#8211; Understanding Models with Feature Permutation Importance using Python</a> appeared first on <a href="https://www.relataly.com">relataly.com</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.relataly.com/predicting-the-customer-churn-of-a-telecommunications-provider/2378/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2378</post-id>	</item>
		<item>
		<title>Training a Sentiment Classifier with Naive Bayes and Logistic Regression in Python</title>
		<link>https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/</link>
					<comments>https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/#respond</comments>
		
		<dc:creator><![CDATA[Florian Follonier]]></dc:creator>
		<pubDate>Sat, 20 Jun 2020 21:49:05 +0000</pubDate>
				<category><![CDATA[Algorithms]]></category>
		<category><![CDATA[Classification (multi-class)]]></category>
		<category><![CDATA[Finance]]></category>
		<category><![CDATA[Insurance]]></category>
		<category><![CDATA[Logistic Regression]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Naive Bayes]]></category>
		<category><![CDATA[Natural Language Processing]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[Retail]]></category>
		<category><![CDATA[Scikit-Learn]]></category>
		<category><![CDATA[Seaborn]]></category>
		<category><![CDATA[Sentiment Analysis]]></category>
		<category><![CDATA[Telecommunications]]></category>
		<category><![CDATA[AI in Business]]></category>
		<category><![CDATA[AI in E-Commerce]]></category>
		<category><![CDATA[Beginner Tutorials]]></category>
		<category><![CDATA[Classic Machine Learning]]></category>
		<category><![CDATA[Digital Transformation]]></category>
		<category><![CDATA[Social Media Data]]></category>
		<category><![CDATA[Supervised Learning]]></category>
		<guid isPermaLink="false">https://www.relataly.com/?p=2007</guid>

					<description><![CDATA[<p>Are you ready to learn about the exciting world of social media sentiment analysis using Python? In this article, we&#8217;ll dive into how companies are leveraging machine learning to extract insights from Twitter comments, and how you can do the same. By comparing two popular classification models &#8211; Naive Bayes and Logistic Regression &#8211; we&#8217;ll ... <a title="Training a Sentiment Classifier with Naive Bayes and Logistic Regression in Python" class="read-more" href="https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/" aria-label="Read more about Training a Sentiment Classifier with Naive Bayes and Logistic Regression in Python">Read more</a></p>
<p>The post <a href="https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/">Training a Sentiment Classifier with Naive Bayes and Logistic Regression in Python</a> appeared first on <a href="https://www.relataly.com">relataly.com</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">
<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>Are you ready to learn about the exciting world of social media sentiment analysis using Python? In this article, we&#8217;ll dive into how companies are leveraging machine learning to extract insights from Twitter comments, and how you can do the same. By comparing two popular classification models &#8211; Naive Bayes and Logistic Regression &#8211; we&#8217;ll help you identify which one best fits your needs.</p>



<p>Businesses are using sentiment analysis to make better sense of the vast amounts of data available online and on social media platforms. Understanding customer opinions and feedback can help companies identify trends and make more informed decisions. Whether you&#8217;re a business professional looking to leverage the power of social media data or a machine learning enthusiast, this article has everything you need to get started.</p>



<p>We&#8217;ll begin with an introduction to the concept of sentiment analysis and its theoretical foundations. Then, we&#8217;ll guide you through the practical steps of implementing a sentiment classifier in Python. Our model will analyze text snippets and categorize them into one of three sentiment categories: &#8220;positive,&#8221; &#8220;neutral,&#8221; or &#8220;negative.&#8221; Finally, we&#8217;ll compare the performance of Naive Bayes and Logistic Regression classifiers.</p>



<p>By the end of this article, you&#8217;ll have the skills and knowledge to perform sentiment analysis on social media data and apply these insights to your business or personal projects. So let&#8217;s jump right in!</p>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%">
<figure class="wp-block-image size-large"><img decoding="async" width="512" height="423" data-attachment-id="13349" data-permalink="https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/flo7up_a_person_talking_to_a_virtual_assistant-_colorful_popart_2f34967a-ce4e-420d-bc01-75a4e47c1181-copy/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2023/03/Flo7up_a_person_talking_to_a_virtual_assistant._Colorful_popart_2f34967a-ce4e-420d-bc01-75a4e47c1181-Copy.png" data-orig-size="920,760" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Flo7up_a_person_talking_to_a_virtual_assistant._Colorful_popart_2f34967a-ce4e-420d-bc01-75a4e47c1181-Copy" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2023/03/Flo7up_a_person_talking_to_a_virtual_assistant._Colorful_popart_2f34967a-ce4e-420d-bc01-75a4e47c1181-Copy.png" src="https://www.relataly.com/wp-content/uploads/2023/03/Flo7up_a_person_talking_to_a_virtual_assistant._Colorful_popart_2f34967a-ce4e-420d-bc01-75a4e47c1181-Copy-512x423.png" alt="Sentiment analysis has various use cases from analyzing social media to reviewing customer feedback in call centers." 
class="wp-image-13349" srcset="https://www.relataly.com/wp-content/uploads/2023/03/Flo7up_a_person_talking_to_a_virtual_assistant._Colorful_popart_2f34967a-ce4e-420d-bc01-75a4e47c1181-Copy.png 512w, https://www.relataly.com/wp-content/uploads/2023/03/Flo7up_a_person_talking_to_a_virtual_assistant._Colorful_popart_2f34967a-ce4e-420d-bc01-75a4e47c1181-Copy.png 300w, https://www.relataly.com/wp-content/uploads/2023/03/Flo7up_a_person_talking_to_a_virtual_assistant._Colorful_popart_2f34967a-ce4e-420d-bc01-75a4e47c1181-Copy.png 768w, https://www.relataly.com/wp-content/uploads/2023/03/Flo7up_a_person_talking_to_a_virtual_assistant._Colorful_popart_2f34967a-ce4e-420d-bc01-75a4e47c1181-Copy.png 920w" sizes="(max-width: 512px) 100vw, 512px" /><figcaption class="wp-element-caption">Sentiment analysis has various use cases from analyzing social media to reviewing customer feedback in call centers.</figcaption></figure>
</div>
</div>



<p>Also: <a href="https://www.relataly.com/predicting-the-purchase-intention-of-online-shoppers/982/" target="_blank" rel="noreferrer noopener">Classifying Purchase Intention of Online Shoppers with Python</a></p>
</div>
</div>



<h2 class="wp-block-heading">What is Sentiment Analysis?</h2>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>Sentiment analysis is the process of identifying the sentiment, or emotional tone, of a piece of text. This can be useful for a wide range of applications, such as identifying customer sentiment towards a product or service, or detecting the overall sentiment of a social media post or news article.</p>



<p>Sentiment analysis is typically performed using natural language processing (NLP) techniques and machine learning algorithms. These tools allow computers to &#8220;understand&#8221; the meaning of text and identify the sentiment it contains. Sentiment analysis can be performed at various levels of granularity, from identifying the sentiment of an entire document to identifying the sentiment of individual words or phrases within a document.</p>



<h2 class="wp-block-heading">How Sentiment Classification Works</h2>



<figure class="wp-block-image size-large is-resized"><img decoding="async" data-attachment-id="4767" data-permalink="https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/image-61-3/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2021/06/image-61.png" data-orig-size="1171,492" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-61" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2021/06/image-61.png" src="https://www.relataly.com/wp-content/uploads/2021/06/image-61-1024x430.png" alt="sentiment classification using bayes and logistic regression in python" class="wp-image-4767" width="772" height="326" srcset="https://www.relataly.com/wp-content/uploads/2021/06/image-61.png 300w, https://www.relataly.com/wp-content/uploads/2021/06/image-61.png 768w" sizes="(max-width: 772px) 100vw, 772px" /><figcaption class="wp-element-caption">A sentiment classifier with three classes</figcaption></figure>



<p>There are many different approaches to sentiment analysis, and the specific methods used can vary depending on the specific application and the type of text being analyzed. Some common techniques for performing sentiment analysis include using machine learning algorithms to classify text as positive, negative, or neutral, and using lexicons, or lists of words with pre-defined sentiment, to identify the sentiment of individual words or phrases. In this way, it is possible to measure the emotions towards a specific topic, e.g., products, brands, political parties, services, or trends. </p>



<p>We can show how sentiment analysis works with a simple example:</p>



<ul class="wp-block-list">
<li>&#8220;This product is excellent!&#8221;</li>



<li>&#8220;I don&#8217;t like this ice cream at all.&#8221;</li>



<li>&#8220;Yesterday, I&#8217;ve seen a dolphin.&#8221;</li>
</ul>



<p>While the first sentence denotes a positive sentiment, the second sentence is negative, and in the third sentence, the sentiment is neutral. A sentiment classifier can automatically label these sentences:</p>



<figure class="wp-block-table"><table><tbody><tr><td><strong>Text Sequence</strong></td><td class="has-text-align-left" data-align="left"><strong>Sentiment Label</strong></td></tr><tr><td>This product is excellent!</td><td class="has-text-align-left" data-align="left">POSITIVE</td></tr><tr><td>I don&#8217;t like this ice cream at all.</td><td class="has-text-align-left" data-align="left">NEGATIVE</td></tr><tr><td>Yesterday, I&#8217;ve seen a dolphin.</td><td class="has-text-align-left" data-align="left">NEUTRAL</td></tr></tbody></table><figcaption class="wp-element-caption">Sentiment Labels of Text Sequences </figcaption></figure>



<p>Predicting sentiment classes opens the door to more advanced statistical analysis and automated text processing. </p>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%"></div>
</div>



<h3 class="wp-block-heading" id="h-use-cases-for-sentiment-analysis">Use Cases for Sentiment Analysis</h3>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>Sentiment analysis is used in various application domains, including the following:</p>



<ul class="wp-block-list">
<li>Sentiment analysis can lead to more efficient customer service by prioritizing customer requests. For example, when customers complain about services or products, an algorithm can identify and prioritize these messages so that sales agents answer them first. This can increase customer satisfaction and reduce the churn rate. </li>



<li>Twitter and Amazon reviews have become the first port of call for many customers today when exchanging information about products, brands, and trends or expressing their own opinions. A sentiment classifier enables businesses to evaluate this information systematically. It can collect data from social media posts and product reviews in real time. For example, marketing managers can quickly obtain feedback on how well customers perceive campaigns and ads.</li>



<li>In stock market prediction, analysts measure the sentiment of social media or news feeds towards stocks or brands. The sentiment is then used as an additional feature alongside price data to create better forecasting models. Some forecasting approaches even rely exclusively on sentiment.</li>
</ul>



<p>Sentiment Analysis will find further adoption in the coming years. Especially in marketing and customer service, companies will increasingly use sentiment analysis to automate business processes and offer their customers a better customer experience.</p>



<h3 class="wp-block-heading" id="h-how-sentiment-analysis-works-feature-modelling">How Sentiment Analysis Works: Feature Modelling</h3>



<p>An essential step in the development of the Sentiment Classifier is language modeling. Before we can train a machine learning model, we need to bring the natural text into a structured format that the model can statistically assess in the training process. Various modeling techniques exist for this purpose. The two most common models are <strong>bag-of-words </strong>and <strong>n-grams</strong>.</p>



<p>Also: <a href="https://www.relataly.com/business-use-cases-for-openai-gpt-models-chatgpt-davinci/12200/" target="_blank" rel="noreferrer noopener">9 Powerful Applications of OpenAI&#8217;s ChatGPT and Davinci</a></p>



<h4 class="wp-block-heading" id="h-bag-of-word-model">Bag-of-word Model</h4>



<p>The bag-of-word model represents a text by the counts of its unique words, converting each word into an individual feature. Fill words with low predictive power, such as &#8220;the&#8221; or &#8220;a,&#8221; are filtered out. Consider the following text sample: </p>



<p><em>&#8220;Bob likes to play basketball. But his friend Daniel prefers to play soccer. &#8220;</em></p>



<p>By filtering out fill words, we convert this sample to: </p>



<p><em>&#8220;Bob&#8221;, &#8220;likes&#8221;, &#8220;play&#8221;, &#8220;basketball&#8221;, &#8220;friend&#8221;, &#8220;Daniel&#8221;, &#8220;play&#8221;, &#8220;soccer&#8221;</em>.</p>



<p>In the next step, the algorithm converts these words into a normalized form, where each word becomes a column:</p>



<figure class="wp-block-image size-large is-resized"><img decoding="async" data-attachment-id="4227" data-permalink="https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/image-15-9/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2021/05/image-15.png" data-orig-size="1046,134" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-15" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2021/05/image-15.png" src="https://www.relataly.com/wp-content/uploads/2021/05/image-15-1024x131.png" alt="" class="wp-image-4227" width="613" height="77" srcset="https://www.relataly.com/wp-content/uploads/2021/05/image-15.png 1024w, https://www.relataly.com/wp-content/uploads/2021/05/image-15.png 300w, https://www.relataly.com/wp-content/uploads/2021/05/image-15.png 768w" sizes="(max-width: 613px) 100vw, 613px" /><figcaption class="wp-element-caption">Text sample after transformation</figcaption></figure>



<p>The bag-of-word model is easy to implement. However, it does not consider grammar or word order.</p>
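<p>The transformation can be sketched with the Python standard library alone. The stop-word list below is purely illustrative; in practice you would use a full list (and typically a vectorizer such as scikit-learn&#8217;s CountVectorizer):</p>

```python
from collections import Counter

# Illustrative stop-word list; real applications use much larger lists
STOP_WORDS = {"to", "but", "his", "the", "a"}

def bag_of_words(text):
    # Lowercase, strip punctuation, drop fill words, and count the remaining tokens
    tokens = [w.strip(".,!?").lower() for w in text.split()]
    return Counter(w for w in tokens if w and w not in STOP_WORDS)

counts = bag_of_words("Bob likes to play basketball. But his friend Daniel prefers to play soccer.")
print(counts["play"])  # → 2 ("play" occurs twice in the sample)
```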



<h4 class="wp-block-heading" id="h-what-is-an-n-gram-model">What is an N-gram Model?</h4>



<p>The n-gram model considers multiple consecutive words in a text sequence and thus captures word sequence. The n stands for the number of words considered. </p>



<p>For example, in a 2-gram model, the sentence <em>&#8220;Bob likes to play basketball. But his friend Daniel prefers to play soccer.&#8221;</em> will be converted to the following model: </p>



<p>&#8220;Bob likes,&#8221; &#8220;likes to,&#8221; &#8220;to play,&#8221; &#8220;play basketball,&#8221; and so on. The n-gram model is often used to supplement the bag-of-word model. It is also possible to combine different n-gram models. For a 3-gram model, the text would be converted to &#8220;Bob likes to,&#8221; &#8220;likes to play,&#8221; &#8220;to play basketball,&#8221; and so on. Combining multiple n-gram models, however, can quickly increase model complexity.</p>
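<p>A sliding window over the token list is all it takes to build such n-grams; this minimal sketch reproduces the 2-gram example above:</p>

```python
def ngrams(text, n=2):
    # Split the text into tokens and slide a window of length n across them
    tokens = [w.strip(".,!?") for w in text.split()]
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

bigrams = ngrams("Bob likes to play basketball.", n=2)
print(bigrams)  # → ['Bob likes', 'likes to', 'to play', 'play basketball']
```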



<h3 class="wp-block-heading" id="h-sentiment-classes-and-model-training">Sentiment Classes and Model Training </h3>



<p>The training of sentiment classifiers traditionally takes place in a supervised learning process. For this purpose, a training data set is used, which contains text sections with associated sentiment tendencies as prediction labels. Depending on which labels we provide and the training data, the classifier will learn to predict sentiment on a more or less fine-grained scale. Capturing neutral sentiment requires choosing an odd number of classes. </p>



<p>More advanced classifiers can detect different sorts of emotions and, for example, detect whether someone expresses anger, happiness, sadness, and so on. It basically comes down to which prediction labels you provide with the training data.</p>



<p>When the classifier is trained on a one-gram model, the classifier will learn that certain words such as &#8220;good&#8221; or &#8220;great&#8221; increase the probability that a text is associated with a positive sentiment. Consequently, when the classifier encounters these words in a new text sample, it will predict a higher probability of positive sentiment. On the other hand, the classifier will learn that words such as &#8220;hate&#8221; or &#8220;dislike&#8221; are often used to express negative opinions and thus increase the probability of negative sentiment.</p>
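<p>This learning dynamic can be sketched with a deliberately naive frequency-count classifier. The toy training set below is made up for illustration; the tutorial itself uses the proper Naive Bayes and Logistic Regression models from scikit-learn:</p>

```python
from collections import Counter, defaultdict

# Tiny hypothetical labeled training set, for illustration only
train = [
    ("this product is great", "positive"),
    ("good quality and great value", "positive"),
    ("i hate this product", "negative"),
    ("i dislike the quality", "negative"),
]

# Count how often each word appears under each sentiment label
word_counts = defaultdict(Counter)
for text, label in train:
    word_counts[label].update(text.split())

def predict(text):
    # Score each label by how often the text's words occurred under it in training
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(predict("a great product"))  # → positive ("great" was seen only in positive examples)
```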



<h3 class="wp-block-heading" id="h-language-complications">Language Complications </h3>



<p>Is sentiment analysis that simple? Well, not quite. The cases described so far were deliberately chosen to be very simple. However, human language is very complex, and many peculiarities make it more difficult in practice to identify the sentiment in a sentence or paragraph. Here are some examples:</p>



<ul class="wp-block-list">
<li>Inversions: &#8220;this product is not so great.&#8221;</li>



<li>Typos: &#8220;I live this product!&#8221;</li>



<li>Comparisons: &#8220;Product a is better than product z.&#8221;</li>



<li>In a text passage, expression of pros and cons: &#8220;An advantage is that. But on the other hand&#8230;&#8221; </li>



<li>Unknown vocabulary: &#8220;This product is just whuopii!&#8221;</li>



<li>Missing words: &#8220;How can you not  this product?&#8221;</li>
</ul>
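<p>A minimal lexicon-based scorer makes the inversion problem tangible: it sums word polarities and is easily fooled by negation. The tiny lexicon below is purely illustrative (real lexicons contain thousands of scored words):</p>

```python
# Tiny illustrative sentiment lexicon
LEXICON = {"excellent": 1, "great": 1, "like": 1, "love": 1,
           "hate": -1, "dislike": -1, "terrible": -1, "bad": -1}

def lexicon_sentiment(text):
    # Sum the polarity of the known words; >0 positive, <0 negative, 0 neutral
    score = sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in text.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(lexicon_sentiment("This product is excellent!"))           # → positive
print(lexicon_sentiment("I don't like this ice cream at all."))  # → positive (wrong: the negation is missed)
```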



<p>Fortunately, there are methods to solve the complications mentioned above. I will explain more about them in one of my future articles. But for now, let&#8217;s stay with the basics and implement a simple classifier.</p>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%"></div>
</div>



<h2 class="wp-block-heading" id="h-training-a-sentiment-classifier-using-twitter-data-in-python">Training a Sentiment Classifier Using Twitter Data in Python</h2>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<p>Venturing into the practical aspects of sentiment classification, our aim in this tutorial is to create an efficient sentiment classifier. Our focus will be on a dataset provided by Kaggle, comprising tens of thousands of tweets, each categorized as positive, neutral, or negative.</p>



<p>Our objective is to design a classifier capable of assigning one of these three sentiment categories to new text sequences. To this end, we will employ two distinct algorithms &#8211; Logistic Regression and Naive Bayes &#8211; as our estimators.</p>



<p>The tutorial culminates with a comparative analysis of the prediction performance of both models, followed by a set of test predictions. Through this hands-on approach, you will gain an understanding of the nuances of sentiment classification and its application in understanding public opinion, especially on social media platforms like Twitter.</p>



<p>Boost your sentiment analysis skills with our step-by-step guide, and learn to leverage machine learning tools for precise sentiment prediction.</p>



<p>The code is available on the GitHub repository.</p>



<div class="wp-block-kadence-advancedbtn kb-buttons-wrap kb-btns_9fa82e-91"><a class="kb-button kt-button button kb-btn_c78218-5c kt-btn-size-standard kt-btn-width-type-full kb-btn-global-inherit  kt-btn-has-text-true kt-btn-has-svg-true  wp-block-button__link wp-block-kadence-singlebtn" href="https://github.com/flo7up/relataly-public-python-tutorials/blob/master/08%20Natural%20Language%20Processing/700%20NLP%20-%20Simple%20Sentiment%20Analysis%20using%20Bayes%20and%20Logistic%20Regression.ipynb" target="_blank" rel="noreferrer noopener"><span class="kb-svg-icon-wrap kb-svg-icon-fe_eye kt-btn-icon-side-left"><svg viewBox="0 0 24 24"  fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"  aria-hidden="true"><path d="M1 12s4-8 11-8 11 8 11 8-4 8-11 8-11-8-11-8z"/><circle cx="12" cy="12" r="3"/></svg></span><span class="kt-btn-inner-text">View on GitHub </span></a>

<a class="kb-button kt-button button kb-btn_41bac4-d5 kt-btn-size-standard kt-btn-width-type-full kb-btn-global-inherit  kt-btn-has-text-true kt-btn-has-svg-true  wp-block-button__link wp-block-kadence-singlebtn" href="https://github.com/flo7up/relataly-public-python-API-tutorials" target="_blank" rel="noreferrer noopener"><span class="kb-svg-icon-wrap kb-svg-icon-fa_github kt-btn-icon-side-left"><svg viewBox="0 0 496 512"  fill="currentColor" xmlns="http://www.w3.org/2000/svg"  aria-hidden="true"><path d="M165.9 397.4c0 2-2.3 3.6-5.2 3.6-3.3.3-5.6-1.3-5.6-3.6 0-2 2.3-3.6 5.2-3.6 3-.3 5.6 1.3 5.6 3.6zm-31.1-4.5c-.7 2 1.3 4.3 4.3 4.9 2.6 1 5.6 0 6.2-2s-1.3-4.3-4.3-5.2c-2.6-.7-5.5.3-6.2 2.3zm44.2-1.7c-2.9.7-4.9 2.6-4.6 4.9.3 2 2.9 3.3 5.9 2.6 2.9-.7 4.9-2.6 4.6-4.6-.3-1.9-3-3.2-5.9-2.9zM244.8 8C106.1 8 0 113.3 0 252c0 110.9 69.8 205.8 169.5 239.2 12.8 2.3 17.3-5.6 17.3-12.1 0-6.2-.3-40.4-.3-61.4 0 0-70 15-84.7-29.8 0 0-11.4-29.1-27.8-36.6 0 0-22.9-15.7 1.6-15.4 0 0 24.9 2 38.6 25.8 21.9 38.6 58.6 27.5 72.9 20.9 2.3-16 8.8-27.1 16-33.7-55.9-6.2-112.3-14.3-112.3-110.5 0-27.5 7.6-41.3 23.6-58.9-2.6-6.5-11.1-33.3 2.6-67.9 20.9-6.5 69 27 69 27 20-5.6 41.5-8.5 62.8-8.5s42.8 2.9 62.8 8.5c0 0 48.1-33.6 69-27 13.7 34.7 5.2 61.4 2.6 67.9 16 17.7 25.8 31.5 25.8 58.9 0 96.5-58.9 104.2-114.8 110.5 9.2 7.9 17 22.9 17 46.4 0 33.7-.3 75.4-.3 83.6 0 6.5 4.6 14.4 17.3 12.1C428.2 457.8 496 362.9 496 252 496 113.3 383.5 8 244.8 8zM97.2 352.9c-1.3 1-1 3.3.7 5.2 1.6 1.6 3.9 2.3 5.2 1 1.3-1 1-3.3-.7-5.2-1.6-1.6-3.9-2.3-5.2-1zm-10.8-8.1c-.7 1.3.3 2.9 2.3 3.9 1.6 1 3.6.7 4.3-.7.7-1.3-.3-2.9-2.3-3.9-2-.6-3.6-.3-4.3.7zm32.4 35.6c-1.6 1.3-1 4.3 1.3 6.2 2.3 2.3 5.2 2.6 6.5 1 1.3-1.3.7-4.3-1.3-6.2-2.2-2.3-5.2-2.6-6.5-1zm-11.4-14.7c-1.6 1-1.6 3.6 0 5.9 1.6 2.3 4.3 3.3 5.6 2.3 1.6-1.3 1.6-3.9 0-6.2-1.4-2.3-4-3.3-5.6-2z"/></svg></span><span class="kt-btn-inner-text">Relataly GitHub Repo </span></a></div>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:33.33%"></div>
</div>



<h3 class="wp-block-heading" id="h-prerequisites">Prerequisites</h3>



<p>Before starting the coding part, make sure that you have set up your <a href="https://www.python.org/downloads/" target="_blank" rel="noreferrer noopener">Python 3</a> environment and required packages. If you don&#8217;t have an environment, follow&nbsp;<a href="https://www.relataly.com/category/data-science/setup-anaconda-environment/" target="_blank" rel="noreferrer noopener">this tutorial</a>&nbsp;to set up the&nbsp;<a href="https://www.anaconda.com/products/individual" target="_blank" rel="noreferrer noopener">Anaconda environment</a>. Also, make sure you install all required packages. In this tutorial, we will be working with the following standard packages:&nbsp;</p>



<ul class="wp-block-list">
<li><em><a href="https://pandas.pydata.org/" target="_blank" rel="noreferrer noopener">pandas</a></em></li>



<li><em><a href="https://numpy.org/" target="_blank" rel="noreferrer noopener">NumPy</a></em></li>



<li><a href="https://docs.python.org/3/library/math.html" target="_blank" rel="noreferrer noopener">math</a></li>



<li><em><a href="https://matplotlib.org/" target="_blank" rel="noreferrer noopener">matplotlib</a></em></li>
</ul>



<p>In addition, we will be using the machine learning libraries <a href="https://scikit-learn.org/stable/" target="_blank" rel="noreferrer noopener">scikit-learn</a> and <a href="https://seaborn.pydata.org/" target="_blank" rel="noreferrer noopener">seaborn</a> for visualization. </p>



<p>You can install packages using console commands:</p>



<ul class="wp-block-list">
<li><em>pip install &lt;package name&gt;</em></li>



<li><em>conda install &lt;package name&gt;</em>&nbsp;(if you are using the anaconda packet manager)</li>
</ul>



<h3 class="wp-block-heading" id="h-about-the-sentiment-dataset">About the Sentiment Dataset</h3>



<p>Let&#8217;s begin with the technical part. First, we will download the data from the <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/data">Tweet Sentiment Extraction competition</a> on Kaggle.com. If you are working in the Kaggle Python environment, you can also load the data directly into your Python project. </p>



<p>We will only use the following two CSV files:</p>



<ul class="wp-block-list">
<li><strong>train.csv:</strong> contains 27480 text samples.</li>



<li><strong>test.csv:</strong> contains 3533 text samples for validation purposes.</li>
</ul>



<p>The two files contain four columns:</p>



<ul class="wp-block-list">
<li>textID: An identifier</li>



<li>text: The raw text</li>



<li>selected_text: Contains a selected part of the original text</li>



<li>sentiment: Contains the prediction label</li>
</ul>



<p>Copy the two files (train.csv and test.csv) into a folder that your Python environment can access. For simplicity, I recommend placing them directly in the folder of your Python notebook. If you put them somewhere else, don&#8217;t forget to adjust the file path when loading the data.</p>



<h3 class="wp-block-heading" id="h-step-1-load-the-data">Step #1 Load the Data</h3>



<p>Assuming that you have copied the files into your Python environment, the next step is to load the data into your Python project and convert it into a Pandas DataFrame. The following code performs these steps and then prints a data summary.</p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}">import math 
import numpy as np 
import pandas as pd 
import matplotlib.pyplot as plt 
import matplotlib

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.metrics import precision_recall_fscore_support as score

import seaborn as sns

# Load the train data
train_path = &quot;train.csv&quot;
train_df = pd.read_csv(train_path) 

# Load the test data
sub_test_path = &quot;test.csv&quot;
test_df = pd.read_csv(sub_test_path) 

# Print a Summary of the data
print(train_df.shape, test_df.shape)
print(train_df.head(5))</pre></div>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:false,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;null&quot;,&quot;mime&quot;:&quot;text/plain&quot;,&quot;theme&quot;:&quot;3024-day&quot;,&quot;lineNumbers&quot;:false,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:false,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Plain Text&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;text&quot;}">			textID		text												selected_text									sentiment
0			cb774db0d1	I`d have responded, if I were going					I`d have responded, if I were going				neutral
1			549e992a42	Sooo SAD I will miss you here in San Diego!!!		Sooo SAD										negative
2			088c60f138	my boss is bullying me...							bullying me										negative
3			9642c003ef	what interview! leave me alone						leave me alone									negative
4			358bd9e861	Sons of ****, why couldn`t they put them on t...	Sons of ****,									negative
...	
27481 rows × 4 columns</pre></div>



<h3 class="wp-block-heading" id="h-step-2-clean-and-preprocess-the-data">Step #2 Clean and Preprocess the Data</h3>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">
<p>Next, let&#8217;s quickly clean and preprocess the data. First, as a best practice, we will transform the sentiment labels of the train and the test data into numeric values.</p>



<p>In addition, we will add a column in which we store the length of the text samples.</p>
</div>



<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow">
<figure class="wp-block-image size-large is-resized"><img decoding="async" data-attachment-id="2042" data-permalink="https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/image-12-3/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2020/06/image-12.png" data-orig-size="389,108" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-12" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2020/06/image-12.png" src="https://www.relataly.com/wp-content/uploads/2020/06/image-12.png" alt="" class="wp-image-2042" width="248" height="68" srcset="https://www.relataly.com/wp-content/uploads/2020/06/image-12.png 389w, https://www.relataly.com/wp-content/uploads/2020/06/image-12.png 300w" sizes="(max-width: 248px) 100vw, 248px" /><figcaption class="wp-element-caption">Three-class sentiment scale</figcaption></figure>
</div>
</div>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># Define Class Integer Values
cleanup_nums = {&quot;sentiment&quot;: {&quot;negative&quot;: 1, &quot;neutral&quot;: 2, &quot;positive&quot;: 3}}

# Replace the Classes with Integer Values
train_df.replace(cleanup_nums, inplace=True)

# Clean the Test Data
test_df.replace(cleanup_nums, inplace=True)

# Create a Feature based on Text Length
train_df['text_length'] = train_df['text'].str.len() # Store string length of each sample
train_df = train_df.sort_values(['text_length'], ascending=True)
train_df = train_df.dropna()
train_df </pre></div>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:false,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;null&quot;,&quot;mime&quot;:&quot;text/plain&quot;,&quot;theme&quot;:&quot;3024-day&quot;,&quot;lineNumbers&quot;:false,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:false,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Plain Text&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;text&quot;}">			textID			text			selected_text	sentiment	text_length
14339		5c6abc28a1		ow				ow				2			3.0
26005		0b3fe0ca78		?				?				2			3.0
11524		4105b6a05d		aw				aw				2			3.0
641			5210cc55ae		no				no				2			3.0
25699		ee8ee67cb3		ME				ME				2			3.0
...</pre></div>



<h3 class="wp-block-heading" id="h-step-3-explore-the-data">Step #3 Explore the Data</h3>



<p>It&#8217;s always good to check the label distribution for a potential imbalance. We do this by plotting the distribution of labels in the text samples. This is important because it helps ensure that the trained model can make accurate predictions on new data. If the class labels are unbalanced, then the model is more likely to be biased toward the more common classes, which can lead to poor performance on less common classes. </p>
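<p>Besides plotting, you can quantify the balance with relative class frequencies. A minimal sketch with made-up labels (in the tutorial, you would call this on train_df['sentiment']):</p>

```python
import pandas as pd

# Hypothetical sentiment labels (1=negative, 2=neutral, 3=positive)
labels = pd.Series([1, 2, 2, 3, 2, 1, 3, 2, 3, 2])

# Relative class frequencies reveal imbalance at a glance
print(labels.value_counts(normalize=True).sort_index())
```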



<p>Also: <a href="https://www.relataly.com/exploratory-feature-preparation-for-regression-with-python-and-scikit-learn/8832/" target="_blank" rel="noreferrer noopener">Feature Engineering and Selection for Regression Models</a></p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># Print the Distribution of Sentiment Labels
sns.set_theme(style=&quot;whitegrid&quot;)
ax = train_df['sentiment'].value_counts(sort=False).plot(kind='barh', color='b')
ax.set_xlabel('Count')
ax.set_ylabel('Labels')</pre></div>



<figure class="wp-block-image size-large"><img decoding="async" width="378" height="265" data-attachment-id="4636" data-permalink="https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/image-42-4/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2021/06/image-42.png" data-orig-size="378,265" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-42" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2021/06/image-42.png" src="https://www.relataly.com/wp-content/uploads/2021/06/image-42.png" alt="Balance of Class Labels in a Machine Learning Use Case" class="wp-image-4636" srcset="https://www.relataly.com/wp-content/uploads/2021/06/image-42.png 378w, https://www.relataly.com/wp-content/uploads/2021/06/image-42.png 300w" sizes="(max-width: 378px) 100vw, 378px" /></figure>



<p>As we can see, our data is a bit imbalanced, but the differences are still within an acceptable range. </p>



<p> Let&#8217;s also quickly take a look at the distribution of text length. </p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># Visualize a distribution of text_length
sns.histplot(data=train_df, x='text_length', bins='auto', color='darkblue');
plt.title('Text Length Distribution')</pre></div>



<figure class="wp-block-image size-large is-resized"><img decoding="async" data-attachment-id="4635" data-permalink="https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/image-41-5/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2021/06/image-41.png" data-orig-size="397,279" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-41" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2021/06/image-41.png" src="https://www.relataly.com/wp-content/uploads/2021/06/image-41.png" alt="" class="wp-image-4635" width="468" height="329" srcset="https://www.relataly.com/wp-content/uploads/2021/06/image-41.png 397w, https://www.relataly.com/wp-content/uploads/2021/06/image-41.png 300w" sizes="(max-width: 468px) 100vw, 468px" /></figure>



<h3 class="wp-block-heading" id="h-step-4-train-a-sentiment-classifier">Step #4 Train a Sentiment Classifier </h3>



<p>Next, we will prepare the data and train a classification model. We will use the pipeline class of the scikit-learn framework and a bag-of-word model to keep things simple. In NLP, we typically have to transform and split up the text into sentences and words. The pipeline class is thus instrumental in NLP because it allows us to perform multiple actions on the same data in a row.</p>



<p>The pipeline contains transformation activities and a prediction algorithm, the final estimator. In the following, we create two pipelines that use two different prediction algorithms: </p>



<ul class="wp-block-list">
<li>Logistic Regression </li>



<li>Naive Bayes</li>
</ul>



<h4 class="wp-block-heading" id="h-4a-sentiment-classification-using-logistic-regression">4a) Sentiment Classification using Logistic Regression </h4>



<p>The first model that we will train uses the logistic regression algorithm. We create a new pipeline. Then we add two transformers and the logistic regression estimator. The pipeline will perform the following activities. </p>



<ul class="wp-block-list">
<li><strong><a href="https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html">CountVectorizer</a></strong>: The vectorizer counts the number of words in each text sequence and creates the bag-of-word models. </li>



<li><strong><a href="https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html">TfidfTransformer</a></strong>: The TF-IDF (term frequency&#8211;inverse document frequency) transformer scales down the impact of words that occur very often in the training data and are thus less informative for the estimator than words that occur in a smaller fraction of the text samples. Examples are words such as &#8220;to&#8221; or &#8220;a.&#8221;</li>



<li><a href="https://www.relataly.com/category/machine-learning-algorithms/logistic-regression/" target="_blank" rel="noreferrer noopener">Logistic Regression</a>: With multi_class set to &#8216;auto&#8217; and the liblinear solver, scikit-learn uses logistic regression in a one-vs-rest approach. This splits our three-class prediction problem into three binary problems: each model learns to distinguish one class from all other classes, and the class whose model outputs the highest score wins. </li>
</ul>
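<p>To make the first two pipeline steps more tangible, here is a toy illustration of what CountVectorizer and TfidfTransformer produce (separate from the actual pipeline we build below):</p>

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

corpus = [
    "I love this product",
    "I hate this product",
    "this product is okay",
]

# CountVectorizer builds the bag-of-words matrix: one row per text, one
# column per vocabulary word (single-letter tokens like "I" are dropped)
counts = CountVectorizer().fit_transform(corpus)
print(counts.shape)
# (3, 6)

# TfidfTransformer rescales the counts so that words occurring in every
# text (here "this" and "product") contribute less to the features
tfidf = TfidfTransformer().fit_transform(counts)
print(tfidf.toarray().round(2))
```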



<p>Our pipeline will transform the data and fit the logistic regression model to the training data. After executing the pipeline, we will directly evaluate the model&#8217;s performance. We will do this by defining a function that generates predictions on the test dataset and then evaluating the performance of our model. The function will print the performance results and store them in a dataframe. Later, when we want to compare the models, we can access the results from the dataframe. </p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># Create a transformation pipeline
# The pipeline sequentially applies a list of transforms and as a final estimator logistic regression 
pipeline_log = Pipeline([
                ('count', CountVectorizer()),
                ('tfidf', TfidfTransformer()),
                ('clf', LogisticRegression(solver='liblinear', multi_class='auto')),
        ])

# Train model using the created sklearn pipeline
model_name = 'logistic regression classifier'
model_lgr = pipeline_log.fit(train_df['text'], train_df['sentiment'])

def evaluate_results(model, test_df):
    # Predict class labels using the trained model
    test_df['pred'] = model.predict(test_df['text'])
    y_true = test_df['sentiment']
    y_pred = test_df['pred']
    target_names = ['negative', 'neutral', 'positive']

    # Print the classification report
    results_log = classification_report(y_true, y_pred, target_names=target_names, output_dict=True)
    results_df_log = pd.DataFrame(results_log).transpose()
    print(results_df_log)

    # Plot the confusion matrix
    matrix = confusion_matrix(y_true, y_pred)
    sns.heatmap(pd.DataFrame(matrix), 
                annot=True, fmt=&quot;d&quot;, linewidths=.5, cmap=&quot;YlGnBu&quot;)
    plt.xlabel('Predictions')
    plt.ylabel('Actual')
    
    # Returns (precision, recall, f1, support), each macro-averaged
    model_score = score(y_true, y_pred, average='macro')
    return model_score

    
# Evaluate model performance
model_score = evaluate_results(model_lgr, test_df)
performance_df = pd.DataFrame([{'model_name': model_name, 
                                'precision': model_score[0], 
                                'recall': model_score[1], 
                                'f1_score': model_score[2]}])</pre></div>



<figure class="wp-block-image size-large is-resized"><img decoding="async" data-attachment-id="4643" data-permalink="https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/image-47-3/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2021/06/image-47.png" data-orig-size="634,555" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-47" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2021/06/image-47.png" src="https://www.relataly.com/wp-content/uploads/2021/06/image-47.png" alt="performance of our logistic regression sentiment classifier" class="wp-image-4643" width="562" height="492" srcset="https://www.relataly.com/wp-content/uploads/2021/06/image-47.png 634w, https://www.relataly.com/wp-content/uploads/2021/06/image-47.png 300w" sizes="(max-width: 562px) 100vw, 562px" /></figure>



<h4 class="wp-block-heading" id="h-4b-sentiment-classification-using-naive-bayes">4b) Sentiment Classification using Naive Bayes</h4>



<p>We will reuse the code from the last step to create another pipeline. However, we will exchange the Logistic Regressor with Naive Bayes (&#8220;MultinomialNB&#8221;). Naive Bayes is commonly used in natural language processing. The algorithm calculates the probability of each tag for a text sequence and then outputs the tag with the highest score. For example, the words &#8220;likes&#8221; and &#8220;good&#8221; appear more often in texts of the category &#8220;positive sentiment&#8221; than in texts of the &#8220;negative&#8221; or &#8220;neutral&#8221; categories. In this way, the model predicts how likely it is that an unknown text containing those words belongs to either category. </p>



<p>We will reuse the previously defined function to print a classification report and plot the results in a confusion matrix. </p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># Create a pipeline which transforms phrases into normalized feature vectors and uses a bayes estimator
model_name = 'bayes classifier'

pipeline_bayes = Pipeline([
                ('count', CountVectorizer()),
                ('tfidf', TfidfTransformer()),
                ('gnb', MultinomialNB()),
                ])

# Train model using the created sklearn pipeline
model_bayes = pipeline_bayes.fit(train_df['text'], train_df['sentiment'])

# Evaluate model performance
model_score = evaluate_results(model_bayes, test_df)
performance_df = pd.concat([performance_df,
                            pd.DataFrame([{'model_name': model_name, 
                                           'precision': model_score[0], 
                                           'recall': model_score[1], 
                                           'f1_score': model_score[2]}])], ignore_index=True)</pre></div>



<h3 class="wp-block-heading" id="h-step-5-measuring-multi-class-performance">Step #5 Measuring Multi-class Performance</h3>



<p>So which classifier achieved better performance? It&#8217;s not so easy to say because it depends on the metrics. We will compare the classification performance of our two classifiers using the following metrics:</p>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-layout-flow wp-block-column-is-layout-flow" style="flex-basis:66.66%">
<ul class="wp-block-list">
<li><strong>Accuracy </strong>is the ratio of correctly predicted observations to the total number of observations.</li>



<li><strong>Precision</strong> is the ratio of true positives to all observations predicted as positive, i.e., TP / (TP + FP).</li>



<li><strong>Recall</strong> is the ratio of true positives to all observations that are actually positive, i.e., TP / (TP + FN). </li>



<li><strong>F1-Score</strong> is the harmonic mean of precision and recall. Because it accounts for both false positives and false negatives, it is useful when you have an unequal class distribution.</li>
</ul>
</div>
</div>
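<p>To make these definitions concrete, here is a small worked example on hypothetical labels (1=negative, 2=neutral, 3=positive), using the same scikit-learn functions our evaluation relies on:</p>

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical true vs. predicted labels
y_true = [1, 1, 2, 2, 2, 3, 3, 3, 3, 2]
y_pred = [1, 2, 2, 2, 3, 3, 3, 1, 3, 2]

# Accuracy: 7 of 10 predictions are correct
print(accuracy_score(y_true, y_pred))
# 0.7

# Macro averaging computes precision, recall, and F1 per class
# and then takes the unweighted mean over the three classes
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average='macro')
print(round(precision, 2), round(recall, 2), round(f1, 2))
# 0.67 0.67 0.67
```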



<p>You may wonder which of our three classes is the positive class. The answer is that we have to choose the positive class ourselves; the remaining classes are then counted as negative. Defining the positive class lets us reflect that some classes may matter more than others. The classification reports printed in the previous steps show these metrics separately for each label. </p>



<p>Another option is to use an averaged metric. The weighted average (see the classification report) weights each class by its share of the dataset, so more frequent labels contribute more to the overall score. The macro average, in contrast, gives every class the same weight regardless of how often it occurs. Because our classes are equally important, I decided to use the macro average. </p>
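<p>The difference between the macro and weighted averages is easiest to see on a deliberately imbalanced, hypothetical label set:</p>

```python
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical imbalanced labels: the neutral class (2) dominates,
# and the classifier simply predicts neutral for everything
y_true = [2] * 8 + [1, 3]
y_pred = [2] * 10

# Macro: every class counts equally, so the two missed classes hurt a lot
macro_recall = precision_recall_fscore_support(
    y_true, y_pred, average='macro', zero_division=0)[1]

# Weighted: classes count by frequency, favoring the dominant class
weighted_recall = precision_recall_fscore_support(
    y_true, y_pred, average='weighted', zero_division=0)[1]

print(round(macro_recall, 2), round(weighted_recall, 2))
# 0.33 0.8
```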



<h3 class="wp-block-heading" id="h-step-6-comparing-model-performance">Step #6 Comparing Model Performance</h3>



<p>The following code prints the performance metrics for the two classifiers and then creates a barplot to illustrate the results. In this specific case, the recall happens to roughly equal the accuracy. </p>



<p>If you want to learn more about measuring classification performance, check out<a href="https://www.relataly.com/measuring-classification-performance-with-python-and-scikit-learn/846/" target="_blank" rel="noreferrer noopener"> this article</a>.</p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}"># Compare model performance
print(performance_df)

performance_df = performance_df.sort_values('model_name')
fig, ax = plt.subplots(figsize=(12, 4))
tidy = performance_df.melt(id_vars='model_name').rename(columns=str.title)
sns.barplot(y='Model_Name', x='Value', hue='Variable', data=tidy, ax=ax, palette='husl',  linewidth=1, edgecolor=&quot;w&quot;)
plt.title('Model Performance Comparison (Macro)')</pre></div>



<figure class="wp-block-image size-large is-resized"><img decoding="async" data-attachment-id="4639" data-permalink="https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/image-44-4/#main" data-orig-file="https://www.relataly.com/wp-content/uploads/2021/06/image-44.png" data-orig-size="1164,510" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image-44" data-image-description="" data-image-caption="" data-large-file="https://www.relataly.com/wp-content/uploads/2021/06/image-44.png" src="https://www.relataly.com/wp-content/uploads/2021/06/image-44-1024x449.png" alt="Performance comparison of the bayes classification model and the logistic regression classifier" class="wp-image-4639" width="744" height="326" srcset="https://www.relataly.com/wp-content/uploads/2021/06/image-44.png 1024w, https://www.relataly.com/wp-content/uploads/2021/06/image-44.png 300w, https://www.relataly.com/wp-content/uploads/2021/06/image-44.png 768w, https://www.relataly.com/wp-content/uploads/2021/06/image-44.png 1164w" sizes="(max-width: 744px) 100vw, 744px" /></figure>



<p>So we see that our Logistic Regression model performs slightly better than the Naive Bayes model. Of course, there is still plenty of room for improvement, for example through hyperparameter tuning or richer text features, and other algorithms could push the performance further.</p>
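One common way to squeeze more performance out of the Logistic Regression model is hyperparameter tuning. The following is a minimal sketch using scikit-learn's <code>GridSearchCV</code> over the regularization strength; the toy corpus and pipeline here are illustrative stand-ins, not the tutorial's actual training setup:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Tiny stand-in corpus; the tutorial trains on a much larger dataset
texts = ['great product', 'terrible service', 'just a tree', 'love it',
         'awful experience', 'nothing special', 'fantastic support',
         'worst purchase ever']
labels = [3, 1, 2, 3, 1, 2, 3, 1]  # 1=Negative, 2=Neutral, 3=Positive

pipeline = Pipeline([
    ('tfidf', TfidfVectorizer(ngram_range=(1, 2))),
    ('clf', LogisticRegression(max_iter=1000)),
])

# Search a small grid of regularization strengths with 2-fold CV
param_grid = {'clf__C': [0.1, 1.0, 10.0]}
search = GridSearchCV(pipeline, param_grid, cv=2)
search.fit(texts, labels)

print(search.best_params_)
best_model = search.best_estimator_
```

On real data you would widen the grid (and possibly the vectorizer settings) and use more folds; the refitted <code>best_model</code> can then replace the plain pipeline in the evaluation step.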



<h3 class="wp-block-heading" id="h-step-7-make-test-predictions">Step #7 Make Test Predictions</h3>



<p>Finally, we use the Logistic Regression classifier to generate some test predictions. Feel free to try it out! Change the phrases in the testphrases array and convince yourself that the classifier works. </p>



<div class="wp-block-codemirror-blocks-code-block code-block"><pre class="CodeMirror" data-setting="{&quot;showPanel&quot;:true,&quot;languageLabel&quot;:false,&quot;fullScreenButton&quot;:true,&quot;copyButton&quot;:true,&quot;mode&quot;:&quot;python&quot;,&quot;mime&quot;:&quot;text/x-python&quot;,&quot;theme&quot;:&quot;monokai&quot;,&quot;lineNumbers&quot;:true,&quot;styleActiveLine&quot;:false,&quot;lineWrapping&quot;:true,&quot;readOnly&quot;:true,&quot;fileName&quot;:&quot;&quot;,&quot;language&quot;:&quot;Python&quot;,&quot;maxHeight&quot;:&quot;400px&quot;,&quot;modeName&quot;:&quot;python&quot;}">testphrases = ['Mondays just suck!', 'I love this product', 'That is a tree', 'Terrible service']
label_map = {1: 'Negative', 2: 'Neutral', 3: 'Positive'}
for testphrase in testphrases:
    result = model_lgr.predict([testphrase]) # use model_bayes for predictions with the Naive Bayes model
    print(testphrase + '-&gt; ' + label_map[result[0]])</pre></div>



<ul class="wp-block-list">
<li>Mondays just suck!-&gt; Negative </li>



<li>I love this product-&gt; Positive </li>



<li>That is a tree-&gt; Neutral </li>



<li>Terrible service-&gt; Negative</li>
</ul>
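<p>Beyond hard labels, scikit-learn classifiers also expose class probabilities via predict_proba, which is handy when you want a confidence score alongside each prediction. The following is a self-contained sketch trained on a tiny stand-in corpus, not the tutorial's actual Twitter data:</p>

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Small stand-in training set with the tutorial's label scheme
texts = ['I love this product', 'Terrible service', 'That is a tree',
         'Mondays just suck!', 'Wonderful experience', 'It is an object']
labels = [3, 1, 2, 1, 3, 2]  # 1=Negative, 2=Neutral, 3=Positive

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# predict_proba returns one probability per class, ordered like model.classes_
probs = model.predict_proba(['Terrible service'])[0]
for cls, p in zip(model.classes_, probs):
    print(f'class {cls}: {p:.2f}')
```

<p>The probabilities sum to one across the three classes, so a prediction that barely wins (e.g. 0.40 vs. 0.35) can be flagged for manual review.</p>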



<h2 class="wp-block-heading" id="h-summary">Summary</h2>



<p>That&#8217;s it! In this tutorial, you have learned to build a simple sentiment classifier that detects sentiment expressed in text on a three-class scale. We trained and tested two standard classification algorithms &#8211; Logistic Regression and Naive Bayes &#8211; compared their performance, and made some test predictions. </p>



<p>The best way to deepen your knowledge of text classification is to apply it in practice. I thus want to encourage you to use your knowledge by tackling other NLP challenges. For example, you could build a topic classifier that assigns text phrases to labels such as sports, fashion, cars, or technology. If you are still looking for data for such a project, you will find exciting datasets on Kaggle.com.</p>



<p>Let me know if you found this tutorial helpful. I appreciate your feedback!</p>



<p>The post <a href="https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/">Training a Sentiment Classifier with Naive Bayes and Logistic Regression in Python</a> appeared first on <a href="https://www.relataly.com">relataly.com</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.relataly.com/simple-sentiment-analysis-using-naive-bayes-and-logistic-regression/2007/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2007</post-id>	</item>
	</channel>
</rss>
