Multivariate Anomaly Detection on Time-Series Data in Python: Using Isolation Forests to Detect Credit Card Fraud

Credit card fraud has become one of the most common use cases for anomaly detection systems. The number of fraud attempts has risen sharply, resulting in billions of dollars in losses. Early detection of fraud attempts with machine learning is therefore becoming increasingly important. In this article, we take on the fight against international credit card fraud and develop a multivariate anomaly detection model in Python that spots fraudulent payment transactions. The model will use the Isolation Forest algorithm, one of the most effective techniques for detecting outliers. Isolation Forests are so-called ensemble models. They have various hyperparameters with which we can optimize model performance. However, we will not do this manually; instead, we will use grid search for hyperparameter tuning. To assess the performance of our model, we will also compare it with other models.

The remainder of this article is structured as follows: We start with a brief introduction to anomaly detection and look at the Isolation Forest algorithm. Equipped with these theoretical foundations, we then turn to the practical part, in which we train and validate an isolation forest that detects credit card fraud. We use an unsupervised learning approach, where the model learns to distinguish regular from suspicious card transactions. We will train our model on a public dataset from Kaggle that contains credit card transactions. Finally, we will compare the performance of our model against two nearest neighbor algorithms (LOF and KNN).

Identifying anomalous data points in two dimensions in credit card fraud detection

Multivariate Anomaly Detection

Before we take a closer look at the use case and our unsupervised approach, let’s briefly discuss anomaly detection. Anomaly detection deals with finding data points that deviate markedly from the bulk of the data, for example, points far from the mean or median of a distribution. In machine learning, the term is often used synonymously with outlier detection.
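To make this idea concrete, here is a minimal univariate sketch on synthetic (hypothetical) data that flags points deviating from the sample mean by more than three standard deviations:

import numpy as np

# Minimal univariate example with synthetic data: flag points that deviate
# from the sample mean by more than three standard deviations.
rng = np.random.default_rng(0)
signal = np.append(rng.normal(loc=10, scale=1, size=1000), 25.0)  # one planted outlier

z_scores = np.abs(signal - signal.mean()) / signal.std()
print(signal[z_scores > 3])  # should recover the planted outlier (25.0)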

Some anomaly detection models work with a single feature (univariate data), for example, in monitoring electronic signals. However, most anomaly detection models use multivariate data, which means they have two (bivariate) or more (multivariate) features. They find a wide range of applications, including the following:

  • Predictive Maintenance and Detection of Malfunctions and Decay
  • Detection of Retail Bank Credit Card Fraud
  • Detection of Pricing Errors
  • Cyber Security, for example, Network Intrusion Detection
  • Detecting Fraudulent Market Behavior in Investment Banking

Unsupervised Algorithms for Anomaly Detection

Outlier detection can be framed as a classification problem. However, the field is more diverse: outlier detection can be approached with both supervised and unsupervised machine learning techniques. Explaining the multitude of outlier detection techniques would go beyond the scope of this article. Still, the following chart provides a good overview of standard algorithms that learn unsupervised.

A prerequisite for supervised learning is that we know which data points are outliers and which belong to the regular data. In credit card fraud detection, this information is available because banks can validate with their customers whether a suspicious transaction was fraudulent or not. In many other outlier detection settings, it remains unclear which outliers represent events of interest and which are just noise.

Whether we know which data points in our dataset are outliers and which are not affects the selection of possible algorithms for the outlier detection problem. If class labels are unavailable, unsupervised learning techniques are the natural choice. If class labels are available, we can use both unsupervised and supervised learning algorithms.

In the following, we will focus on Isolation Forests.

Unsupervised Algorithms for Anomaly Detection.

The Isolation Forest (“iForest”) Algorithm

Isolation forests (sometimes called iForests) are among the most powerful techniques for identifying anomalies in a dataset. They belong to the group of so-called ensemble models. The predictions of ensemble models do not rely on a single model. Instead, they combine the results of multiple independent models (decision trees). Nevertheless, isolation forests should not be confused with traditional random decision forests. While random forests predict given class labels (supervised learning), isolation forests learn to distinguish outliers from inliers (regular data) in an unsupervised learning process.

An Isolation Forest contains multiple independent isolation trees. To build an Isolation Tree, the algorithm recursively divides the training data at random points until the data points are isolated from each other. The number of partitions required to isolate a point tells us whether it is an anomalous or a regular point. The underlying assumption is that random splits isolate an anomalous point much sooner than a nominal (regular) point.
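To turn path lengths into a score, the original iForest paper (Liu et al., 2008) normalizes the expected path length by the average path length of an unsuccessful search in a binary search tree. The following sketch implements that formula; the example path lengths are illustrative:

import numpy as np

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def c(n):
    # Average path length of an unsuccessful binary search tree search,
    # used to normalize the path lengths (Liu et al., 2008)
    if n <= 1:
        return 0.0
    return 2.0 * (np.log(n - 1) + EULER_GAMMA) - 2.0 * (n - 1) / n

def anomaly_score(mean_path_length, n_samples):
    # Scores close to 1 indicate an anomaly; scores well below 0.5 indicate an inlier
    return 2.0 ** (-mean_path_length / c(n_samples))

print(anomaly_score(3, 256))   # isolated after a few splits -> high score (~0.82)
print(anomaly_score(12, 256))  # many splits needed -> lower score (~0.44)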

Isolation Tree and Isolation Forest (Tree Ensemble)

How the Isolation Forest Algorithm Works

The illustration below shows an exemplary training run of an Isolation Tree on univariate data, i.e., with only one feature. The algorithm has split the data at five random points between the minimum and maximum values of a random sample. The isolated points are colored purple. In the example, it took two partitions to isolate the point on the far left; the other purple points were separated after four and five splits.

The partitioning process ends when the algorithm has isolated all points from each other or when all remaining points have equal values. At the end of the process, each point is assigned an outlier score based on how many splits it took to isolate it.

When using an isolation forest model on unseen data to detect outliers, the algorithm will assign an anomaly score to the new data points. These scores will be calculated based on the ensemble trees we built during model training.
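In scikit-learn, which we will use below, these ensemble-based scores are exposed through predict and score_samples. Here is a minimal sketch on synthetic data (not the credit card dataset):

import numpy as np
from sklearn.ensemble import IsolationForest

# Fit on a synthetic, two-dimensional training cloud
rng = np.random.RandomState(42)
X_train_demo = rng.normal(size=(1000, 2))
X_new = np.array([[0.1, -0.2],   # inside the training cloud
                  [6.0, 6.0]])   # far outside the training cloud

iforest = IsolationForest(random_state=42).fit(X_train_demo)
print(iforest.predict(X_new))        # expected: [ 1 -1] (1 = inlier, -1 = outlier)
print(iforest.score_samples(X_new))  # lower (more negative) = more anomalous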

Exemplary partitioning process of an isolation tree (5 Steps)

So how does this process work when our dataset involves multiple features? For multivariate anomaly detection, the partitioning process remains almost the same. The significant difference is that, before each split, the algorithm selects a random feature along which the partitioning will occur. Consequently, multivariate isolation forests split the data along multiple dimensions (features).
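The sketch below illustrates a single multivariate partitioning step under these assumptions: pick a random feature, then a random split value within that feature's range.

import numpy as np

def random_partition(X, rng):
    # One split step of an isolation tree: pick a random feature,
    # then a random cut point between that feature's min and max
    feature = rng.integers(X.shape[1])
    lo, hi = X[:, feature].min(), X[:, feature].max()
    split_value = rng.uniform(lo, hi)
    left = X[X[:, feature] < split_value]
    right = X[X[:, feature] >= split_value]
    return feature, split_value, left, right

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))  # ten points, three features
feature, split_value, left, right = random_partition(X, rng)
print(f'Split on feature {feature} at {split_value:.2f}: {len(left)} left, {len(right)} right')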

Credit Card Fraud Detection using Isolation Forests

Monitoring transactions has become a crucial task for financial institutions. In 2019 alone, more than 271,000 cases of credit card theft were reported in the U.S., causing billions of dollars in losses and making credit card fraud one of the most common types of identity theft. The vast majority of fraud cases are attributable to organized crime, which often specializes in this particular crime.

Anything that deviates from the customer’s normal payment behavior can make a transaction suspicious, including an unusual location, time, or country in which the customer conducted the transaction. Credit card providers use similar anomaly detection systems to monitor their customers’ transactions and look for potential fraud attempts. They can halt the transaction and inform their customer as soon as they detect a fraud attempt. In the following, we train an Isolation Forest algorithm for credit card fraud detection using Python.

Now that we have established the context for our machine learning problem, we are ready to begin with the implementation.

Each use of a credit card creates a new data point to review.

The code is available on the GitHub repository.

Prerequisites

Before starting the coding part, make sure that you have set up your Python 3 environment and required packages. If you don’t have an environment set up yet, consider the Anaconda Python environment. To set it up, you can follow the steps in this tutorial.

Also, make sure you install all required packages. In this tutorial, we will be working with the following standard packages: NumPy, pandas, and Matplotlib.

In addition, we will be using the machine learning library Scikit-learn and Seaborn for visualization.

You can install packages using console commands:

  • pip install <package name>
  • conda install <package name> (if you are using the Anaconda package manager)

Dataset: Credit Card Transactions

In the following, we will work with a public dataset containing anonymized credit card transactions made by European cardholders in September 2013. You can download the dataset from Kaggle.com.

The dataset contains 28 features (V1–V28) obtained from the source data using Principal Component Analysis (PCA). In addition, the data includes the time elapsed since the first transaction (Time) and the transaction amount (Amount).

Transactions are labeled fraudulent or genuine, with 492 fraudulent cases out of 284,807 transactions. The positive class (frauds) accounts for only 0.172% of all credit card transactions, so the classes are highly unbalanced.
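Once the data is loaded (see Step #1 below), a quick sanity check confirms this imbalance:

# Quick sanity check of the class imbalance (run after loading the data in Step #1)
print(df['Class'].value_counts())   # 0: 284315, 1: 492
print(f"{df['Class'].mean():.3%}")  # ~0.173% fraud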

Step #1: Load the Data

We begin by setting up imports and loading the data into our Python project. Then we’ll quickly verify that the dataset looks as expected.

import numpy as np 
import pandas as pd 
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neighbors import LocalOutlierFactor, KNeighborsClassifier
from sklearn.metrics import confusion_matrix, classification_report, f1_score
from sklearn.metrics import precision_recall_fscore_support as score

# The data can be downloaded from Kaggle.com: https://www.kaggle.com/mlg-ulb/creditcardfraud?select=creditcard.csv
path = 'data/credit-card-transactions/'
df = pd.read_csv(f'{path}creditcard.csv')
df_base = df  # base dataframe that we will work with throughout the tutorial
df
Input data for the prediction model: credit card transactions

If everything looks as expected, we can continue.

Step #2: Data Exploration

We will create some histograms that visualize the distribution of the different features. In this way, we develop a better understanding of the data.

2.1 Features

First, we will create a series of frequency histograms for our dataset’s features (V1–V28). We will look at Class, Time, and Amount separately later, so we drop them for now.

# create histograms for all features
df_hist = df_base.drop(['Time', 'Amount', 'Class'], axis=1)
df_hist.hist(figsize=(20,20), bins = 50, color = "c", edgecolor='black')
plt.show()
Feature frequency distributions of the credit card data

Next, we will look at the correlation between the 28 features. We expect the features to be uncorrelated due to the use of PCA. Let’s verify that by creating a heatmap on their correlation values.

# feature correlation
f_cor = df_hist.corr()
sns.heatmap(f_cor)

As expected, our features are uncorrelated.
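If you prefer a numeric confirmation over eyeballing the heatmap, the following sketch checks the largest absolute off-diagonal correlation:

# Largest absolute off-diagonal correlation - should be close to zero for PCA components
cor_abs = f_cor.abs()
np.fill_diagonal(cor_abs.values, 0)
print(cor_abs.max().max())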

2.2 Class Labels

Next, let’s print an overview of the class labels to understand better how balanced the two classes are.

# Plot the balance of class labels
fig1, ax1 = plt.subplots(figsize=(14, 7))
plt.pie(df['Class'].value_counts(), explode=[0, 0.1], labels=[0, 1], autopct='%1.2f%%', shadow=True, startangle=45)
plt.show()
Balance of class labels in the credit card fraud dataset

We see that the data set is highly unbalanced. While this would constitute a problem for traditional classification techniques, it is a predestined use case for outlier detection algorithms like the Isolation Forest.

2.3 Time and Amount

Finally, we will create some plots to gain insights into time and amount. Let’s first have a look at the time variable.

# Plot distribution of the Time variable, which contains transactions covering two days
fig, ax = plt.subplots(nrows=1, ncols=2, sharex=True, figsize=(14, 7))
sns.histplot(data=df_base[df_base['Class'] == 0], x='Time', kde=True, ax=ax[0])
sns.histplot(data=df_base[df_base['Class'] == 1], x='Time', kde=True, ax=ax[1])
plt.show()
Histogram of credit card transactions over time

Our dataset covers a time frame of two days, which the distribution graph reflects well. We can see that most transactions happen during the day, which is plausible.

Next, let’s examine the correlation between transaction size and fraud cases. To do this, we create a scatterplot that distinguishes between the two classes.

# Plot transaction amount against time
fig, ax = plt.subplots(figsize=(20, 7))
ax.set(xlim=(0, 1500))
sns.scatterplot(data=df_base[df_base['Class'] == 0][::15], x='Amount', y='Time', color="#BECEE9", alpha=.5, ax=ax)
sns.scatterplot(data=df_base[df_base['Class'] == 1], x='Amount', y='Time', color="#EF1B1B", zorder=100, ax=ax)
plt.legend(['no fraud', 'fraud'], loc='lower right')
fig.suptitle('Transaction Amount over Time split by Class')
Transaction amount over time, split by class

The scatterplot suggests that fraudulent transactions tend to involve relatively small amounts.
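If you want to quantify this, the sketch below prints the linear correlation between amount and class. Keep in mind that a single Pearson coefficient can be close to zero even when the plot shows a visible pattern:

# Linear correlation between transaction amount and the class label
print(df_base['Amount'].corr(df_base['Class']))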

Step #3: Preprocessing

Now that we have a rough idea of the data, we will prepare it for training the model. For the training of the isolation forest, we drop the class label from the base dataset and then divide the data into separate datasets for training (70%) and testing (30%). There is no need to normalize or standardize the data when using a tree-based algorithm.

We will use all features from the dataset. So our model will be a multivariate anomaly detection model.

# Separate the classes from the train set
df_classes = df_base['Class']
df_train = df_base.drop(['Class'], axis=1)

# split the data into train and test 
X_train, X_test, y_train, y_test = train_test_split(df_train, df_classes, test_size=0.30, random_state=42)

Step #4: Model Training

Once we have prepared the data, it’s time to start training the Isolation Forest. However, to compare the performance of our model with other algorithms, we will train several different models. In total, we will prepare and compare the following five outlier detection models:

  • Isolation Forest (default)
  • Isolation Forest (hypertuned)
  • Local Outlier Factor (default)
  • K-Nearest Neighbors (default)
  • K-Nearest Neighbors (hypertuned)

For hyperparameter tuning of the models, we use Grid Search.

4.1 Train an Isolation Forest

4.1.1 Isolation Forest (baseline)

# train the model on the nominal train set
model_isf = IsolationForest().fit(X_train)

def measure_performance(model, X_test, y_true, map_labels):
    # predict on the test set
    df_pred_test = X_test.copy()
    df_pred_test['Pred'] = model.predict(X_test)
    if map_labels:
        # map the outlier detector output (1 = inlier, -1 = outlier) to class labels (0/1)
        df_pred_test['Pred'] = df_pred_test['Pred'].map({1: 0, -1: 1})
    y_pred = df_pred_test['Pred']

    # plot the confusion matrix (rows = actual, columns = predicted)
    matrix = confusion_matrix(y_true, y_pred)
    sns.heatmap(matrix,
                xticklabels=['Regular [0]', 'Fraud [1]'], 
                yticklabels=['Regular [0]', 'Fraud [1]'], 
                annot=True, fmt="d", linewidths=.5, cmap="YlGnBu")
    plt.xlabel('Predicted')
    plt.ylabel('Actual')
    
    print(classification_report(y_true, y_pred))
    
    # precision_recall_fscore_support returns (precision, recall, f1, support)
    model_score = score(y_true, y_pred, average='macro')
    print(f'f1_score: {np.round(model_score[2]*100, 2)}%')
    
    return model_score

model_name = 'Isolation Forest (baseline)'
print(f'{model_name} model')

map_labels = True
model_score = measure_performance(model_isf, X_test, y_test, map_labels)

performance_df = pd.DataFrame([{'model_name': model_name, 
                                'precision': model_score[0], 
                                'recall': model_score[1], 
                                'f1_score': model_score[2]}])
Confusion matrix for the baseline Isolation Forest
4.1.2 Isolation Forest (Hypertuning)

Next, we will train another Isolation Forest Model using grid search hyperparameter tuning to test different parameter configurations. The code below will evaluate the different parameter configurations based on their f1_score and automatically choose the best-performing model.

# Define the parameter grid
n_estimators = [50, 100]
max_features = [1.0, 5, 10]
bootstrap = [True]
param_grid = dict(n_estimators=n_estimators, max_features=max_features, bootstrap=bootstrap)

# Estimate the contamination rate (share of fraud cases) from the training labels
contamination_rate = y_train.mean()

# Base estimator for the grid search
model_isf = IsolationForest(contamination=contamination_rate, n_jobs=-1)

# Define an f1 scorer that maps the isolation forest output (1/-1) to class labels (0/1)
def f1_scorer(estimator, X, y):
    y_pred = pd.Series(estimator.predict(X)).map({1: 0, -1: 1})
    return f1_score(y, y_pred, average='macro')

grid = GridSearchCV(estimator=model_isf, param_grid=param_grid, cv=3, scoring=f1_scorer)
grid_results = grid.fit(X=X_train, y=y_train)

# Summarize the results in a readable format
print("Best: {0}, using {1}".format(grid_results.best_score_, grid_results.best_params_))
results_df = pd.DataFrame(grid_results.cv_results_)

# Evaluate model performance
model_name = 'Isolation Forest (tuned)'
print(f'{model_name} model')

best_model = grid_results.best_estimator_
map_labels = True  # maps 1 to 0 and -1 to 1 - required for outlier detectors, not for classifiers
model_score = measure_performance(best_model, X_test, y_test, map_labels)
performance_df = pd.concat([performance_df,
                            pd.DataFrame([{'model_name': model_name, 
                                           'precision': model_score[0], 
                                           'recall': model_score[1], 
                                           'f1_score': model_score[2]}])], ignore_index=True)
results_df
Confusion matrix for the tuned Isolation Forest

4.2 LOF Model

We train the Local Outlier Factor Model using the same training data and evaluation procedure.

# Train a local outlier factor model
model_lof = LocalOutlierFactor(n_neighbors=3, contamination=contamination_rate, novelty=True)
model_lof.fit(X_train)

# Evaluate model performance
model_name = 'LOF (baseline)'
print(f'{model_name} model')

map_labels = True 
model_score = measure_performance(model_lof, X_test, y_test, map_labels)
performance_df = pd.concat([performance_df,
                            pd.DataFrame([{'model_name': model_name, 
                                           'precision': model_score[0], 
                                           'recall': model_score[1], 
                                           'f1_score': model_score[2]}])], ignore_index=True)

4.3 KNN Model

Below we add two K-Nearest Neighbors models to our list. We use the default hyperparameter configuration for the first model. The second model will most likely perform better because we optimize its hyperparameters using the grid search technique.

4.3.1 KNN (default)

First, we train the default model using the same training data.

# Train a KNN model with default parameters
model_knn = KNeighborsClassifier(n_neighbors=5)
model_knn.fit(X=X_train, y=y_train)

# Evaluate model performance
model_name = 'KNN (baseline)'
print(f'{model_name} model')

map_labels = False  # classifiers already predict 0/1, so no label mapping is needed
model_score = measure_performance(model_knn, X_test, y_test, map_labels)
performance_df = pd.concat([performance_df,
                            pd.DataFrame([{'model_name': model_name, 
                                           'precision': model_score[0], 
                                           'recall': model_score[1], 
                                           'f1_score': model_score[2]}])], ignore_index=True)
Confusion matrix for the baseline KNN model
4.3.2 KNN (hypertuned)

Next, we will train a second KNN model that is slightly optimized using hyperparameter tuning. KNN models have only a few parameters. Therefore, we limit ourselves to optimizing the model for the number of neighboring points considered.

# Define the hyperparameter grid
n_neighbors = [1, 2, 3, 4, 5]
param_grid = dict(n_neighbors=n_neighbors)

# Build the grid search
model_knn = KNeighborsClassifier()
grid = GridSearchCV(estimator=model_knn, param_grid=param_grid, cv=5)
grid_results = grid.fit(X=X_train, y=y_train)

# Summarize the results in a readable format
print("Best: {0}, using {1}".format(grid_results.best_score_, grid_results.best_params_))
results_df = pd.DataFrame(grid_results.cv_results_)

# Evaluate model performance
model_name = 'KNN (tuned)'
print(f'{model_name} model')

best_model = grid_results.best_estimator_
map_labels = False  # classifiers already predict 0/1, so no label mapping is needed
model_score = measure_performance(best_model, X_test, y_test, map_labels)
performance_df = pd.concat([performance_df,
                            pd.DataFrame([{'model_name': model_name, 
                                           'precision': model_score[0], 
                                           'recall': model_score[1], 
                                           'f1_score': model_score[2]}])], ignore_index=True)
results_df
Confusion matrix for the tuned KNN model

Step #5: Measuring and Comparing Performance

Finally, we will compare the performance of our models with a bar chart that shows the f1_score, precision, and recall. If you want to learn more about classification performance, this tutorial discusses the different metrics in more detail.
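As a quick refresher, the sketch below shows how the three metrics relate to a confusion matrix; the labels are hypothetical and only serve to illustrate the formulas:

from sklearn.metrics import confusion_matrix

# Hypothetical 0/1 labels to illustrate the metrics; confusion_matrix(y_true, y_pred)
# returns [[TN, FP], [FN, TP]] for binary labels
y_true_demo = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred_demo = [0, 0, 0, 1, 0, 1, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_true_demo, y_pred_demo).ravel()
precision = tp / (tp + fp)  # how often a raised alarm is a real fraud
recall = tp / (tp + fn)     # how many of the real fraud cases we catch
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)  # 0.75 0.75 0.75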

print(performance_df)

performance_df = performance_df.sort_values('model_name')

fig, ax = plt.subplots(figsize=(12, 4))
tidy = performance_df.melt(id_vars='model_name').rename(columns=str.title)
sns.barplot(y='Model_Name', x='Value', hue='Variable', data=tidy, ax=ax, palette='nipy_spectral', linewidth=1, edgecolor="w")
plt.title('Model Outlier Detection Performance (Macro)')
Performance comparison between the different algorithms for credit card fraud detection

All three metrics play an important role in evaluating performance because, on the one hand, we want to capture as many fraud cases as possible, but we also don’t want to raise false alarms too frequently.

  • As we can see, the optimized Isolation Forest delivers the most balanced performance.
  • The default Isolation Forest has a high f1_score and detects many fraud cases but frequently raises false alarms.
  • The opposite is true for the KNN models: they detect only a few fraud cases, but when they do flag one, they are usually correct.
  • The default LOF model performs slightly worse than the other models; compared to the optimized Isolation Forest, it falls short on all three metrics.

Summary

This article has developed a multivariate anomaly detection model to spot fraudulent credit card transactions. You learned how to prepare the data for training and testing an isolation forest model and how to validate this model. Finally, we have shown that the Isolation Forest is a robust algorithm for anomaly detection that can outperform traditional nearest-neighbor techniques on this task.

I hope you enjoyed the article and can apply what you learned to your projects. Have a great day!

Author

  • Hi, I am Florian, a Zurich-based consultant for AI and Data. Since completing my Ph.D. in 2017, I have been working on the design and implementation of ML use cases in the Swiss financial sector. I started this blog in 2020 with the goal of sharing my experiences and creating a place where you can find key concepts of machine learning and materials to kick-start your own Python projects.
