This article introduces affinity propagation – an unsupervised clustering technique that stands out from other clustering approaches through its ability to determine the number of clusters in a dataset on its own. This tutorial shows how the affinity propagation model works by applying it to the cryptocurrency market. If you have followed recent movements in the crypto market, you may have observed that certain groups of coins follow a similar price pattern. Affinity propagation can help to identify such groups and aid the search for promising points to enter or exit the market. We run a cluster analysis on historical price data and group cryptos into clusters based on their past price fluctuations. Then we visualize the result on a 2D plane. The result is a crypto market map.
The rest of this article proceeds as follows: First, we look at the concepts behind the approach of market structuring with affinity propagation. Essential concepts include covariance, the graphical lasso, and affinity propagation itself. The second part is a hands-on Python tutorial, in which we apply affinity propagation clustering together with a sparse covariance estimate to analyze price time series data. Finally, we visualize the results in two and three dimensions.

What is Stock Market Clustering?
Clustering stock markets refers to grouping stocks based on their similarities or common characteristics. This can be done with various clustering algorithms, which analyze the data and assign each stock to a cluster based on its similarity to the other stocks in that cluster. In this article, we run a cluster analysis on historical time series data. This approach groups stocks into clusters based on their historical performance over a certain period of time.
Clustering stock market data can be useful for a variety of purposes, such as identifying patterns or trends in the data, comparing the performance of different stocks or sectors, spotting potential risks or correlations between them, and generating investment recommendations. However, keep in mind that clustering is just one tool among many for analyzing stock market data, and a range of factors should be considered when making investment decisions.
The Problem with Prototype-based Clustering
Clustering is an unsupervised learning technique that groups similar objects into clusters and separates them from dissimilar ones. One of the most popular clustering techniques is k-means. K-means belongs to the so-called prototype-based clustering techniques, which divide data points into a predefined number of groups (in the case of k-means, groups of roughly equal variance).
The prototype-based clustering approach works great if the number of clusters in a dataset is known and the clusters have similar dispersion. However, when we deal with real-world problems, we often encounter more complex data for which the optimal number of clusters is unknown and difficult or even impossible to guess. In such a case, affinity propagation has a significant advantage because it can automatically estimate the number of clusters.
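To make the contrast concrete, here is a minimal sketch that compares both algorithms on synthetic data (the dataset, parameter values, and random seeds are illustrative assumptions and not part of the crypto workflow below): k-means requires the number of clusters as an input, whereas affinity propagation estimates it from the data (the estimate depends on its preference and damping settings).

import numpy as np
from sklearn.cluster import KMeans, AffinityPropagation
from sklearn.datasets import make_blobs

# Synthetic toy data with four blobs (illustrative only)
X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.7, random_state=42)

# k-means needs the number of clusters up front
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42).fit(X)

# Affinity propagation estimates the number of clusters itself
affprop = AffinityPropagation(random_state=42).fit(X)

print("k-means clusters:             ", len(np.unique(kmeans.labels_)))
print("affinity propagation clusters:", len(np.unique(affprop.labels_)))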
Affinity Propagation: What it is and How it Works
The idea of affinity propagation is to identify clusters by measuring the similarity of data points relative to one another. The algorithm chooses data points as cluster centers that best represent other data points near them.
We can imagine the process of identifying these representative data points as an election. Each data point (i) is a voter who casts votes and at the same time a candidate (k) who can receive votes from other voters. Votes are a measure of the similarity between data points. A voter who gives many votes to a candidate expresses that the candidate is similar to it and is therefore well suited to represent it as a cluster center. The voting process continues until the algorithm reaches a consensus and settles on a set of cluster centers, the so-called exemplars.

The clustering process involves several iterative steps and works with a set of matrices (a small NumPy sketch after the list illustrates how they interact):
- The similarity matrix quantifies how similar each pair of data points is; its diagonal, the so-called preference, expresses how suitable each point is to act as a cluster center.
- The responsibility matrix collects evidence of how well-suited each candidate is to serve as the cluster center of a given data point.
- The availability matrix collects evidence of how appropriate it would be for a data point to choose a candidate as its cluster center, given the support that candidate receives from other points.
- The criterion matrix sums responsibilities and availabilities and defines the clusters: data points that share the same best-scoring candidate in the criterion matrix belong to the same cluster.
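The following toy NumPy implementation of the message-passing updates illustrates how these matrices interact. It is a simplified sketch for intuition only; the damping factor, iteration count, and the small example similarity matrix are assumptions, and the hands-on part of this tutorial relies on scikit-learn's implementation instead.

import numpy as np

def affinity_propagation_sketch(S, damping=0.9, iterations=200):
    # S is the similarity matrix; its diagonal (the preference) steers how many exemplars emerge
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibility: how well suited candidate k is as exemplar for point i
    A = np.zeros((n, n))  # availability: how appropriate it is for point i to choose candidate k

    for _ in range(iterations):
        # Responsibility update: r(i,k) = s(i,k) - max over k' != k of [a(i,k') + s(i,k')]
        AS = A + S
        idx_max = np.argmax(AS, axis=1)
        first_max = AS[np.arange(n), idx_max]
        AS[np.arange(n), idx_max] = -np.inf
        second_max = AS.max(axis=1)
        R_new = S - first_max[:, None]
        R_new[np.arange(n), idx_max] = S[np.arange(n), idx_max] - second_max
        R = damping * R + (1 - damping) * R_new

        # Availability update: a(i,k) = min(0, r(k,k) + sum over other points of max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        A_new = Rp.sum(axis=0)[None, :] - Rp
        diag_A = A_new.diagonal().copy()   # a(k,k) = sum over i' != k of max(0, r(i',k))
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, diag_A)
        A = damping * A + (1 - damping) * A_new

    # Criterion matrix A + R: points that share the same best-scoring column share an exemplar
    return np.argmax(A + R, axis=1)

# Toy example: two tight pairs of points, similarity = negative squared distance
X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.1], [5.2, 4.9]])
S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
np.fill_diagonal(S, np.median(S))  # set the preference to the median similarity
print(affinity_propagation_sketch(S))  # each tight pair should end up with its own exemplar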

Time Series Clustering using Affinity Propagation – Visualizing Cryptocurrency Market Structures in Python
Now that we have an overview of affinity propagation, we can implement it in Python. We aim to analyze the crypto market structure and create a visual representation of price similarity. We begin by defining a portfolio of cryptocurrencies and downloading their historical price quotes from Coinmarketcap. Then we verify that the data has loaded successfully by visualizing the time series on separate line charts. We also prepare and clean the data so that the clustering algorithm can interpret it. Next, we cluster the cryptocurrencies of our portfolio into groups with similar price movements using affinity propagation. We won't set the number of clusters in advance but let the algorithm determine it. Finally, we estimate the covariance structure between the cryptocurrencies, arrange them on a 2D map by cluster, and create a network overlay based on their partial correlations.
The Python code for this tutorial is available in the relataly repository on GitHub.
Prerequisites
Before beginning the coding part, ensure that you have set up your Python 3 environment and required packages. Consider Anaconda if you don’t have a Python environment set up yet. To set it up, you can follow the steps in this tutorial. Also, make sure you install all required packages. In this tutorial, we will be working with the following standard packages: pandas, NumPy, Matplotlib, Seaborn, and scikit-learn.
Please also make sure you have the cryptocmd package installed. It provides the CmcScraper class, which we use to download past crypto prices from Coinmarketcap.
You can install these packages using console commands (concrete commands for this tutorial follow the list):
- pip install <package name>
- conda install <package name> (if you are using the Anaconda package manager)
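For this tutorial, the following console command should install everything at once. The package names are assumptions derived from the imports used later in the code (in particular, the CmcScraper class is imported from the cryptocmd package), so double-check them against your environment:

pip install cryptocmd pandas numpy matplotlib seaborn scikit-learn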
Step #1 Load the Crypto Price Data
We start by loading historical crypto price data from Coinmarketcap. To download the data, we use CmcScraper from the cryptocmd library, which lets us collect Coinmarketcap data without signing up for the official API.
The download returns a DataFrame with daily price quotes (Open, Close, and a derived average) for cryptocurrencies between 2016 and today. You can use the dictionary (“symbol_dict”) to control which cryptos you want to include in the data. We limit the data used in the cluster analysis to the last 50 days, so the correlations reflect recent price developments rather than the entire history. You are of course free to specify a different period. In addition, instead of using absolute price values, we work with daily percentage fluctuations.
Loading the data can take several minutes, depending on how many cryptocurrencies we include in the request, so it makes sense not to download the data every time you run the code. The code below therefore stores the historical prices in a CSV file.
When you run the code below, the script checks whether the data already exists. If it does, it uses the data from the CSV file; otherwise, it loads a fresh copy of the data from Coinmarketcap.
# A tutorial for this file is available at www.relataly.com
from cryptocmd import CmcScraper
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn import cluster, covariance, manifold

# This dictionary defines the coins that will be considered
symbol_dict = {
    'BTC': 'Bitcoin', 'ETH': 'Ethereum', 'XRP': 'Ripple', 'ADA': 'Cardano',
    'KMD': 'Komodo', 'BNB': 'Binance Coin', 'DOGE': 'Doge Coin', 'LTC': 'Litecoin',
    'USDT': 'Tether', 'ZRX': 'Zer0', 'BAT': 'Battoken', 'UNI': 'DOT',
    'PSG': 'PSG Token', 'ACM': 'ACM Token', 'RSR': 'AS Rom Token',
    'JUV': 'Juventus Turin Token', 'ATM': 'Atletico Madrid Token', 'ATOM': 'ATOM',
    'SOL': 'Solana', 'MATIC': 'Polygon', 'LINK': 'Link', 'ETC': 'Ethereum Classic',
    'AVAX': 'Avalance', 'DCR': 'Decred', 'WAVES': 'WAVES', 'VET': 'Vechain',
    'ARK': 'ARK', 'BCH': 'Bitcoin Cash', 'ICP': 'Internet Computer', 'DGB': 'Digibyte',
    'BTT': 'BitTorrent', 'CEL': 'Celsius', 'SNX': 'Synthetix', 'ENJ': 'Enjin',
    'ZIL': 'Zilliqa', 'CHZ': 'Chilliz', 'THETA': 'Theta', 'XLM': 'Stellar Lumen',
    'SYS': 'Sys Coin', 'LRC': 'Loopring', 'RLC': 'LRC', 'EOS': 'EOS', 'NEO': 'NEO',
    'MIOTA': 'IOTA', 'CAKE': 'Cake Defi', 'BLZ': 'BLZ', 'XMR': 'Monero',
    'FORTH': 'Ampleforth'
}

# Download historic crypto prices via CmcScraper
def load_fresh_data_and_save_to_disc(symbol_dict, save_path):
    # Initialize the coin symbol and name lists
    symbols, names = np.array(sorted(symbol_dict.items())).T
    for symbol in symbols:
        # Initialize the scraper without a time interval (full history)
        scraper = CmcScraper(symbol)

        # Get the price history as a pandas DataFrame
        df_coin_prices = scraper.get_dataframe()
        df = pd.DataFrame()
        print(f'fetching prices for {symbol}')
        df[symbol + '_Open'] = df_coin_prices['Open']
        df[symbol + '_Close'] = df_coin_prices['Close']
        df[symbol + '_Avg'] = (df_coin_prices['Close'] + df_coin_prices['Open']) / 2

        # Daily price fluctuations in percent
        df[symbol + '_p'] = (df_coin_prices['Open'] - df_coin_prices['Close']) / df_coin_prices['Open']

        if symbol == symbols[0]:
            # Create a new DataFrame for the first cryptocurrency in the list
            df_crypto = df.copy()
        else:
            # Merge the new price data with the existing DataFrame
            df_crypto = pd.merge(left=df_crypto, right=df, how="outer",
                                 left_index=True, right_index=True)

    # Keep only the columns with the daily percentage fluctuations
    filter_columns = [s for s in df_crypto.columns if '_p' in s]
    X_df_filtered = df_crypto[filter_columns].copy()
    X_df_filtered.to_csv(save_path + 'historical_crypto_prices.csv')
    return names, symbols, X_df_filtered

save_path = ''  # folder in which the CSV file is stored

# Set new_data to True if you want a fresh copy of the data.
# If set to False, the script reuses the CSV file on disk when available.
new_data = True
if new_data == False:
    try:
        print('loading from disk')
        X_df_filtered = pd.read_csv(save_path + 'historical_crypto_prices.csv')
        if 'Unnamed: 0' in X_df_filtered.columns:
            X_df_filtered = X_df_filtered.drop(['Unnamed: 0'], axis=1)
        symbols, names = np.array(sorted(symbol_dict.items())).T
        print(list(X_df_filtered.columns))
    except FileNotFoundError:
        print('no existing price data found - loading fresh data from coinmarketcap and saving them to disk')
        names, symbols, X_df_filtered = load_fresh_data_and_save_to_disc(symbol_dict, save_path)
        print(list(symbols))
else:
    print('loading fresh data from coinmarketcap and saving them to disk')
    names, symbols, X_df_filtered = load_fresh_data_and_save_to_disc(symbol_dict, save_path)
    print(list(symbols))

# Limit the price data to the last t days
t = 50  # in days
X_df_filtered = X_df_filtered[:t]
X_df_filtered.head()
      ACM_p     ADA_p     ARK_p     ATM_p    ATOM_p    AVAX_p     BAT_p     BCH_p     BLZ_p     BNB_p  ...   THETA_p     UNI_p    USDT_p     VET_p   WAVES_p     XLM_p     XMR_p     XRP_p     ZIL_p     ZRX_p
0  0.031987 -0.037645 -0.005702  0.030928 -0.005897 -0.012404 -0.012262 -0.022529  0.008072 -0.007111  ... -0.021994 -0.023758 -0.000103 -0.021024 -0.015416 -0.004096 -0.022988 -0.027397 -0.016659 -0.012255
1  0.028192  0.065034  0.122306  0.010310  0.093558  0.106811  0.082863  0.075567  0.062105  0.054733  ...  0.067264  0.081040  0.000136  0.077203  0.092987  0.078562  0.111519  0.071696  0.076484  0.085094
2  0.040771  0.016097 -0.133345  0.018963  0.011304 -0.033328 -0.007616  0.011458 -0.019993  0.005134  ... -0.005104 -0.024190  0.000077  0.002218  0.008920  0.004139 -0.031822 -0.012107 -0.003906 -0.021170
3 -0.027698  0.005129 -0.031516 -0.002639  0.022235 -0.008117  0.003969  0.019119  0.015403  0.005920  ...  0.007992  0.027203  0.000003  0.000701  0.010739  0.005324 -0.007914  0.007168  0.004556 -0.003786
4 -0.021129 -0.019053  0.003273 -0.008121  0.002883 -0.004927  0.002548 -0.000599  0.028492 -0.012181  ...  0.000198 -0.025817 -0.000047 -0.002800 -0.051515 -0.004861  0.015134 -0.000596 -0.010343  0.004530
The data looks good, so let’s continue.
Step #2 Plotting Crypto Price Charts
Now that the data is available, we can visualize it in various line graphs. The visualization helps us better understand what kind of data we are dealing with and check if the download was successful.
# Create price charts for all cryptocurrencies
list_length = X_df_filtered.shape[1]
ncols = 10
nrows = int(round(list_length / ncols, 0))
height = list_length / 3 if list_length > 30 else 4
fig, axs = plt.subplots(nrows=nrows, ncols=ncols, sharex=True, sharey=True, figsize=(20, height))
for i, ax in enumerate(fig.axes):
    if i < list_length:
        sns.lineplot(data=X_df_filtered, x=X_df_filtered.index, y=X_df_filtered.iloc[:, i], ax=ax)
        ax.set_title(X_df_filtered.columns[i])
plt.show()

We can see the line plots for all cryptocurrencies, and everything looks as expected.
Step #3 Clustering Cryptocurrencies using Affinity Propagation
Next, we must prepare the data and run the affinity propagation algorithm. For some cryptocurrencies, the data may contain NaN values. Because clustering is sensitive to missing values, we must ensure good data quality and drop incomplete rows. In addition, the Python code below converts the DataFrame into a NumPy array and standardizes the daily returns. The days remain the rows and the crypto assets the columns, which is the shape the graphical lasso expects for estimating the covariance between assets.
Running the code below returns a dictionary of clusters with the cryptocurrencies assigned to them by the affinity propagation algorithm.
# Drop rows that contain NaN values
X_df = pd.DataFrame(np.array(X_df_filtered)).dropna()

# Standardize the daily returns (rows are days, columns are crypto assets)
X = X_df.copy()
X /= X.std(axis=0)
X = np.array(X)

# Define an edge model that estimates a sparse covariance structure (graphical lasso)
edge_model = covariance.GraphicalLassoCV()

# Fit the edge model to the standardized return series
edge_model.fit(X)

# Group cryptos into clusters using affinity propagation
# The number of clusters is determined by the algorithm
cluster_centers_indices, labels = cluster.affinity_propagation(edge_model.covariance_, random_state=1)

cluster_dict = {}
n_labels = labels.max()
print(f"{n_labels + 1} Clusters")
for i in range(n_labels + 1):
    clusters = ', '.join(names[labels == i])
    print('Cluster %i: %s' % ((i + 1), clusters))
    cluster_dict[i] = clusters
10 Clusters
Cluster 1: Binance Coin, Cake Defi
Cluster 2: Bitcoin Cash, Bitcoin, BitTorrent, Decred, EOS, Ethereum Classic, Ethereum, Ampleforth, Komodo, Solana, Sys Coin, DOT
Cluster 3: Celsius
Cluster 4: Doge Coin
Cluster 5: Cardano, ATOM, Avalance, Enjin, Internet Computer, Link, Loopring, Polygon, IOTA, NEO, Synthetix, Theta, Vechain
Cluster 6: Litecoin
Cluster 7: ACM Token, Atletico Madrid Token, Chilliz, Juventus Turin Token, PSG Token
Cluster 8: LRC
Cluster 9: Tether
Cluster 10: ARK, Battoken, BLZ, Digibyte, AS Rom Token, WAVES, Stellar Lumen, Monero, Ripple, Zilliqa, Zer0
We can see that the algorithm has identified 10 different clusters in the data, including a couple of clusters with only a single member. You will most likely encounter different results depending on when you run the code.
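If you want to nudge the algorithm toward fewer or more clusters, you can pass a preference value to cluster.affinity_propagation. The snippet below is an optional sketch (the scaling factors are arbitrary choices for illustration, not part of the original workflow); lower preference values generally produce fewer exemplars and therefore fewer clusters.

# Optional: explore how the preference setting influences the number of clusters
cov = edge_model.covariance_
for pref_scale in [0.5, 1.0, 2.0]:
    preference = np.median(cov) * pref_scale
    _, trial_labels = cluster.affinity_propagation(cov, preference=preference, random_state=1)
    print(f"preference = median * {pref_scale}: {trial_labels.max() + 1} clusters")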
Step #4 Create a 2D Positioning Model based on the Graph Structure
In addition to the clusters, we want to show the covariance structure between cryptocurrencies on our crypto market map. For this, we need a graph-like structure that contains, for each crypto pair, the strength of their partial correlation, as well as the position of each cryptocurrency on the map.
In addition, we use a node position model (a locally linear embedding) that computes the relative positions of the cryptocurrencies on a 2D plane from their standardized return series. The positions are only relative, so the absolute axes have no particular meaning.
# Create a node position model that finds the best position of the cryptos on a 2D plane
# The number of components defines the dimensions in which the nodes will be positioned
node_position_model = manifold.LocallyLinearEmbedding(n_components=2, eigen_solver='dense', n_neighbors=20)
embedding = node_position_model.fit_transform(X.T).T

# The result is x and y coordinates for all cryptocurrencies
pd.DataFrame(embedding)

# Create an edge model that represents the partial correlations between the nodes
partial_correlations = edge_model.precision_.copy()
d = 1 / np.sqrt(np.diag(partial_correlations))
partial_correlations *= d
partial_correlations *= d[:, np.newaxis]

# Only consider partial correlations above a specific threshold (0.02)
non_zero = (np.abs(np.triu(partial_correlations, k=1)) > 0.02)

# Convert the positioning model into a DataFrame
data = pd.DataFrame.from_dict({"embedding_x": embedding[0], "embedding_y": embedding[1]})

# Add the cluster labels to the 2D positioning model
data["labels"] = labels
print(data.shape)
data.head()
(48, 3)

   embedding_x  embedding_y  labels
0     0.400590    -0.136473       6
1    -0.081908    -0.086039       4
2    -0.033982    -0.038526       9
3     0.416745     0.076849       6
4    -0.041938     0.031966       4
The next step is to create a graph of the partial correlations.
Step #5 Visualize the Crypto Market Structure
Our goal is to visualize differences in the strength of association between crypto pairs by varying the connection strengths. We calculate the line strength by normalizing the partial correlations of the crypto pairs. In addition, we visualize the distribution of these values.
# Create an array with the segments for connecting the data points
start_idx, end_idx = np.where(non_zero)
segments = [[np.array([embedding[:, start], embedding[:, stop]]).T, start, stop]
            for start, stop in zip(start_idx, end_idx)]

# Create a normalized representation of the partial correlations between cryptocurrencies
# We later use it to visualize the strength of the connections
pc = np.abs(partial_correlations[non_zero])
normalized = (pc - min(pc)) / (max(pc) - min(pc))

# Plot the distribution of the partial correlations between the cryptocurrencies
sns.histplot(pc)

The histogram shows the distribution of the absolute partial correlations between the crypto pairs; most connections are comparatively weak, with only a few strongly connected pairs.
Finally, it is time to map the cryptocurrencies onto a 2D plane. To do this, we first draw the cryptocurrencies at their relative positions with a scatterplot. We color the points by cluster so that points in the same cluster share the same color. Subsequently, we connect the points using the data from the edge model. The partial correlation between two cryptocurrencies determines the strength of their connection.
We also define the color of the connections as follows.
- The map only shows connections whose partial correlation exceeds the threshold of 0.02.
- Connections with a normalized partial correlation greater than 0.5 are colored red.
- Otherwise, connections between points within a cluster are shown in the cluster’s color.
- Connections between points of different clusters are colored grey and drawn dashed.
Last but not least, we add the labels of the cryptocurrencies.
# Visualization
plt.figure(1, facecolor='w', figsize=(20, 8))
plt.clf()
ax = plt.axes([0., 0., 1., 1.])

# Plot the nodes using the coordinates of our embedding
sc = sns.scatterplot(data=data, x="embedding_x", y="embedding_y", zorder=1,
                     s=350 * d ** 2, c=labels, cmap=plt.cm.nipy_spectral,
                     alpha=.9, palette="muted")

# Plot the edges between the nodes (scatter points)
line_strength = 3.2
for index, ((x, y), start, stop) in enumerate(segments):
    norm_partial_correlation = normalized[index]
    if list(data.iloc[[start]]['labels'])[0] == list(data.iloc[[stop]]['labels'])[0]:
        # Edge within a cluster: red if strong, otherwise drawn in the cluster's color
        if norm_partial_correlation > 0.5:
            color = 'red'; linestyle = 'solid'
        else:
            color = plt.cm.nipy_spectral(list(data.iloc[[start]]['labels'])[0] / float(n_labels)); linestyle = 'solid'
    else:
        # Edge between clusters: red if strong, otherwise grey and dashed
        if norm_partial_correlation > 0.5:
            color = 'red'; linestyle = 'solid'
        else:
            color = 'grey'; linestyle = 'dashed'
    plt.plot(x, y, alpha=.4, zorder=0, linewidth=normalized[index] * line_strength,
             color=color, linestyle=linestyle)

# Label the nodes and position the labels to avoid overlap with other labels
for index, (name, label, (x, y)) in enumerate(zip(names, labels, embedding.T)):
    dx = x - embedding[0]
    dy = y - embedding[1]
    dy[index], dx[index] = 1, 1
    this_dx = dx[np.argmin(np.abs(dy))]
    this_dy = dy[np.argmin(np.abs(dx))]
    if this_dx > 0:
        horizontalalignment = 'left'
        x = x + .005
    else:
        horizontalalignment = 'right'
        x = x - .004
    if this_dy > 0:
        verticalalignment = 'bottom'
        y = y + .01
    else:
        verticalalignment = 'top'
        y = y + .01
    plt.text(x, y, name, size=10,
             horizontalalignment=horizontalalignment,
             verticalalignment=verticalalignment)
    # Label customization options
    # color = plt.cm.nipy_spectral(label / float(n_labels))
    # bbox=dict(facecolor=plt.cm.nipy_spectral(label / float(n_labels)), edgecolor="w", alpha=.2)

plt.axis('off')
plt.show()

Note that you will likely see a different map when you run the code on your machine. Differences result from changes in market prices and covariance, which lead to different graph structures.
Let’s see what the crypto market map tells us.
Interpreting the Crypto Market Map
The 2D crypto market map tells us several things:
- Most cryptos fall into the light green and dark green clusters corresponding to different types of crypto (Decentralized Finance Coins, NFT/Metaverse Coins).
- There is a significant covariance between large-cap players in the crypto space, such as Cardano and Loopring or Ethereum and Bitcoin, which is plausible considering recent price movements. Some results are surprising, for example, the partial correlation between NEO and Ethereum Classic.
- Some clusters are isolated and contain only a single member, for example, Tether, Komodo, the AC Milan token, the WAVES token, and Dogecoin. The reason is that the prices of these coins/tokens have developed independently of the broader market.
Tether is a stablecoin whose price barely moves. It therefore differs strongly from the other cryptocurrencies on our map.
- Komodo has been trading sideways without following the general market trend.
And the ACM (AC Milan) token is a soccer token that has recently outperformed the market.
- Soccer tokens are colored in dark blue. These tokens’ prices correlate with how the soccer clubs performed during the current season. It, therefore, makes perfect sense that these tokens are grouped into a cluster. An exception is the AC Milan token, which recently performed better than the other soccer tokens.
Step #6 Creating a 3D Representation
Instead of a 2D representation of the data points, we can also use a 3D node positioning model. For this purpose, the node positioning model embeds the standardized return series in three dimensions instead of two.
# Find the best position of the cryptos in a 3D space
node_position_model = manifold.LocallyLinearEmbedding(n_components=3, eigen_solver='dense', n_neighbors=20)
embedding = node_position_model.fit_transform(X.T).T

# The result is x, y, and z coordinates for all cryptocurrencies
pd.DataFrame(embedding)

# Recompute the normalized partial correlations between the nodes
partial_correlations = edge_model.precision_.copy()
d = 1 / np.sqrt(np.diag(partial_correlations))
partial_correlations *= d
partial_correlations *= d[:, np.newaxis]
non_zero = (np.abs(np.triu(partial_correlations, k=1)) > 0.02)

# Combine the 3D coordinates, cluster labels, and coin names in a DataFrame
data = pd.DataFrame.from_dict({"embedding_x": embedding[0],
                               "embedding_y": embedding[1],
                               "embedding_z": embedding[2]})
data["labels"] = labels
data["names"] = names

# Plot the nodes in 3D and annotate them with the coin names
fig = plt.figure(figsize=(20, 20))
ax = fig.add_subplot(projection='3d')
xs = data["embedding_x"]
ys = data["embedding_y"]
zs = data["embedding_z"]
sc = ax.scatter(xs, ys, zs, c=labels, s=100)
for i in range(len(data)):
    ax.text(xs[i], ys[i], zs[i], data["names"][i])
plt.legend(*sc.legend_elements(), bbox_to_anchor=(1.05, 1), loc=2)
plt.show()

Summary
Affinity propagation is a useful technique for clustering items when the optimal number of clusters is unknown. This article has shown how to apply affinity propagation in financial market analysis to structure the cryptocurrency market and identify groups of assets with similar price fluctuations. In the given example, we identified 10 groups of cryptocurrencies. As mentioned, we did not specify the number of clusters in advance and instead let the algorithm determine it from the data. We also saw that we can illustrate the market structure on 2D and 3D maps using a node positioning technique. You can apply the same method to analyze and cluster stock markets.
Such a market map can highlight complex price patterns among multiple financial assets. Once you have identified a set of clusters, you can take the analysis further and look at the individual groups. Individual assets that temporarily break out of their usual pattern sometimes indicate interesting investment opportunities. Often these outliers eventually return to the price pattern of their group, or they act as forerunners of their group, indicating broader market movements.
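As a hedged illustration of this idea (not part of the original workflow), the sketch below scores how strongly each asset's returns still track the average returns of its cluster peers over the analysis window; unusually low scores would mark candidates for a closer look. It assumes that the columns of X_df_filtered are ordered like the names array, and the helper name cluster_breakout_scores is hypothetical.

import numpy as np
import pandas as pd

def cluster_breakout_scores(returns, labels, names):
    # Correlation of each asset's daily returns with the mean returns of its cluster peers;
    # low values suggest the asset has decoupled from its group.
    labels = np.asarray(labels)
    scores = {}
    for i, name in enumerate(names):
        peers = np.where((labels == labels[i]) & (np.arange(len(names)) != i))[0]
        if len(peers) == 0:
            continue  # singleton cluster: nothing to compare against
        cluster_mean = returns.iloc[:, peers].mean(axis=1)
        scores[name] = returns.iloc[:, i].corr(cluster_mean)
    return pd.Series(scores).sort_values()

# Hypothetical usage with the objects created earlier in this tutorial:
# print(cluster_breakout_scores(X_df_filtered, labels, names).head(10))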
I hope this article helped show you an exciting way to visualize financial assets. If you have any questions or comments, please let me know.
Sources and Further Reading
This article modifies some of the code from Scikit-learn and adapts it from the stock market to cryptocurrencies.
- Stefan Jansen (2020) Machine Learning for Algorithmic Trading: Predictive Models to Extract Signals from Market and Alternative Data for Systematic Trading Strategies with Python
- Aurélien Géron (2019) Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
- David Forsyth (2019) Applied Machine Learning, Springer
- Andriy Burkov (2020) Machine Learning Engineering
The links above to Amazon are affiliate links. By buying through these links, you support the Relataly.com blog and help to cover the hosting costs. Using the links does not affect the price.