Stock-Market Prediction using Neural Networks for Multi-Output Regression in Python

This article showcases multi-output neural networks for time-series regression and demonstrates their use in the context of stock market forecasting. We will train and test a Keras neural network with ten output neurons on historical price quotes for Apple stock. With ten neurons in the output layer, the network can forecast ten days ahead in a single prediction.

The number of output nodes of a neural network should correspond to the number of values we want to predict. It is sufficient to work with a network architecture with a single output node for simple binary classification or single-step regression cases. Although using the single-output architecture for multi-label classification or multi-step regression is possible, it is often impractical as it requires extra work to prepare models and data. A more elegant approach is to train a model that produces multiple outputs right away. In the following, we take a closer look at how this works.

An exemplary architecture of a neural network with five output neurons
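
To make the difference concrete before we dive in, below is a minimal sketch that contrasts the two architectures. The layer sizes and activations are illustrative placeholders, not the LSTM model we build later in this tutorial.

# Minimal sketch: the structural difference between single-output and
# multi-output regression is only the size of the last Dense layer
from keras.models import Sequential
from keras.layers import Dense

# Single-output regression: one neuron predicts the next value
model_single = Sequential()
model_single.add(Dense(32, activation='relu', input_shape=(50,)))
model_single.add(Dense(1))

# Multi-output regression: ten neurons predict the next ten values at once
model_multi = Sequential()
model_multi.add(Dense(32, activation='relu', input_shape=(50,)))
model_multi.add(Dense(10))
model_multi.compile(optimizer='adam', loss='mse')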

The rest of this article is organized as follows: First, we look at various ways to generate time series forecasts with neural networks. Then, we will develop a neural network for multi-output regression ourselves. For this, we perform all the classic steps in machine learning, including data preparation, data splitting, and training and testing the model. Our model will be a Keras neural network trained on historical daily prices for Apple stock. The architecture includes multiple LSTM layers with ten output neurons. Corresponding to the architecture, we finally use this model to generate a ten-day forecast.

Multi-Output Regression vs Single-Output Regression

Time series regression is about using past values to make statements about the further development of the time series. The input data are provided to the model as batches, containing the time series data for a specific past time period. The number of values we can predict with a neural network per input batch is determined by the number of neurons in the output layer.

The standard case of time series prediction uses a model with a single neuron in the output layer. During model training, the single-output model takes a series of past input values, followed by the single subsequent value for validation. Consequently, for each input series supplied, this model predicts only the next value of the time series. To predict multiple steps with such a model, one must use a rolling approach: each forecast is fed back into the input window to predict the step after it.
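
To make the rolling approach concrete, here is a minimal sketch. It assumes a trained single-output model and a univariate, already-scaled series; rolling_forecast is a hypothetical helper, not part of the tutorial code below.

import numpy as np

def rolling_forecast(model, last_window, n_steps):
    # Predict n_steps ahead by feeding each prediction back into the input window
    window = list(last_window)
    window_size = len(last_window)
    predictions = []
    for _ in range(n_steps):
        x = np.array(window[-window_size:]).reshape(1, window_size, 1)
        next_value = model.predict(x)[0, 0]  # the single output neuron
        predictions.append(next_value)
        window.append(next_value)  # the forecast becomes part of the next input
    return predictions

A drawback of this rolling approach is that prediction errors compound, because each forecast feeds into subsequent inputs. Multi-output models avoid this feedback loop.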

With multiple neurons in the output layer, it is also possible to predict several steps at once per batch. In multi-output regression, we need to provide the model with a sequence of subsequent target values in addition to the input time series data. The graphic below illustrates the input and output data of a neural network with four outputs.

The inputs and outputs of a neural network for time series regression with five input neurons and four outputs
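
As a toy numerical example of what such training pairs look like (mirroring the graphic above, with five inputs and four outputs), consider the following sketch:

import numpy as np

series = np.arange(1, 13)  # a toy time series: 1, 2, ..., 12
n_inputs, n_outputs = 5, 4

# Slide a window over the series: each sample pairs five past values (x)
# with the four subsequent values (y) that the network learns to predict
x, y = [], []
for i in range(n_inputs, len(series) - n_outputs + 1):
    x.append(series[i - n_inputs:i])
    y.append(series[i:i + n_outputs])

print(np.array(x)[0], np.array(y)[0])  # [1 2 3 4 5] [6 7 8 9]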

Implementing a Neural Network Model for Multi-Output Regression in Python

In the following, we will train a neural network that forecasts the Apple stock price. We will load historical price data via the Yahoo Finance API and then conduct the necessary steps to prepare the data and train the neural network. As usual, you can find the code for this example in the relataly GitHub repository.

Prerequisites

Before beginning with the coding part, ensure that you have set up your Python 3 environment and required packages. If you don’t have an environment set up yet, consider the Anaconda Python environment. To set it up, you can follow the steps in this tutorial.

Also, make sure you install all required packages. In this tutorial, we will be working with the following standard packages: pandas, NumPy, Matplotlib, and the built-in math and datetime modules.

In addition, we will be using the machine learning libraries Keras, Scikit-learn, and TensorFlow. For visualization, we will be using the Seaborn package.

Please also make sure you have either the pandas_datareader or the yfinance package installed. You will use one of these packages to retrieve the historical stock quotes.

You can install these packages using console commands:

  • pip install <package name>
  • conda install <package name> (if you are using the Anaconda package manager)
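
Because the Yahoo Finance endpoints occasionally fail, you may want to hedge between the two packages. The following is a sketch of such a fallback; load_quotes is a hypothetical helper and not part of the tutorial code.

def load_quotes(symbol, start, end):
    # Hypothetical helper: try yfinance first, fall back to pandas_datareader
    try:
        import yfinance as yf
        return yf.download(symbol, start=start, end=end)
    except Exception:
        import pandas_datareader as webreader
        return webreader.DataReader(symbol, start=start, end=end, data_source="yahoo")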

Step #1: Load the Data

So let’s get started. Our goal is to train a neural network based on historical price quotes of Apple stock. As a first step, we load the historical price quotes into our Python project via an API. For this, you can either use the yfinance package or pandas_datareader. I am purposely mentioning two different APIs here because, from time to time, one of them stops working. So, if you experience this, try the other API.

Regardless of which package you use, the data should comprise the following fields:

  • Close
  • Open
  • High
  • Low
  • Adj Close
  • Volume

The target variable that we are trying to predict is the closing price (Close).

#import pandas_datareader as webreader # Remote data access for pandas
import math # Mathematical functions 
import numpy as np # Fundamental package for scientific computing with Python
import pandas as pd # Additional functions for analysing and manipulating data
from datetime import date, timedelta, datetime # Date Functions
from pandas.plotting import register_matplotlib_converters # This function adds plotting functions for calendar dates
import matplotlib.pyplot as plt # Important package for visualization - we use this to plot the market data
import matplotlib.dates as mdates # Formatting dates
from sklearn.metrics import mean_absolute_error, mean_squared_error # Packages for measuring model performance / errors
from keras.models import Sequential # Deep learning library, used for neural networks
from keras.layers import LSTM, Dense, Dropout # Deep learning classes for recurrent and regular densely-connected layers
from keras.callbacks import EarlyStopping # EarlyStopping during model training
from sklearn.preprocessing import RobustScaler, MinMaxScaler # This Scaler removes the median and scales the data according to the quantile range to normalize the price data 
import seaborn as sns

#from pandas_datareader.nasdaq_trader import get_nasdaq_symbols
#symbols = get_nasdaq_symbols()

# Setting the timeframe for the data extraction
today = date.today()
date_today = today.strftime("%Y-%m-%d")
date_start = '2010-01-01'

# Getting NASDAQ quotes
stockname = 'Apple'
symbol = 'AAPL'
# df = webreader.DataReader(
#     symbol, start=date_start, end=date_today, data_source="yahoo"
# )

import yfinance as yf #Alternative package if webreader does not work: pip install yfinance
df = yf.download(symbol, start=date_start, end=date_today)

# # Create a quick overview of the dataset
df

Step #2: Explore the Data

Once we have loaded the data, we plot a quick overview of the time-series data using separate line graphs.

# Plot line charts
df_plot = df.copy()

list_length = df_plot.shape[1]
ncols = 2
nrows = int(round(list_length / ncols, 0))

fig, ax = plt.subplots(nrows=nrows, ncols=ncols, sharex=True, figsize=(14, 7))
fig.subplots_adjust(hspace=0.5, wspace=0.5)
for i in range(0, list_length):
        ax = plt.subplot(nrows,ncols,i+1)
        sns.lineplot(data = df_plot.iloc[:, i], ax=ax)
        ax.set_title(df_plot.columns[i])
        ax.tick_params(axis="x", rotation=30, labelsize=10, length=0)
        ax.xaxis.set_major_locator(mdates.AutoDateLocator())
fig.tight_layout()
plt.show()
The Apple stock's historical price data, including quotes, highs, lows, and volume

Step #3: Preprocess the Data

Next, we prepare the data for the training process. Preparing the data for a multivariate multi-output regression model involves several steps: scaling the data, splitting it into training and test sets, and slicing the time series into several shifted training batches.

Before we start transforming the data, we first create a copy and reset the index.

# Indexing Batches
df_train = df.sort_values(by=['Date']).copy()

# We save a copy of the date index before we reset it to numbers
date_index = df_train.index

# We reset the index, so we can convert the date-index to a number-index
df_train = df_train.reset_index(drop=True).copy()
df_train.head(5)

Next, we select a subset of features. We will be using all fields except the Adj Close field.

In addition, we scale the data. To ease the process of unscaling the data after training, we create two different scalers: one for the training data, which covers five columns, and one for the output data, which scales a single column (the Close price).

def prepare_data(df):

    # List of considered Features
    FEATURES = ['Open', 'High', 'Low', 'Close', 'Volume']

    print('FEATURE LIST')
    print([f for f in FEATURES])

    # Create the dataset with features and filter the data to the list of FEATURES
    df_filter = df[FEATURES]
    
    # Convert the data to numpy values
    np_filter_unscaled = np.array(df_filter)
    #np_filter_unscaled = np.reshape(np_unscaled, (df_filter.shape[0], -1))
    print(np_filter_unscaled.shape)

    np_c_unscaled = np.array(df['Close']).reshape(-1, 1)
    
    return np_filter_unscaled, np_c_unscaled
    
np_filter_unscaled, np_c_unscaled = prepare_data(df_train)
                                          
# Create a scaler for the training data that covers all feature columns
# Scale each feature to a range between 0 and 1
scaler_train = MinMaxScaler()
np_scaled = scaler_train.fit_transform(np_filter_unscaled)
    
# Create a separate scaler for the single Close column, used later to unscale the predictions
scaler_pred = MinMaxScaler()
np_scaled_c = scaler_pred.fit_transform(np_c_unscaled)   

The final step of the data preparation is to create the structure for the input data. For this, we code an algorithm that cycles through the data and produces multiple input time series, each shifted by a single time step. Each sample comprises an input period of 50 steps from the time series and an output sequence of ten consecutive values. Because we are working with multivariate input data, the input time series consists of five input columns/features.

Finally, we validate that these batches have the right shape by selecting three of them and creating line graphs of a single input feature and the consecutive output values.

# Set the input sequence length - this is the timeframe used to make a single prediction
input_sequence_length = 50
# The output sequence length is the number of steps that the neural network predicts
output_sequence_length = 10

# Prediction Index
index_Close = df_train.columns.get_loc("Close")

# Split the data into train and test data sets
# As a first step, we get the number of rows to train the model on 80% of the data 
train_data_length = math.ceil(np_scaled.shape[0] * 0.8)

# Create the training and test data
train_data = np_scaled[0:train_data_length, :]
test_data = np_scaled[train_data_length - input_sequence_length:, :]

# The RNN needs data with the format of [samples, time steps, features]
# Here, we create N samples, input_sequence_length time steps per sample, and f features
def partition_dataset(input_sequence_length, output_sequence_length, data):
    x, y = [], []
    data_len = data.shape[0]
    for i in range(input_sequence_length, data_len - output_sequence_length):
        x.append(data[i-input_sequence_length:i,:]) # contains input_sequence_length values for all feature columns
        y.append(data[i:i + output_sequence_length, index_Close]) # contains the next output_sequence_length values of the Close column for validation
    
    # Convert the x and y to numpy arrays
    x = np.array(x)
    y = np.array(y)
    return x, y

# Generate training data and test data
x_train, y_train = partition_dataset(input_sequence_length, output_sequence_length, train_data)
x_test, y_test = partition_dataset(input_sequence_length, output_sequence_length, test_data)


# Print the shapes: the result is (samples, input_sequence_length, features) for x and (samples, output_sequence_length) for y
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)

# Validate that the prediction value and the input match up
# The last close price of the second input sample should equal the first prediction value
# print(x_train[1][input_sequence_length-1][index_Close])
# print(y_train[0])

nrows = 3 # number of shifted plots
fig, ax = plt.subplots(nrows=nrows, ncols=1, figsize=(14, 7))
for i in range(nrows):
    sns.lineplot(y = pd.DataFrame(x_train[i])[index_Close], x = range(input_sequence_length), ax = ax[i])
    sns.lineplot(y = y_train[i], x = range(input_sequence_length -1, input_sequence_length + output_sequence_length - 1), ax = ax[i])
plt.show()
Input time series and subsequent data points extracted from three exemplary training batches

Step #4: Train the Multi-Output Neural Network Model

Now that we have the training data prepared and ready, the next step is to configure the architecture of the multi-output neural network. Because we will be using multiple input series, our model is, in fact, a multivariate model. We configure the architecture so that it corresponds to the input training batches.

We choose a comparably simple architecture with only two LSTM layers and two additional dense layers. The first dense layer has 20 neurons, and the second dense layer is the output layer, which has ten output neurons. If you wonder how I arrived at the number of neurons in the third layer: I conducted several experiments and found that this number leads to solid results.

To ensure that the architecture matches the structure of our input data, we reuse the variables from the previous code section (n_input_neurons, n_output_neurons). As a reminder, the input sequence length is 50, and the output sequence length (the number of steps we want to predict) is ten.

# Configure the neural network model
model = Sequential()
n_output_neurons = output_sequence_length

# Derive the number of input neurons from the training batches: timesteps * features per timestep
n_input_neurons = x_train.shape[1] * x_train.shape[2]
print(n_input_neurons, x_train.shape[1], x_train.shape[2])
model.add(LSTM(n_input_neurons, return_sequences=True, input_shape=(x_train.shape[1], x_train.shape[2]))) 
model.add(LSTM(n_input_neurons, return_sequences=False))
model.add(Dense(20))
model.add(Dense(n_output_neurons))

# Compile the model
model.compile(optimizer='adam', loss='mse')

After configuring the model architecture, we can initiate the training process and illustrate how the loss develops over the training epochs.

# Training the model
epochs = 10
batch_size = 16
early_stop = EarlyStopping(monitor='loss', patience=5, verbose=1)
# To activate early stopping, pass callbacks=[early_stop] to model.fit
history = model.fit(x_train, y_train, 
                    batch_size=batch_size, 
                    epochs=epochs,
                    validation_data=(x_test, y_test))

We trained the multi-output model over ten epochs.

# Plot training & validation loss values
fig, ax = plt.subplots(figsize=(10, 5), sharex=True)
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("Model loss")
plt.ylabel("Loss")
plt.xlabel("Epoch")
ax.xaxis.set_major_locator(plt.MaxNLocator(epochs))
plt.legend(["Train", "Test"], loc="upper left")
plt.grid()
plt.show()
Loss curve after training the multi-output neural network

Step #5: Evaluate Model Performance

Now that we have trained the model, we can make forecasts on the test data and use traditional regression metrics such as MAE, MAPE, and MDAPE to measure the performance of our model.

# Get the predicted values
y_pred_scaled = model.predict(x_test)

# Unscale the predicted values using the single-column scaler
y_pred = scaler_pred.inverse_transform(y_pred_scaled)

# The scaler was fit on one column; its parameters broadcast across the ten output columns of y_test
y_test_unscaled = scaler_pred.inverse_transform(y_test).reshape(-1, output_sequence_length)
y_test_unscaled.shape

# Mean Absolute Error (MAE)
MAE = mean_absolute_error(y_test_unscaled, y_pred)
print(f'Mean Absolute Error (MAE): {np.round(MAE, 2)}')

# Mean Absolute Percentage Error (MAPE)
MAPE = np.mean((np.abs(np.subtract(y_test_unscaled, y_pred)/ y_test_unscaled))) * 100
print(f'Mean Absolute Percentage Error (MAPE): {np.round(MAPE, 2)} %')

# Median Absolute Percentage Error (MDAPE)
MDAPE = np.median((np.abs(np.subtract(y_test_unscaled, y_pred)/ y_test_unscaled)) ) * 100
print(f'Median Absolute Percentage Error (MDAPE): {np.round(MDAPE, 2)} %')

The model performance is not outstanding, but considering that we are using a simple architecture and a barely optimized model, it is acceptable.

As always when dealing with time series data, it is a good idea to illustrate the results.

def plot_multi_test_forecast(i, s, x_test, y_test_unscaled, y_pred_unscaled): 
    
    # reshape the Close column of the test sample into a single column, so that it fits the pred scaler
    x_test_scaled_reshaped = np.array(pd.DataFrame(x_test[i])[index_Close]).reshape(-1, 1)
    
    # undo the scaling on the testset
    df_test = pd.DataFrame(scaler_pred.inverse_transform(x_test_scaled_reshaped) )

    # set the max index 
    test_max_index = df_test.shape[0]
    pred_max_index = y_pred_unscaled[0].shape[0]
    test_index = range(i, i + test_max_index)
    pred_index = range(i + test_max_index, i + test_max_index + pred_max_index)
    
    # package y_pred_unscaled and y_test_unscaled into a dataframe with columns pred and true
    data = pd.DataFrame(list(zip(y_pred_unscaled[s], y_test_unscaled[i])), columns=['pred', 'true'])
    
    fig, ax = plt.subplots(figsize=(8, 4))
    plt.title(f"Predictions vs Ground Truth, test sample {i}", fontsize=12)
    ax.set(ylabel = stockname + "_stock_price_quotes")
    
    sns.lineplot(data = df_test,  y = df_test[0], x=test_index, color="#039dfc", linewidth=1.0, label='test')
    sns.lineplot(data = data,  y='true', x=pred_index, color="g", linewidth=1.0, label='true')
    sns.lineplot(data = data,  y='pred', x=pred_index, color="r", linewidth=1.0, label='pred')

for i in range(5, 7): # i selects the test sample to plot
    plot_multi_test_forecast(i, i, x_test, y_test_unscaled, y_pred)
    

Step #6: Create a New Forecast

Finally, let’s create a forecast on new data. For this, we take the scaled dataset from Step #3 (np_scaled) and extract a series with the latest 50 values. Then we use these values to generate a new prediction for the next ten days. We visualize the multi-step forecast in another line chart.

def plot_new_multi_forecast(i_test, i_pred, x_test, y_pred_unscaled): 
    
    # reshape the Close column of the input sample into a single column, so that it fits the pred scaler
    x_test_scaled_reshaped = np.array(pd.DataFrame(x_test[i_test])[index_Close]).reshape(-1, 1)
    
    # undo the scaling on the testset
    df_test = pd.DataFrame(scaler_pred.inverse_transform(x_test_scaled_reshaped) )

    # set the max index 
    test_max_index = df_test.shape[0]
    pred_max_index = y_pred_unscaled[0].shape[0]
    test_index = range(i_test, i_test + test_max_index)
    pred_index = range(i_test + test_max_index, i_test + test_max_index + pred_max_index)
    
    data = pd.DataFrame(list(zip(y_pred_unscaled[i_pred])), columns=['pred'])
    
    fig, ax = plt.subplots(figsize=(8, 4))
    plt.title(f"Multi-step forecast for the next {pred_max_index} days", fontsize=12)
    sns.lineplot(data = df_test,  y = df_test[0], x=test_index, color="#039dfc", linewidth=1.0, label='test')
    sns.lineplot(data = data,  y='pred', x=pred_index, color="r", linewidth=1.0, label='pred')
    
# Extract the latest 50 scaled values as input for the new forecast
x_test_new = np_scaled[-input_sequence_length:, :].reshape(1, input_sequence_length, np_scaled.shape[1])

# Generate the forecast and undo the scaling of the predictions
y_pred_scaled = model.predict(x_test_new)
y_pred = scaler_pred.inverse_transform(y_pred_scaled)

# plot the predictions
plot_new_multi_forecast(0, 0, x_test_new, y_pred)

Summary

This article has shown how we can use neural networks with multiple outputs to make predictions over multiple time steps. You learned how to prepare the data for training and testing the model. In addition, you now know how to match the model architecture to the structure of the input data. The goal of this article was not to create a perfect model, and there is plenty of room to optimize it further. Feel free to play around with the hyperparameters and the model architecture. You can also increase the prediction horizon by adding more neurons to the output layer. However, be aware that the more steps you forecast, the higher the prediction error will become.
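
As an example, extending the forecast horizon from ten to twenty days only requires a larger output sequence and a matching output layer. Here is a sketch that reuses the variables and functions defined in this tutorial:

# Sketch: extend the horizon to 20 days (reuses variables from this tutorial)
output_sequence_length = 20
x_train, y_train = partition_dataset(input_sequence_length, output_sequence_length, train_data)
x_test, y_test = partition_dataset(input_sequence_length, output_sequence_length, test_data)
# ...then rebuild and retrain the model with an output layer that matches the horizon:
# model.add(Dense(output_sequence_length))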

I hope this article was helpful in understanding multi-output neural networks better. If you have any questions or comments, please let me know in the comments.
