From Pirates to Nobleman: Simulating Conversations between Various Characters using OpenAI’s ChatGPT and Python

Are you curious how artificial intelligence and natural language processing can transport you back in time to witness a conversation between a Pirate and a Nobleman from the 18th century? Well, wonder no more! In this article, we will delve into the fascinating world of ChatGPT, a large generative language model developed by OpenAI that (among many other things) enables natural conversations between distinct characters. We will use a Python script to showcase how ChatGPT can generate a conversation between a pirate and a nobleman about the sense of life. Additionally, we’ll provide insights into how you can personalize the characters and make the conversation even more intriguing. So, fasten your seatbelts and prepare to be transported through time to a conversation that will spark your curiosity and make you contemplate the meaning of existence!

Also: ChatGPT Prompt Engineering: A Practical Guide for Businesses

Conversations between Two Instances of ChatGPT – How Does It Work?

The GPT in ChatGPT stands for “Generative Pretrained Transformer”, which has become a synonym for one of the most exciting innovations of the last decade. One interesting aspect of ChatGPT is that it can be used to create a conversation between two instances of itself. This is made possible by using the response from one instance as the prompt for the next.

To achieve this, we can feed the response from the first instance as the input to the second instance, and so on. Each instance will generate a new response based on the previous response, creating a chain of conversations between the instances.

This approach can be useful for various applications, such as creating chatbots that can hold more complex and natural conversations or for generating realistic and diverse dialogue for fictional characters in a story.

However, it is important to note that this approach may also lead to issues such as instances getting stuck in loops or producing nonsensical responses. Therefore, it is important to carefully monitor and control the conversations to ensure that they remain coherent and relevant to the intended purpose.
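The chaining idea can be sketched in a few lines of Python. The `reply_as` function below is a placeholder for a real model call (in a real setup it would send the character's instructions as the system message and the previous reply as the user message); the loop simply passes each reply on as the next prompt while alternating speakers.

```python
# A minimal sketch of the chaining idea: each reply becomes the next prompt.
# reply_as() is a stand-in for a real model call such as
# openai.ChatCompletion.create; here it just echoes what it received.
def reply_as(character, prompt):
    # placeholder: a real implementation would query the model with the
    # character's instructions as the system message and the prompt as input
    return f"{character} replying to: {prompt}"

def chain_conversation(opening, rounds=4):
    transcript = [opening]
    message = opening
    for i in range(rounds):
        speaker = "Pirate" if i % 2 == 0 else "Nobleman"
        message = reply_as(speaker, message)  # previous reply is the new prompt
        transcript.append(message)
    return transcript

for line in chain_conversation("Good day, Sir."):
    print(line)
```

Replacing `reply_as` with an actual API call (plus a short delay between calls) gives exactly the loop we build later in this article.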

The illustration shows the first three steps in a conversation between ChatGPT and itself. In the sample, the prompt includes information that makes ChatGPT alternate between the roles of a pirate and an aristocrat.

Also: Eliminating Friction: How LLMs such as OpenAI’s ChatGPT Streamline Digital Experiences and Reduce the Need for Search

How to Manage a Conversation Context with ChatGPT

When using ChatGPT models, there are three different roles available: system, user, and assistant. Understanding these roles is critical when designing a conversation because it allows us to manage the context and information provided to the model.

  • The system role indicates that a message comes from the system or the model itself. While it is not required to include system messages in a conversation, doing so can help set up the conversation’s context and provide information that can be used to guide the model’s subsequent responses.
  • The user role indicates that a message comes from an end-user or an application. This role represents prompts that trigger a response from the model. When designing a conversation, it is essential to carefully craft these prompts to provide the model with the necessary context and information to provide a useful response.
  • The assistant role indicates that a message is coming from the assistant, which is the model itself. This role helps maintain continuity between the user and the model during a conversation. Using the assistant role, previous conversations can be saved and sent again in subsequent requests to help guide the model’s responses.

The code below shows how to use these roles when requesting the ChatGPT model.

prompt = [{"role": "system", "content": general_instructions}, 
          {"role": "user", "content": task},
          {"role": "assistant", "content": previous_responses}] 

#print('Generating response from OpenAI...') 
completion = openai.ChatCompletion.create(
    model='gpt-3.5-turbo', messages=prompt)
Understanding the different roles available when using ChatGPT models is critical for designing effective conversations. By carefully crafting messages with the appropriate role, we can provide the model with the necessary context and information it needs to provide useful and relevant responses.

Simulating a ChatGPT to ChatGPT Conversation between a Pirate and a Nobleman

Now that you understand what this article aims to do, and you have a broad understanding of how we will achieve this, we can focus on the main task – creating a GPT-to-GPT conversation. In the following, we will use ChatGPT to simulate a discussion between a pirate and a nobleman on the sense of life. The characters have distinct personalities and goals, which will be reflected in their responses and the unique style in which they converse.

The code is available on the GitHub repository.

OpenAI ChatGPT has various use cases. Among others, it can be used to simulate conversations between fictional and non-fictional characters.

Some Words on the OpenAI API Key and Inference Costs

To run the code below, you will need an OpenAI API key, which you can obtain from OpenAI or from Azure OpenAI. Yes, you will have to sign up.

What about inference costs? We will be using the ChatGPT Turbo model for every conversation, which requires calling the model's completion endpoint several times, leading to some costs. The cost of using GPT models varies depending on the model type and the number of processed tokens. In the use case discussed in this article, each code execution involves 20 API calls to the ChatGPT Turbo model (gpt-3.5-turbo). Since this model is highly cost-effective and we will not be generating an excessive amount of text, executing the code will only incur a few cents in costs.
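A quick back-of-the-envelope calculation illustrates why the cost stays in the cents range. The price and the average tokens per call below are assumptions for illustration (the gpt-3.5-turbo price at the time of writing; check OpenAI's pricing page for current rates):

```python
# Rough cost estimate for one run of the script.
PRICE_PER_1K_TOKENS = 0.002  # assumed gpt-3.5-turbo price in USD (check current pricing)
api_calls = 20               # one call per conversation round
tokens_per_call = 300        # assumed average of prompt + completion tokens

total_tokens = api_calls * tokens_per_call
cost = total_tokens / 1000 * PRICE_PER_1K_TOKENS
print(f"~{total_tokens} tokens -> ${cost:.3f}")  # a little over one cent per run
```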

Step #1 Imports and OpenAI Key

Let’s begin by importing a few libraries such as openai, datetime, and azure. We will also set up an API key for OpenAI. You can, for example, store and retrieve the API key from an environment variable or from an Azure Key Vault. Storing the key directly in the code is not advised, as you might accidentally commit your code to a public repository and expose the key.

The code below will also create a folder to store conversations. Each conversation will get stored in HTML format.

import openai
import datetime as dt
from azure.identity import AzureCliCredential
from azure.keyvault.secrets import SecretClient
import time
import os

# timestamp used to name the conversation file
timestamp = dt.datetime.now().strftime("%Y%m%d_%H%M%S")

# set your OpenAI API key here
API_KEY = os.environ.get("OPENAI_API_KEY")
if API_KEY is None:
    print('trying to get API key from azure keyvault')
    keyvault_name = 'your-keyvault-name'
    client = SecretClient(f"https://{keyvault_name}.vault.azure.net", AzureCliCredential())
    API_KEY = client.get_secret('openai-api-key').value
openai.api_key = API_KEY

# create a folder to store the conversations if it does not exist
path = 'ChatGPT_conversations'
if not os.path.exists(path):
    os.makedirs(path)

Step #2 Functions for Prompts and OpenAI Completion

We begin by defining three essential functions used to create and maintain a conversation using the OpenAI ChatGPT completion endpoint.

  • The first function, initialize_conversation, sets up the conversation by creating an initial prompt for the user to start the conversation with a given topic and character description. For instance, the code below sets the initial prompt as “Good day, Sir. Wonderful day isn’t it?” and waits for ChatGPT to respond with an appropriate reply.
  • The second function, respond_prompt, generates a response to the previous response and creates a prompt for the user to continue the conversation. This function is called multiple times as the conversation progresses.
  • Finally, the openai_request function utilizes the OpenAI chat engine to generate a response to the prompt created by the previous two functions. The response generated by the chat engine is then returned to the calling function.

By using these functions, we can establish a fully functional and engaging conversation system that utilizes the power of OpenAI’s ChatGPT model.

# This function creates a prompt that initializes the conversation 
def initialize_conversation(topic='', character=''):
    instructions = f' You have a conversation on {topic}. You can bring up any topic that comes to your mind'
    instructions = character['description'] + instructions
    task = 'Good day, Sir.'
    if topic != '':
        task = task + " Wonderful day, isn't it?"
    return instructions, task

# This function creates a prompt that responds to the previous response
def respond_prompt(response, topic='', character=''):    
    instructions = f'You have a conversation with someone on {topic}. \
    Reply to questions and bring up any topic that comes to your mind. \
    Don\'t say more than 2 sentences at a time.'
    instructions = character['description'] + instructions
    task = f'{response}' 
    return instructions, task

# OpenAI Engine using the turbo model
def openai_request(instructions, task, model_engine='gpt-3.5-turbo'):
    prompt = [{"role": "system", "content": instructions }, 
              {"role": "user", "content": task }]

    #print('Generating response from OpenAI...')
    completion = openai.ChatCompletion.create(
        model=model_engine,
        messages=prompt,
        temperature=1.0)  # a higher temperature leads to more creative responses

    response = completion.choices[0].message.content

    return response

Step #3 Defining Characters

After defining the functions, the next step is to create the conversation by specifying the topic and the characters that will participate in the debate. In the following example, the topic is “the sense of life,” and the two characters are James, an Aristocrat, and Blackbeard, a Pirate. These characters are described with unique personalities and behaviors that will be utilized in the conversation simulation.

With this framework, it is simple to modify the behaviors, voice, and tone of the characters by adjusting their descriptions. This flexibility allows for the creation of intriguing conversations on any topic between different characters. The possibilities are endless; one could simulate a debate between Darth Vader and Harry Potter, or even describe the behavior and background of the characters in more detail.

In short, this framework provides a flexible and adaptable platform for generating engaging conversations, allowing for the exploration of various perspectives and topics.

# initialize conversation on the following topic
topic = 'The sense of life'
conversation_rounds = 20

# description of character 1
color_1 = 'darkblue' 
character_1 = {
"name": 'James (Aristocrat)',
"description": 'You are a French nobleman from the 18th century. \
    Your knowledge and worldview correspond to those of a common aristocrat from that time. \
    You speak in a distinguished manner. \
    You respond in one or two sentences. \
    You are afraid of pirates but also curious to meet one.'}

# description of character 2 
color_2 = 'brown'
character_2 = {
"name": 'Blackbeard (Pirate)',
"description": 'You are a devious pirate from the 18th century who tends to swear. \
    Your knowledge and worldview correspond to those of a common pirate from that time. \
    You respond in one or two sentences. \
    You are looking for a valuable treasure and trying to find where it is hidden. \
    You try to steer the conversation back to the treasure no matter what.'}

Step #4 Start the Conversation

Now that we have our characters defined, it’s time to start the conversation. The next section of the code invokes the conversation between the two characters by initializing it with a greeting from one of the characters. In the code below, James starts the conversation with a prompt: “Good day, Sir. Wonderful day, isn’t it?”.

The conversation then proceeds with each character taking turns responding to the previous prompt. The response from each character is generated by calling the openai_request() function, which sends a prompt to OpenAI’s GPT-3.5 model and returns a response generated by the model.

The conversation continues for a specified number of rounds, which can be set by changing the value of the conversation_rounds variable. In each round, one of the two characters responds, alternating between them.

conversation = ''
for i in range(conversation_rounds):
    # initialize conversation
    if i == 0:
        print('Initializing conversation...')
        text_color = color_1
        name = character_1['name']
        instructions, task = initialize_conversation(topic, character_1)
        response = openai_request(instructions, task)
        print(f'{name}: {task}')
        conversation = f'<p style="color: {text_color};"><b>{name}</b>: {task}</p> \n'
    else:
        # alternate between character_1 and character_2
        if i % 2 == 0:
            text_color = color_1
            name = character_1['name']
            instructions, task = respond_prompt(response, topic, character_1)
        else:
            text_color = color_2
            name = character_2['name']
            instructions, task = respond_prompt(response, topic, character_2)

        # OpenAI request
        response = openai_request(instructions, task)

        # wait some seconds to stay below the API rate limits
        time.sleep(3)

        # add response to conversation after linebreak
        print(f'{name}: {response}')
        conversation += ' ' + f'<p style="color: {text_color};"><b>{name}</b>: {response}</p> \n'

    # store conversation with timestamp
    filename = f'{path}/GPTconversation_{timestamp}.html'
    with open(filename, 'w') as f:
        f.write(conversation)

As you can see, the conversation is quite entertaining and even touches upon some aspects of an interesting philosophical question. But foremost it’s an amusing example of an exotic use case for ChatGPT.

By modifying the character descriptions and prompts, you can create interesting conversations on any topic between various characters. For example, you can create a conversation between Albert Einstein and Marie Curie on the topic of physics, or between Abraham Lincoln and Martin Luther King Jr. on the topic of civil rights. The possibilities are endless!
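As an illustration of how little needs to change, here is a hypothetical pair of character definitions for the physics scenario. The names and descriptions are assumptions of my own, not part of the original script; plug them into character_1 and character_2 and the same conversation loop will work unchanged:

```python
# Hypothetical character descriptions for a physics debate.
# Swap these in for the pirate and the nobleman to reuse the conversation loop.
topic = 'physics'

character_1 = {
    "name": 'Albert Einstein',
    "description": 'You are Albert Einstein. You explain ideas with vivid '
                   'thought experiments and respond in one or two sentences.'}

character_2 = {
    "name": 'Marie Curie',
    "description": 'You are Marie Curie. You emphasize careful experimentation '
                   'and measurement and respond in one or two sentences.'}
```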


Arrr sailor, you have reached the end of this post! So let’s do a quick recap. This article has presented a Python script that simulates a conversation between a pirate and a nobleman. The script used the ChatGPT language model to generate responses for the characters based on a given topic. We have provided instructions for running the script and customizing the personalities and behaviors of the characters. In addition, we have discussed how the script works by generating a response from OpenAI that is then used as a prompt for another ChatGPT instance. In this way, ChatGPT can be used to simulate a conversation on any topic.

Overall, I hope this article was able to demonstrate the potential of using GenerativeAI and natural language processing to create engaging and entertaining dialogues between virtual characters.

If you have any questions or want to share your experiences with what you could achieve with the script, please let me and everyone else know in the comments.

Sources and Further Reading
Azure OpenAI Service
ChatGPT helped to revise this article, but the thoughts are human-made.
Images generated with


  • Florian Follonier

    Hi, I am Florian, a Zurich-based Cloud Solution Architect for AI and Data. Since the completion of my Ph.D. in 2017, I have been working on the design and implementation of ML use cases in the Swiss financial sector. I started this blog in 2020 with the goal in mind to share my experiences and create a place where you can find key concepts of machine learning and materials that will allow you to kick-start your own Python projects.
