With our development environment set up and our first API call to OpenAI under our belt, it's time to bring our chatbot to life. In this article, we'll build a basic command-line interface that allows for interactive conversations with our chatbot. We'll focus on handling user input, integrating it with the OpenAI API, and displaying the responses in a user-friendly manner. By the end of this tutorial, you'll have a functional chatbot that you can converse with right from your terminal.
Why a Command-Line Interface?
While graphical user interfaces (GUIs) are visually appealing, a command-line interface (CLI) is quicker to develop and sufficient for testing purposes. It allows us to focus on the core functionality of our chatbot without the overhead of GUI development.
Designing the Chatbot Interface
Our goal is to create a loop where the user can input messages, and the chatbot responds accordingly. We'll need to handle:
- Continuous input from the user.
- Sending the input to the OpenAI API.
- Displaying the chatbot's response.
- An exit condition to end the conversation.
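Before we wire in the API, here is a bare-bones sketch of that loop, just to show how the four pieces fit together. The get_bot_reply stub and run_cli name are placeholders for illustration only; we'll replace them with a real API-backed function shortly.

def get_bot_reply(text):
    # Placeholder: we'll swap this out for a real OpenAI API call later in this article.
    return "(placeholder reply)"

def run_cli():
    while True:
        user_input = input("You: ")        # continuous input from the user
        if user_input.lower() == 'exit':   # exit condition
            print("Chatbot: Goodbye!")
            break
        reply = get_bot_reply(user_input)  # eventually: send the input to the OpenAI API
        print(f"Chatbot: {reply}")         # display the chatbot's response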
Setting Up the Project Structure
Let's create a new Python script called chatbot_cli.py in our project directory.
touch chatbot_cli.py
Ensure your virtual environment is activated:
# On Windows
venv\Scripts\activate
# On macOS/Linux
source venv/bin/activate
Writing the Chatbot Script
Importing Necessary Libraries
We'll start by importing the required modules.
import os
import openai
from dotenv import load_dotenv
Loading Environment Variables
Load your API key from the .env file.
load_dotenv()
openai.api_key = os.getenv('OPENAI_API_KEY')
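Optionally, you can fail fast if the key wasn't loaded. Here's a minimal sanity check; the error message wording is just an example.

if openai.api_key is None:
    # os.getenv returns None when the variable is missing, so catch that early.
    raise RuntimeError("OPENAI_API_KEY not found. Check that your .env file exists and contains the key.")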
Defining the Response Generation Function
We'll use a function similar to the one we wrote previously, but extend it to keep track of the conversation so it works in an interactive loop.
def generate_response(prompt, context=None):
    if context is None:
        context = []
    context.append(f"User: {prompt}")
    prompt_formatted = "\n".join(context) + "\nChatbot:"
    response = openai.Completion.create(
        engine='text-davinci-003',
        prompt=prompt_formatted,
        max_tokens=150,
        n=1,
        stop=["User:", "Chatbot:"],
        temperature=0.7,
    )
    answer = response.choices[0].text.strip()
    context.append(f"Chatbot: {answer}")
    return answer, context
Explanation:
- Context Management: We're maintaining a conversation context by keeping track of previous exchanges.
- Prompt Formatting: We format the prompt to include previous interactions.
- Stop Sequences: We define stop tokens to indicate when the API should stop generating text.
Building the Chat Loop
Now, we'll create a loop that allows continuous conversation until the user decides to exit.
def chat():
    print("Welcome to the Chatbot! Type 'exit' to end the conversation.")
    context = []
    while True:
        user_input = input("You: ")
        if user_input.lower() in ['exit', 'quit']:
            print("Chatbot: Goodbye!")
            break
        response, context = generate_response(user_input, context)
        print(f"Chatbot: {response}")
The Main Function
Let's tie everything together.
if __name__ == '__main__':
    chat()
Full Script: chatbot_cli.py
import os
import openai
from dotenv import load_dotenv

load_dotenv()
openai.api_key = os.getenv('OPENAI_API_KEY')

def generate_response(prompt, context=None):
    if context is None:
        context = []
    context.append(f"User: {prompt}")
    prompt_formatted = "\n".join(context) + "\nChatbot:"
    response = openai.Completion.create(
        engine='text-davinci-003',
        prompt=prompt_formatted,
        max_tokens=150,
        n=1,
        stop=["User:", "Chatbot:"],
        temperature=0.7,
    )
    answer = response.choices[0].text.strip()
    context.append(f"Chatbot: {answer}")
    return answer, context

def chat():
    print("Welcome to the Chatbot! Type 'exit' to end the conversation.")
    context = []
    while True:
        user_input = input("You: ")
        if user_input.lower() in ['exit', 'quit']:
            print("Chatbot: Goodbye!")
            break
        response, context = generate_response(user_input, context)
        print(f"Chatbot: {response}")

if __name__ == '__main__':
    chat()
Running the Chatbot
Activate your virtual environment and run the script:
python chatbot_cli.py
Sample Interaction:
Welcome to the Chatbot! Type 'exit' to end the conversation.
You: Hello!
Chatbot: Hello there! How can I assist you today?
You: What's the weather like today?
Chatbot: I'm not able to check the weather, but I hope it's nice where you are!
You: Tell me a joke.
Chatbot: Why did the programmer quit his job? Because he didn't get arrays.
You: exit
Chatbot: Goodbye!
Enhancing User Input Handling
Handling Empty Input
We can modify the loop to handle cases where the user presses Enter without typing anything.
if not user_input.strip():
    print("Chatbot: Please say something so I can assist you.")
    continue
Implementing Basic Commands
Let's add a help command to list available commands.
if user_input.lower() == 'help':
    print("Available commands:\n - exit: Quit the chatbot\n - help: Show this help message")
    continue
Updated Chat Loop
def chat():
    print("Welcome to the Chatbot! Type 'help' for a list of commands.")
    context = []
    while True:
        user_input = input("You: ")
        if user_input.lower() in ['exit', 'quit']:
            print("Chatbot: Goodbye!")
            break
        if user_input.lower() == 'help':
            print("Available commands:\n - exit: Quit the chatbot\n - help: Show this help message")
            continue
        if not user_input.strip():
            print("Chatbot: Please say something so I can assist you.")
            continue
        response, context = generate_response(user_input, context)
        print(f"Chatbot: {response}")
Testing the Enhanced Chatbot
Run the script again and test the new features.
Sample Interaction:
Welcome to the Chatbot! Type 'help' for a list of commands.
You:
Chatbot: Please say something so I can assist you.
You: help
Available commands:
- exit: Quit the chatbot
- help: Show this help message
You: What can you do?
Chatbot: I can chat with you on a variety of topics, answer questions, and provide information. How may I assist you today?
Integrating with the OpenAI API
Our chatbot now sends each user input to the OpenAI API, including the conversation context. This approach helps the AI generate more relevant and coherent responses.
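One thing to keep in mind is that the context list grows with every exchange, while the model only accepts a limited number of tokens per request. A simple option, sketched below, is to keep just the most recent exchanges before building the prompt. The trim_context helper and the keep_last value are illustrative, not part of the script above; we'll look at more robust context management in the next article.

def trim_context(context, keep_last=10):
    # Keep only the last few "User:"/"Chatbot:" lines so the prompt stays small.
    # keep_last is an arbitrary choice for illustration; tune it for your needs.
    return context[-keep_last:]

# Example use inside generate_response, before building the prompt:
# prompt_formatted = "\n".join(trim_context(context)) + "\nChatbot:"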
Understanding the Prompt Structure
By structuring the prompt as a conversation between "User" and "Chatbot," we're providing context that helps the AI understand the flow.
Example Prompt Sent to OpenAI:
User: Hello!
Chatbot: Hello there! How can I assist you today?
User: What's the capital of France?
Chatbot:
Adjusting API Parameters
You can tweak parameters to improve the chatbot's performance.
- Max Tokens: Increase if you want longer responses (e.g., max_tokens=200).
- Temperature: Adjust to control creativity.
  - Lower values (e.g., 0.5): More deterministic responses.
  - Higher values (e.g., 0.9): More creative and varied responses.
- Top P: Another parameter to control diversity (e.g., top_p=0.9).
Example Adjustment
response = openai.Completion.create(
    engine='text-davinci-003',
    prompt=prompt_formatted,
    max_tokens=200,
    n=1,
    stop=["User:", "Chatbot:"],
    temperature=0.8,
    top_p=0.9,
)
Handling API Limitations
Rate Limiting
If you make too many requests in a short period, you may encounter rate limits.
- Solution: Implement a short delay between requests using time.sleep() if necessary.
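If you run into rate limits regularly, another option is to retry the request after a short, increasing delay. Here's a rough sketch; the helper name, retry count, and delays are arbitrary choices, and it assumes the same legacy openai library used in the rest of this article.

import time
import openai

def completion_with_retry(max_retries=3, **kwargs):
    # Retry the completion call with exponential backoff when the API reports a rate limit.
    for attempt in range(max_retries):
        try:
            return openai.Completion.create(**kwargs)
        except openai.error.RateLimitError:
            wait = 2 ** attempt  # 1s, 2s, 4s, ...
            print(f"Rate limited; retrying in {wait} seconds...")
            time.sleep(wait)
    raise RuntimeError("Still rate limited after several retries.")

# Example:
# response = completion_with_retry(
#     engine='text-davinci-003', prompt=prompt_formatted, max_tokens=150
# )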
Error Handling
Enhance the generate_response function to catch exceptions.
import time

def generate_response(prompt, context=None):
    if context is None:
        context = []
    context.append(f"User: {prompt}")
    prompt_formatted = "\n".join(context) + "\nChatbot:"
    try:
        response = openai.Completion.create(
            engine='text-davinci-003',
            prompt=prompt_formatted,
            max_tokens=150,
            n=1,
            stop=["User:", "Chatbot:"],
            temperature=0.7,
        )
        answer = response.choices[0].text.strip()
        context.append(f"Chatbot: {answer}")
        time.sleep(0.5)  # To prevent hitting rate limits
        return answer, context
    except openai.error.OpenAIError as e:
        print(f"An error occurred: {e}")
        return "I'm sorry, but I'm having trouble processing your request.", context
Testing the Chatbot's Conversational Abilities
Engaging in Extended Conversations
Try having a longer conversation to see how well the chatbot maintains context.
Sample Interaction:
You: Hi there!
Chatbot: Hello! How can I help you today?
You: I'm feeling a bit stressed about work.
Chatbot: I'm sorry to hear that. Would you like to talk about what's causing the stress?
You: It's just a lot of deadlines.
Chatbot: Managing multiple deadlines can be overwhelming. Have you tried prioritizing tasks or taking short breaks to clear your mind?
Observations
- The chatbot remembers previous inputs.
- It provides relevant and empathetic responses.
Recommended Tools and Accessories
To further enhance your development experience:
Terminal Multiplexers
Using a terminal multiplexer like tmux or screen allows you to manage multiple terminal sessions.
- tmux: Install it via:
  # On Ubuntu/Debian
  sudo apt-get install tmux
  # On macOS (using Homebrew)
  brew install tmux
- Benefits: Split your terminal window, run multiple sessions, and keep processes running after disconnecting.
Python Debugger (pdb)
For debugging your scripts, Python's built-in debugger can be invaluable.
- Usage: Insert the following line where you want execution to pause:
  import pdb; pdb.set_trace()
- Alternative: Use ipdb for an enhanced debugging experience:
  pip install ipdb
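For example, a breakpoint placed just before the API call in generate_response lets you inspect the exact prompt the chatbot is about to send; this is only an illustration of where a breakpoint might go.

# Inside generate_response, just before openai.Completion.create(...):
import pdb; pdb.set_trace()  # inspect prompt_formatted and context, then type 'c' to continue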
Recommended Reading
"Automate the Boring Stuff with Python" by Al Sweigart is a great resource for learning practical Python programming.
Conclusion
You've now built a basic command-line chatbot that interacts with users in real-time. This chatbot maintains context, handles user input gracefully, and leverages the power of the OpenAI API to generate human-like responses. This foundation sets the stage for more advanced features, such as enhancing contextual awareness and deploying the chatbot as a web application.
In the next article, we'll delve deeper into making the chatbot more contextually aware, allowing for even more coherent and meaningful conversations. We'll explore techniques for managing conversation history more effectively and ensuring the chatbot remains on topic.
For more tutorials and insights on boosting your developer productivity, be sure to check out slaptijack.com.