
First Steps using LangChain and the ChatGPT API — Hands-on insights

You can create a simple command-line tool to interact with the ChatGPT API. Here's how.

We will write a simple Python script which reads a question from the command line, sends it to the ChatGPT API using LangChain, retrieves the answer, and then stores the result of the interaction in a plain text file.

The generated text file captures the conversation and looks like this:

Langchain Session at 2023-06-02T122046 with gpt-3.5-turbo

2023-06-02T122147:
Q: Which are the most commonly used algorithms in computer science?
A: There are many algorithms used in computer science, but some of the most commonly used ones are:

1. Sorting algorithms: These are used to sort data in a specific order, such as alphabetical or numerical.

2. Search algorithms: These are used to find specific data within a larger set of data.

3. Graph algorithms: These are used to analyze and manipulate graphs, which are used to represent relationships between objects.

4. Dynamic programming algorithms: These are used to solve complex problems by breaking them down into smaller, more manageable sub-problems.

5. Divide and conquer algorithms: These are used to solve problems by breaking them down into smaller, more manageable parts.

6. Greedy algorithms: These are used to solve optimization problems by making the best possible choice at each step.

7. Backtracking algorithms: These are used to solve problems by trying out different solutions and backtracking when a solution is found to be incorrect.

8. Randomized algorithms: These are used to solve problems by using randomization to find a solution.

2023-06-02T122247:
Q: Which are the most popular sorting algorithms in computer science?
A: The most popular sorting algorithms in computer science are:

1. Bubble Sort
2. Selection Sort
3. Insertion Sort
4. Merge Sort
5. Quick Sort
6. Heap Sort
7. Radix Sort
8. Counting Sort
9. Bucket Sort
10. Shell Sort

These algorithms are widely used in various applications and are taught in most computer science courses.

2023-06-02T122428:
Q: Can you show me an implementation of Heap Sort written in Rust?
A: Sure, here's an implementation of Heap Sort in Rust:

```rust
fn heap_sort(arr: &mut [i32]) {
    let len = arr.len();
    for i in (0..len / 2).rev() {
        heapify(arr, len, i);
    }
    for i in (1..len).rev() {
        arr.swap(0, i);
        heapify(arr, i, 0);
    }
}

fn heapify(arr: &mut [i32], n: usize, i: usize) {
    let mut largest = i;
    let left = 2 * i + 1;
    let right = 2 * i + 2;
    if left < n && arr[left] > arr[largest] {
        largest = left;
    }
    if right < n && arr[right] > arr[largest] {
        largest = right;
    }
    if largest != i {
        arr.swap(i, largest);
        heapify(arr, n, largest);
    }
}

fn main() {
    let mut arr = [5, 2, 9, 1, 5, 6];
    heap_sort(&mut arr);
    println!("{:?}", arr);
}
```

In this implementation, `heap_sort` takes a mutable reference to an array of `i32` values and sorts it using the Heap Sort algorithm. The `heapify` function is a helper function that recursively builds a max heap from the given array. The `main` function demonstrates how to use the `heap_sort` function to sort an array.

Prerequisites

Conda, a Python package manager, was used to create a dedicated environment in which the LangChain and OpenAI packages were installed.

To create an environment with Conda we executed this command:

```
conda create --name langchain python=3.10
```

These are the commands used to install the packages:

```
conda install -c conda-forge openai
conda install -c conda-forge langchain
```

If you want to reproduce the environment, I exported a definition file (environment.yml) from the Conda environment I created; you can find it on GitHub.

You will also need to generate an API key on the OpenAI website: https://platform.openai.com/account/api-keys

I ended up using a paid account to be able to use the API. You can find out more about payment options on the OpenAI platform: https://platform.openai.com



The Python Script

The script starts off by importing some standard Python libraries and then the LangChain packages we need for our chat:

```python
import sys
import os
import re
from pathlib import Path
from datetime import datetime

# LangChain
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    HumanMessage
)
```

Then we initialize some configuration parameters, including the OpenAI key and the model we would like to use.

```python
# Configuration
# Put here your API key
os.environ["OPENAI_API_KEY"] = '<key>'

# Put here your model
# Other possible options for chatting are 'gpt-3.5-turbo-0301'.
model_name = "gpt-3.5-turbo"
```

There should be other models coming up soon. I tried gpt-4, but that threw an error because I have no access to the API beta. Here is some extra information on other models.
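If you are unsure which models your key can access, a minimal sketch like the following can help (assuming the pre-1.0 openai Python SDK, which was current when this article was written):

```python
import os
import openai  # pre-1.0 SDK, e.g. openai==0.27.x

openai.api_key = os.environ["OPENAI_API_KEY"]

# List the model IDs your account can access; if 'gpt-4' is
# missing here, calls to it will fail like the one described above.
models = openai.Model.list()
for model in models["data"]:
    print(model["id"])
```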

After this you just create the main object used to interact with the REST APIs of the model you want to use:

```python
# Initialize the chat object.
chat = ChatOpenAI(model_name=model_name, temperature=0)
```

Here we initialize the chat object with the selected model and a temperature of 0, which reduces the randomness of the responses. The default value for temperature is 1. You can find more information about this parameter here.
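To make the effect of this parameter concrete, here is a small sketch (the prompt is just an illustrative example) comparing two chat objects that differ only in their temperature:

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

deterministic_chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
creative_chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=1)

prompt = [HumanMessage(content="Suggest a name for a pet robot.")]

# With temperature=0, repeated calls tend to return the same answer;
# with temperature=1, the answers vary noticeably between calls.
print(deterministic_chat(prompt).content)
print(creative_chat(prompt).content)
```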

The script contains a helper function used to generate a timestamp from the current date and time:

```python
def generate_iso_date():
    current_date = datetime.now()
    return re.sub(r"\.\d+$", "", current_date.isoformat().replace(':', ''))
```
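For example (the exact values are illustrative), the function turns an ISO timestamp into a filename-friendly string by stripping the colons and the fractional seconds:

```python
import re
from datetime import datetime

example = datetime(2023, 6, 2, 12, 20, 46, 123456)
print(example.isoformat())                   # 2023-06-02T12:20:46.123456
print(example.isoformat().replace(':', ''))  # 2023-06-02T122046.123456
print(re.sub(r"\.\d+$", "", example.isoformat().replace(':', '')))  # 2023-06-02T122046
```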

The script also contains a class which is used to capture the content of the chat:

```python
class ChatFile:
    def __init__(self, current_file: Path, model_name: str) -> None:
        self.current_file = current_file
        self.model_name = model_name
        print(f"Writing to file {current_file}")
        with open(self.current_file, 'w') as f:
            f.write(f"Langchain Session at {generate_iso_date()} with {self.model_name}\n\n")

    def store_to_file(self, question: str, answer: str):
        print(f"{answer}")
        with open(self.current_file, 'a') as f:
            f.write(f"{generate_iso_date()}:\nQ: {question}\nA: {answer}\n\n")

# Create a chat file
chat_file = ChatFile(Path(f"{model_name}_{generate_iso_date()}.txt"), model_name)
```

Then comes a simple loop which reads the user input, sends it to the ChatGPT model, receives the answer, displays it on the console and saves it to the local file.

```python
for line in sys.stdin:
    print(f"[{model_name}]", end=">> ")
    question = line.strip()
    if 'q' == question:
        break
    # The LLM takes a prompt as an input and outputs a completion.
    resp = chat([HumanMessage(content=question)])
    answer = resp.content
    chat_file.store_to_file(question, answer)
```
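One quirk of this loop is that the prompt is printed after a line has already been read. A small variant (a sketch, not part of the original script) prints the prompt first by using input() instead of iterating over sys.stdin:

```python
while True:
    try:
        question = input(f"[{model_name}]>> ").strip()
    except EOFError:
        break
    if question == 'q':
        break
    resp = chat([HumanMessage(content=question)])
    chat_file.store_to_file(question, resp.content)
```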

Here is the whole script, assembled from the snippets above:
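```python
import sys
import os
import re
from pathlib import Path
from datetime import datetime

# LangChain
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    HumanMessage
)

# Configuration
# Put here your API key
os.environ["OPENAI_API_KEY"] = '<key>'

# Put here your model
# Other possible options for chatting are 'gpt-3.5-turbo-0301'.
model_name = "gpt-3.5-turbo"

# Initialize the chat object.
chat = ChatOpenAI(model_name=model_name, temperature=0)


def generate_iso_date():
    current_date = datetime.now()
    return re.sub(r"\.\d+$", "", current_date.isoformat().replace(':', ''))


class ChatFile:
    def __init__(self, current_file: Path, model_name: str) -> None:
        self.current_file = current_file
        self.model_name = model_name
        print(f"Writing to file {current_file}")
        with open(self.current_file, 'w') as f:
            f.write(f"Langchain Session at {generate_iso_date()} with {self.model_name}\n\n")

    def store_to_file(self, question: str, answer: str):
        print(f"{answer}")
        with open(self.current_file, 'a') as f:
            f.write(f"{generate_iso_date()}:\nQ: {question}\nA: {answer}\n\n")


# Create a chat file
chat_file = ChatFile(Path(f"{model_name}_{generate_iso_date()}.txt"), model_name)

for line in sys.stdin:
    print(f"[{model_name}]", end=">> ")
    question = line.strip()
    if 'q' == question:
        break
    # The LLM takes a prompt as an input and outputs a completion.
    resp = chat([HumanMessage(content=question)])
    answer = resp.content
    chat_file.store_to_file(question, answer)
```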


Conclusion

LangChain offers easy integration with ChatGPT, which you can use via a simple script like the one shown above. You just need an OpenAI key and, in most cases, a paid OpenAI account.

The GPT-4 models are not yet accessible to the wider public, so for now most people will have to stick to the GPT-3.5 models.


Gil Fernandes, Onepoint Consulting
