```python
from simpleaichat import AIChat

ai = AIChat(system="Write a fancy GitHub README based on the user-provided project name.")
ai("simpleaichat")
```
simpleaichat is a Python package for easily interfacing with chat apps like ChatGPT and GPT-4, with robust features and minimal code complexity. This tool has many features optimized for working with ChatGPT as fast and as cheaply as possible, while remaining far more capable of modern AI tricks than most implementations:
- Create and run chats with only a few lines of code!
- Optimized workflows which minimize the amount of tokens used, reducing costs and latency.
- Run multiple independent chats at once.
- Minimal codebase: no code dives to figure out what’s going on under the hood needed!
- Chat streaming responses and the ability to use tools.
- Async support, including for streaming and tools.
- Ability to create more complex yet clear workflows if needed, such as Agents. (Demo soon!)
- Coming soon: more chat model support (PaLM, Claude)!
Here are some fun, hackable examples of how simpleaichat works:
- Creating a Python coding assistant without any unnecessary accompanying output, allowing 5x faster generation at 1/3rd the cost. (Colab)
- Allowing simpleaichat to provide inline tips following ChatGPT usage guidelines. (Colab)
- Async interface for conducting many chats in the time it takes to receive one AI message. (Colab)
- Create your own Tabletop RPG (TTRPG) setting and campaign by using advanced structured data models. (Colab)
Installation
simpleaichat can be installed from PyPI:
```sh
pip3 install simpleaichat
```
Quick, Fun Demo
You can demo chat apps very quickly with simpleaichat! First, you will need to get an OpenAI API key, and then with one line of code:
```python
from simpleaichat import AIChat

AIChat(api_key="sk-...")
```
And with that, you’ll be thrust directly into an interactive chat!
This AI chat will mimic the behavior of OpenAI’s webapp, but on your local computer!
You can also pass the API key by storing it in an `.env` file with an `OPENAI_API_KEY` field in the working directory (recommended), or by setting the environment variable `OPENAI_API_KEY` directly to the API key.
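The lookup order can be sketched in plain Python. This is an illustrative helper only, not simpleaichat's actual implementation, and the key value is a placeholder:

```python
import os

# Illustrative sketch, not simpleaichat's actual code: an explicitly passed
# key wins, otherwise fall back to the OPENAI_API_KEY environment variable.
def resolve_api_key(explicit_key=None):
    key = explicit_key or os.environ.get("OPENAI_API_KEY")
    if not key:
        raise ValueError("No OpenAI API key found.")
    return key

os.environ["OPENAI_API_KEY"] = "sk-placeholder"  # normally set via .env or your shell
print(resolve_api_key())  # prints the key from the environment
```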
But what about creating your own custom conversations? That’s where things get fun. Just input whatever person, place or thing, fictional or nonfictional, that you want to chat with!
```python
AIChat("GLaDOS")  # assuming API key loaded via methods above
```
But that’s not all! You can customize exactly how they behave too with additional commands!
```python
AIChat("GLaDOS", "Speak in the style of a Seinfeld monologue")
AIChat("Ronald McDonald", "Speak using only emoji")
```
Need some socialization immediately? Once simpleaichat is installed, you can also start these chats directly from the command line!
```sh
simpleaichat
simpleaichat "GLaDOS"
simpleaichat "GLaDOS" "Speak in the style of a Seinfeld monologue"
```
Building AI-based Apps
The trick to working with the new chat-based apps, one that wasn't readily available with earlier iterations of GPT-3, is the system prompt: a different class of prompt that guides the AI's behavior throughout the entire conversation. In fact, the chat demos above use system prompt tricks behind the scenes! OpenAI has also released an official guide to system prompt best practices for building AI apps.
For developers, you can instantiate a programmatic instance of `AIChat` by explicitly specifying a system prompt, or by disabling the console.
```python
ai = AIChat(system="You are a helpful assistant.")
ai = AIChat(console=False)  # same as above
```
You can also pass in a `model` parameter, such as `model="gpt-4"` if you have access to GPT-4, or `model="gpt-3.5-turbo-16k"` for a larger-context-window ChatGPT.
You can then feed the new `ai` object with user input, and it will return and save the response from ChatGPT:
```python
response = ai("What is the capital of California?")
print(response)
```
```
The capital of California is Sacramento.
```
Alternatively, you can stream responses by token with a generator if the text generation itself is too slow:
```python
for chunk in ai.stream("What is the capital of California?", params={"max_tokens": 5}):
    response_td = chunk["response"]  # dict contains "delta" for the new token and "response" for the full text
    print(response_td)
```
```
The
The capital
The capital of
The capital of California
The capital of California is
```
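The relationship between `delta` and `response` in each chunk can be illustrated without calling the API at all. Here `fake_stream` is a mock stand-in for `ai.stream()`, not part of simpleaichat:

```python
# fake_stream mimics the chunk format described above: "delta" is the newly
# generated token and "response" is the full text accumulated so far.
# It is a stand-in for ai.stream(), not part of simpleaichat.
def fake_stream(tokens):
    response = ""
    for token in tokens:
        response += token
        yield {"delta": token, "response": response}

chunks = list(fake_stream(["The", " capital", " of", " California", " is"]))
for chunk in chunks:
    print(chunk["response"])  # matches the incremental output shown above
```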
Further calls to the `ai` object will continue the chat, automatically incorporating previous information from the conversation.
```python
response = ai("When was it founded?")
print(response)
```
```
Sacramento was founded on February 27, 1850.
```
You can also save chat sessions (as CSV or JSON) and load them later. The API key is not saved, so you will have to provide it again when loading.
```python
ai.save_session()  # CSV, will only save messages
ai.save_session(format="json", minify=True)  # JSON
ai.load_session("my.csv")
ai.load_session("my.json")
```
Functions
A large number of popular venture-capital-funded ChatGPT apps don’t actually use the “chat” part of the model. Instead, they just use the system prompt/first user prompt as a form of natural language programming. You can emulate this behavior by passing a new system prompt when generating text, and not saving the resulting messages.
The `AIChat` class is a manager of chat sessions, which means you can have multiple independent chats or functions happening at once! The examples above use a default session, but you can create new ones by specifying an `id` when calling `ai`.
```python
json = '{"title": "An array of integers.", "array": [-1, 0, 1]}'

functions = [
    "Format the user-provided JSON as YAML.",
    "Write a limerick based on the user-provided JSON.",
    "Translate the user-provided JSON from English to French.",
]

params = {"temperature": 0.0, "max_tokens": 100}  # a temperature of 0.0 is deterministic

# We namespace the function by `id` so it doesn't affect other chats.
# Settings set during session creation will apply to all generations from the session,
# but you can change them per-generation, as is the case with the `system` prompt here.
ai = AIChat(id="function", params=params, save_messages=False)
for function in functions:
    output = ai(json, id="function", system=function)
    print(output)
```
```
title: "An array of integers."
array:
- -1
- 0
- 1
```
```
An array of integers so neat,
With values that can't be beat,
From negative to positive one,
It's a range that's quite fun,
This JSON is really quite sweet!
```
```
{"titre": "Un tableau d'entiers.", "tableau": [-1, 0, 1]}
```
Newer versions of ChatGPT also support "function calling," but the real benefit of that feature is the ability for ChatGPT to support structured input and/or output, which opens up a wide variety of applications! simpleaichat streamlines the workflow to allow you to just pass an `input_schema` and/or an `output_schema`.
You can construct a schema using a pydantic BaseModel.
```python
from pydantic import BaseModel, Field

ai = AIChat(
    console=False,
    save_messages=False,  # with schema I/O, messages are never saved
    model="gpt-3.5-turbo-0613",
    params={"temperature": 0.0},
)

class get_event_metadata(BaseModel):
    """Event information"""

    description: str = Field(description="Description of event")
    city: str = Field(description="City where event occurred")
    year: int = Field(description="Year when event occurred")
    month: str = Field(description="Month when event occurred")

# returns a dict, with keys ordered as in the schema
ai("First iPhone announcement", output_schema=get_event_metadata)
```
```
{'description': 'The first iPhone was announced by Apple Inc.',
 'city': 'San Francisco',
 'year': 2007,
 'month': 'January'}
```
See the TTRPG Generator Notebook for a more elaborate demonstration of schema capabilities.
Tools
One of the most recent aspects of interacting with ChatGPT is the ability for the model to use “tools.” As popularized by LangChain, tools allow the model to decide when to use custom functions, which can extend beyond just the chat AI itself, for example retrieving recent information from the internet not present in the chat AI’s training data. This workflow is analogous to ChatGPT Plugins.
Parsing the model output to invoke tools typically requires a number of shenanigans, but simpleaichat uses a neat trick to make it fast and reliable! Additionally, the specified tools return a `context` for ChatGPT to draw from for its final response, and the tools you specify can return a dictionary which you can also populate with arbitrary metadata for debugging and postprocessing. Each generation returns a dictionary with the `response` and the `tool` function used, which can be used to set up workflows akin to LangChain-style Agents, e.g. recursively feeding input to the model until it determines it does not need to use any more tools.
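That recursive loop can be sketched around the returned dictionary. This is entirely hypothetical scaffolding: `make_mock_model` stands in for a real `ai(input, tools=[...])` call and simply returns dicts in the shape described above:

```python
# Hypothetical sketch of a LangChain-style agent loop: keep calling the model
# until it stops selecting a tool. make_mock_model is a stand-in for a real
# ai(input, tools=[...]) call; it returns {"response": ..., "tool": ...}.
def make_mock_model():
    calls = {"n": 0}
    def generate(prompt):
        calls["n"] += 1
        if calls["n"] < 3:
            return {"response": f"tool output for: {prompt}", "tool": "search"}
        return {"response": "final answer", "tool": None}
    return generate

def run_agent(generate, prompt, max_steps=5):
    result = {"response": prompt, "tool": None}
    for _ in range(max_steps):
        result = generate(prompt)
        if result["tool"] is None:  # model decided no more tools are needed
            break
        prompt = result["response"]  # feed the tool output back into the model
    return result["response"]

print(run_agent(make_mock_model(), "San Francisco tourist attractions"))  # final answer
```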
You will need to specify functions with docstrings which provide hints for the AI to select them:
```python
from simpleaichat.utils import wikipedia_search, wikipedia_search_lookup

# This uses the Wikipedia Search API.
# Results from it are nondeterministic, your mileage will vary.
def search(query):
    """Search the internet."""
    wiki_matches = wikipedia_search(query, n=3)
    return {"context": ", ".join(wiki_matches), "titles": wiki_matches}

def lookup(query):
    """Lookup more information about a topic."""
    page = wikipedia_search_lookup(query, sentences=3)
    return page

params = {"temperature": 0.0, "max_tokens": 100}
ai = AIChat(params=params, console=False)

ai("San Francisco tourist attractions", tools=[search, lookup])
```
```
{'context': "Fisherman's Wharf, San Francisco, Tourist attractions in the United States, Lombard Street (San Francisco)",
 'titles': ["Fisherman's Wharf, San Francisco",
  'Tourist attractions in the United States',
  'Lombard Street (San Francisco)'],
 'tool': 'search',
 're
```