Slash Your LLM API Costs by 10x
Quick Install
pip install gptcache
🚀 What is GPTCache?
ChatGPT and various large language models (LLMs) boast incredible versatility, enabling the development of a wide range of applications. However, as your application grows in popularity and encounters higher traffic levels, the expenses related to LLM API calls can become substantial. Additionally, LLM services might exhibit slow response times, especially when dealing with a significant number of requests.
To tackle this challenge, we have created GPTCache, a project dedicated to building a semantic cache for storing LLM responses.
😊 Quick Start
Note:
- You can quickly try GPTCache and put it into a production environment without extensive development work. However, please note that the repository is still under heavy development.
- By default, only a limited number of libraries are installed to support the basic cache functionalities. When you need to use additional features, the related libraries will be automatically installed.
- Make sure that the Python version is 3.8.1 or higher. Check it with:
python --version
- If you encounter issues installing a library due to a low pip version, run:
python -m pip install --upgrade pip
dev install
```bash
# clone GPTCache repo
git clone -b dev https://github.com/zilliztech/GPTCache.git
cd GPTCache

# install the repo
pip install -r requirements.txt
python setup.py install
```
example usage
These examples will help you understand how to use exact and similar matching with caching. You can also run the examples on Colab. For more examples, refer to the Bootcamp.
Before running the examples, make sure the OPENAI_API_KEY environment variable is set by executing `echo $OPENAI_API_KEY`.

If it is not already set, set it with `export OPENAI_API_KEY=YOUR_API_KEY` on Unix/Linux/macOS systems, or `set OPENAI_API_KEY=YOUR_API_KEY` on Windows systems.

Note that this method only takes effect for the current shell session. For a permanent setting, you will need to modify the environment variable configuration file; for instance, on a Mac you can modify the file located at /etc/profile.
OpenAI API original usage
```python
import os
import time

import openai


def response_text(openai_resp):
    return openai_resp['choices'][0]['message']['content']


question = "what's chatgpt"

# OpenAI API original usage
openai.api_key = os.getenv("OPENAI_API_KEY")
start_time = time.time()
response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[
        {
            'role': 'user',
            'content': question
        }
    ],
)
print(f'Question: {question}')
print("Time consuming: {:.2f}s".format(time.time() - start_time))
print(f'Answer: {response_text(response)}\n')
```
OpenAI API + GPTCache, exact match cache
If you ask ChatGPT the exact same question twice, the answer to the second one will be obtained from the cache without requesting ChatGPT again.
```python
import time


def response_text(openai_resp):
    return openai_resp['choices'][0]['message']['content']


print("Cache loading.....")

# To use GPTCache, that's all you need
# -------------------------------------------------
from gptcache import cache
from gptcache.adapter import openai

cache.init()
cache.set_openai_key()
# -------------------------------------------------

question = "what's github"
for _ in range(2):
    start_time = time.time()
    response = openai.ChatCompletion.create(
        model='gpt-3.5-turbo',
        messages=[
            {
                'role': 'user',
                'content': question
            }
        ],
    )
    print(f'Question: {question}')
    print("Time consuming: {:.2f}s".format(time.time() - start_time))
    print(f'Answer: {response_text(response)}\n')
```
OpenAI API + GPTCache, similar search cache
After obtaining an answer from ChatGPT in response to several similar questions, the answers to subsequent questions can be retrieved from the cache without the need to request ChatGPT again.
```python
import time


def response_text(openai_resp):
    return openai_resp['choices'][0]['message']['content']


from gptcache import cache
from gptcache.adapter import openai
from gptcache.embedding import Onnx
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

print("Cache loading.....")

onnx = Onnx()
data_manager = get_data_manager(CacheBase("sqlite"), VectorBase("faiss", dimension=onnx.dimension))
cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
)
cache.set_openai_key()

questions = [
    "what's github",
    "can you explain what GitHub is",
    "can you tell me more about GitHub",
    "what is the purpose of GitHub"
]

for question in questions:
    start_time = time.time()
    response = openai.ChatCompletion.create(
        model='gpt-3.5-turbo',
        messages=[
            {
                'role': 'user',
                'content': question
            }
        ],
    )
    print(f'Question: {question}')
    print("Time consuming: {:.2f}s".format(time.time() - start_time))
    print(f'Answer: {response_text(response)}\n')
```
OpenAI API + GPTCache, use temperature
You can always pass a `temperature` parameter while requesting the API service or model.

The range of `temperature` is [0, 2]; the default value is 0.0. A higher temperature means a higher chance of skipping the cache search and sending the request directly to the large model. When `temperature` is 2, the cache is always skipped and the request goes straight to the large model. When `temperature` is 0, the cache is searched before the large model service is called.

The default `post_process_messages_func` is `temperature_softmax`. In this case, refer to the API reference to learn how `temperature` affects the output.
```python
import time

from gptcache import cache, Config
from gptcache.manager import manager_factory
from gptcache.embedding import Onnx
from gptcache.processor.post import temperature_softmax
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation
from gptcache.adapter import openai

cache.set_openai_key()
onnx = Onnx()
data_manager = manager_factory("sqlite,faiss", vector_params={"dimension": onnx.dimension})

cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
    post_process_messages_func=temperature_softmax
)
# cache.config = Config(similarity_threshold=0.2)

question = "what's github"

for _ in range(3):
    start = time.time()
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=1.0,  # Change temperature here
        messages=[{
            "role": "user",
            "content": question
        }],
    )
    print("Time elapsed:", round(time.time() - start, 3))
    print("Answer:", response["choices"][0]["message"]["content"])
```
To use GPTCache, only the following lines of code are required, and there is no need to modify any other existing code.
```python
from gptcache import cache
from gptcache.adapter import openai

cache.init()
cache.set_openai_key()
```
More Docs:
- Usage, how to use GPTCache better
- Features, all features currently supported by the cache
- Examples, learn how to better customize your cache
🎓 Bootcamp
- GPTCache with LangChain
- GPTCache with Llama_index
- GPTCache with OpenAI
- GPTCache with Replicate
- GPTCache with Temperature Param
😎 What can this help with?
GPTCache offers the following primary benefits:
- Decreased expenses: Most LLM services charge fees based on a combination of the number of requests and token count. GPTCache effectively minimizes your expenses by caching query results, which in turn reduces the number of requests and tokens sent to the LLM service. As a result, you can enjoy a more cost-efficient experience when using the service.
- Enhanced performance: LLMs employ generative AI algorithms to generate responses in real-time, a process that can sometimes be time-consuming. However, when a similar query is cached, the response time significantly improves, as the result is fetched directly from the cache, eliminating the need to interact with the LLM service. In most situations, GPTCache can also provide superior query throughput compared to standard LLM services.
- Adaptable development and testing environment: As a developer working on LLM applications, you’re aware that connecting to LLM APIs is generally necessary, and comprehensive testing of your application is crucial before moving it to a production environment. GPTCache provides an interface that mirrors LLM APIs and accommodates storage of both LLM-generated and mocked data. This feature enables you to effortlessly develop and test your application, eliminating the need to connect to the LLM service (see the sketch after this list).
- Improved scalability and availability: LLM services frequently enforce rate limits, which are constraints that APIs place on the number of times a user or client can access the server within a given timeframe. Hitting a rate limit means that additional requests will be blocked until a certain period has elapsed, leading to a service outage. With GPTCache, you can easily scale to accommodate an increasing volume of queries, ensuring consistent performance as your application’s user base expands.
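As a concrete illustration of the development and testing point above, the cache can be seeded with mocked answers so an application can be exercised without touching the LLM service. The sketch below assumes the `put`/`get` helpers in `gptcache.adapter.api` and the `get_prompt` pre-processor behave as described in the project docs; treat it as a starting point rather than a definitive recipe.

```python
# Minimal sketch: seed the cache with mocked data for offline testing.
# Assumes gptcache.adapter.api exposes put/get and gptcache.processor.pre
# exposes get_prompt; check the API reference for the current signatures.
from gptcache import cache
from gptcache.adapter.api import put, get
from gptcache.processor.pre import get_prompt

cache.init(pre_embedding_func=get_prompt)

# Store a mocked response under a known prompt.
put("what is github", "GitHub is a code hosting platform. (mocked answer)")

# The application under test now reads the answer straight from the cache,
# without any call to the LLM service.
print(get("what is github"))
```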
🤔 How does it work?
Online services often exhibit data locality, with users frequently accessing popular or trending content. Cache systems take advantage of this behavior by storing commonly accessed data, which in turn reduces data retrieval time, improves response times, and eases the burden on backend servers. Traditional cache systems typically utilize an exact match between a new query and a cached query to determine if the requested content is available in the cache before fetching the data.
However, using an exact match approach for LLM caches is less effective due to the complexity and variability of LLM queries, resulting in a low cache hit rate. To address this issue, GPTCache adopts alternative strategies like semantic caching. Semantic caching identifies and stores similar or related queries, thereby increasing cache hit probability and enhancing overall caching efficiency.
GPTCache employs embedding algorithms to convert queries into embeddings and uses a vector store for similarity search on these embeddings. This process allows GPTCache to identify and retrieve similar or related queries from the cache storage, as illustrated in the Modules section.
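Conceptually, the lookup works roughly like the toy sketch below. The data, the `embed` stand-in, and the threshold are invented for illustration; in GPTCache these steps are handled by the embedding, vector store, and similarity evaluation modules described later.

```python
# Toy illustration of a semantic cache lookup (not GPTCache internals):
# embed the query, find the most similar cached embedding, and reuse the
# stored answer when the similarity clears a threshold.
import numpy as np

cached = []  # list of (embedding, question, answer) tuples

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for a real embedding model (e.g. ONNX).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.normal(size=8)
    return vec / np.linalg.norm(vec)

def store(question: str, answer: str) -> None:
    cached.append((embed(question), question, answer))

def lookup(question: str, threshold: float = 0.8):
    query = embed(question)
    best_score, best_answer = -1.0, None
    for emb, _, answer in cached:
        score = float(np.dot(query, emb))  # cosine similarity of unit vectors
        if score > best_score:
            best_score, best_answer = score, answer
    # Cache hit only if the closest cached question is similar enough.
    return best_answer if best_score >= threshold else None
```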
Featuring a modular design, GPTCache makes it easy for users to customize their own semantic cache. The system offers various implementations for each module, and users can even develop their own implementations to suit their specific needs.
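For example, a user-supplied embedding function can be passed to `cache.init` in place of the built-in ones. In the hedged sketch below, `my_embedding` is a hypothetical placeholder; `embedding_func`, `data_manager`, and `similarity_evaluation` are the same `cache.init` parameters used in the examples above.

```python
# Sketch: plugging a custom embedding implementation into GPTCache.
# my_embedding is a placeholder; swap in your own model here.
import numpy as np

from gptcache import cache
from gptcache.manager import manager_factory
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

DIMENSION = 128  # must match the vector store dimension below

def my_embedding(data, **kwargs):
    # Replace this with a call to your own embedding model.
    rng = np.random.default_rng(abs(hash(data)) % (2**32))
    return rng.normal(size=DIMENSION).astype("float32")

data_manager = manager_factory("sqlite,faiss", vector_params={"dimension": DIMENSION})
cache.init(
    embedding_func=my_embedding,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
)
```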
In a semantic cache, you may encounter false positives during cache hits and false negatives during cache misses. GPTCache offers three metrics to gauge its performance, which are helpful for developers to optimize their caching systems (a small worked example follows the list):
- Hit Ratio: This metric quantifies the cache’s ability to fulfill content requests successfully, compared to the total number of requests it receives. A higher hit ratio indicates a more effective cache.
- Latency: This metric measures the time it takes for a query to be processed and the corresponding data to be retrieved from the cache. Lower latency signifies a more efficient and responsive caching system.
- Recall: This metric represents the proportion of queries served by the cache out of the total number of queries that should have been served by the cache. Higher recall percentages indicate that the cache is effectively serving the appropriate content.
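As a back-of-the-envelope illustration of how these metrics relate, consider the toy counters below; the variable names and numbers are invented for the example and are not exposed by GPTCache itself.

```python
# Hypothetical counters for one benchmark run (illustrative numbers only).
total_requests = 1000        # all queries that reached the cache layer
cache_hits = 640             # queries answered from the cache
should_have_hit = 800        # queries for which a suitable answer was cached
total_cache_seconds = 12.8   # time spent serving the cached answers

hit_ratio = cache_hits / total_requests               # 0.64
recall = cache_hits / should_have_hit                 # 0.80
avg_cache_latency = total_cache_seconds / cache_hits  # 0.02 s per hit

print(f"hit ratio: {hit_ratio:.2f}, recall: {recall:.2f}, "
      f"avg cache latency: {avg_cache_latency * 1000:.0f} ms")
```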
A sample benchmark is included to help users get started with assessing the performance of their semantic cache.
🤗 Modules
- LLM Adapter: The LLM Adapter is designed to integrate different LLM models by unifying their APIs and request protocols. GPTCache offers a standardized interface for this purpose, with current support for ChatGPT integration.
- Multimodal Adapter (experimental): The Multimodal Adapter is designed to integrate different large multimodal models by unifying their APIs and request protocols.