ts_server is a web server providing a REST API to large
language models. They can be used, for example, for text completion,
question answering, classification, chat, translation, image
generation, …
It has the following characteristics:
- Supports many Transformer variants (GPT-J, GPT-NeoX, GPT-Neo, OPT, Fairseq GPT, M2M100, CodeGen, GPT2, T5, RWKV, LLAMA) and Stable Diffusion.
- Integrated REST JSON API for text completion, translation and image generation. It is used by textsynth.com.
- Very high performance for small and large batches on CPU and GPU.
- Efficient custom 8 bit and 4 bit quantization.
- Larger models work optimally on lower cost GPUs (e.g. RTX 3090, RTX A6000) thanks to efficient quantization.
- Everything is included in a single binary. Very few external dependencies (Python is not needed) so installation is easy on most Linux distributions.
- Uses the LibNC library for simple tensor manipulation using the C language.
- Simple command line tools (ts_test, ts_sd) are provided to test the various models.
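The REST JSON API can be driven from any HTTP client. The sketch below builds a completion request in Python; the endpoint path and JSON field names follow the public textsynth.com v1 completions API, while the host, port, and engine name are placeholder assumptions — adjust them to your deployment and to the server's own documentation.

```python
import json
import urllib.request

def completion_request(prompt, max_tokens=100,
                       host="http://localhost:8080",
                       engine="gptj_6B"):
    """Build (but do not send) a text-completion request for a
    TextSynth-style endpoint: POST /v1/engines/{engine}/completions."""
    payload = json.dumps({"prompt": prompt, "max_tokens": max_tokens})
    return urllib.request.Request(
        f"{host}/v1/engines/{engine}/completions",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"})

# To actually send it (requires a running ts_server instance):
#   with urllib.request.urlopen(completion_request("Hello")) as r:
#       print(json.loads(r.read())["text"])
```

The sending step is left commented out so the snippet stands alone without a live server.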
The CPU version is released as binary code under the MIT license. The GPU version is commercial software; please contact for the exact terms.
Download
- Linux version ts_server-2023-03-15.tar.gz (Changelog).
Documentation
Benchmarks
- CPU: the speed is measured on an AMD Epyc 7313 CPU using 8 threads (ts_test -T 8). 100 tokens are generated.
- GPU: the speed is measured on an RTX A6000 GPU. 100 tokens are generated.
Model | CPU speed (tokens/s) | GPU speed (tokens/s)
---|---|---
gptj_6B_q8 | 12.6 | 84.2
gptneox_20B_q4 | 4.3 | 40.8
gptneox_20B_q8 | 3.5 | 27.4
llama_65B_q4 | 1.4 | 13.8
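One convenient way to read the benchmark figures above is as per-token latencies and GPU-over-CPU speedups. The snippet below (throughputs copied from the table) does that conversion:

```python
# (cpu, gpu) throughput in tokens/s, from the benchmark table above
bench = {
    "gptj_6B_q8":     (12.6, 84.2),
    "gptneox_20B_q4": (4.3, 40.8),
    "gptneox_20B_q8": (3.5, 27.4),
    "llama_65B_q4":   (1.4, 13.8),
}

for model, (cpu, gpu) in bench.items():
    speedup = gpu / cpu            # how much faster the GPU runs
    latency_ms = 1000.0 / gpu      # per-token generation latency on GPU
    print(f"{model}: {speedup:.1f}x GPU speedup, {latency_ms:.1f} ms/token")
```

For instance, even the 65B-parameter 4-bit model stays under 100 ms per generated token on the GPU.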
Available Models
We provide here the model files that can be used with the TextSynth
Server. Each model was evaluated with the lm-evaluation-harness,
running through the TextSynth server on an RTX A6000 GPU.
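The rightmost column of the evaluation table below is, to the precision shown, the arithmetic mean of the five task scores. A quick Python check (row values copied from the table; which task each column corresponds to is my reading of typical lm-evaluation-harness output and should be treated as an assumption):

```python
def average_score(scores):
    """Arithmetic mean, rounded to one decimal like the table."""
    return round(sum(scores) / len(scores), 1)

# bloom_560M row: five task scores -> reported average 44.7%
print(average_score([36.8, 35.8, 51.4, 63.7, 36.0]))  # 44.7
# llama_30B_q4 row: five task scores -> reported average 79.2%
print(average_score([77.5, 82.4, 75.7, 80.2, 80.2]))  # 79.2
```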
Language Models:
Model | Memory (GB) | lambada (ppl) | lambada (acc) | hellaswag (acc_norm) | winogrande (acc) | piqa (acc) | coqa (f1) | Average
---|---|---|---|---|---|---|---|---
bloom_560M | 2 | 29.176 | 36.8% | 35.8% | 51.4% | 63.7% | 36.0% | 44.7%
codegen_6B_mono_q4 | 5 | 69.409 | 28.0% | 35.7% | 51.1% | 60.2% | 38.0% | 42.6%
codegen_6B_mono_q8 | 8 | 67.262 | 28.1% | 35.8% | 50.8% | 60.1% | 39.1% | 42.8%
fairseq_gpt_13B | 27 | 3.567 | 71.9% | 72.7% | 67.5% | 77.6% | 70.1% | 71.9%
fairseq_gpt_13B_bf4 | 9 | 3.646 | 71.2% | 72.5% | 67.6% | 77.4% | 70.6% | 71.9%
fairseq_gpt_13B_bf8 | 15 | 3.565 | 71.8% | 72.7% | 67.2% | 77.7% | 70.0% | 71.9%
flan_t5_base | 1 | 12.891 | 54.2% | 36.5% | 54.7% | 65.8% | 62.1% | 54.7%
flan_t5_base_q8 | 1 | 13.098 | 54.2% | 36.4% | 54.2% | 65.7% | 61.8% | 54.5%
flan_t5_small | 1 | 23.343 | 46.7% | 29.2% | 50.0% | 62.4% | 47.9% | 47.2%
flan_t5_small_q8 | 1 | 23.449 | 46.7% | 29.2% | 49.7% | 62.4% | 48.2% | 47.2%
flan_t5_xxl_q4 | 7 | 3.010 | 77.7% | 71.5% | 73.4% | 77.6% | 71.8% | 74.4%
flan_t5_xxl_q8 | 13 | 3.049 | 77.8% | 72.1% | 75.1% | 77.8% | 73.1% | 75.2%
gpt2_117M | 1 | 40.110 | 32.9% | 31.1% | 52.1% | 62.9% | 27.3% | 41.3%
gpt2_1558M | 4 | 10.637 | 51.3% | 50.8% | 58.4% | 70.8% | 53.2% | 56.9%
gpt2_1558M_q8 | 2 | 10.655 | 51.2% | 50.8% | 58.6% | 70.8% | 53.2% | 56.9%
gpt2_345M | 1 | 18.272 | 43.5% | 39.4% | 53.3% | 67.7% | 43.1% | 49.4%
gpt2_345M_q8 | 1 | 18.452 | 43.1% | 39.4% | 53.1% | 67.5% | 41.9% | 49.0%
gpt2_774M | 2 | 12.966 | 47.8% | 45.4% | 55.6% | 70.4% | 48.5% | 53.5%
gpt2_774M_q8 | 1 | 12.928 | 47.9% | 45.4% | 55.3% | 70.3% | 48.2% | 53.4%
gptj_6B | 13 | 4.124 | 69.0% | 66.2% | 64.8% | 75.5% | 66.9% | 68.5%
gptj_6B_q4 | 4 | 4.153 | 68.9% | 65.7% | 63.9% | 74.4% | 67.0% | 68.0%
gptj_6B_q8 | 7 | 4.122 | 69.1% | 66.2% | 64.4% | 75.4% | 66.4% | 68.3%
gptneox_20B | 43 | 3.657 | 72.6% | 71.4% | 65.5% | 77.5% | 73.3% | 72.0%
gptneox_20B_q4 | 13 | 3.711 | 72.0% | 69.3% | 64.8% | 76.7% | 70.8% | 70.7%
gptneox_20B_q8 | 23 | 3.659 | 72.6% | 71.3% | 65.8% | 77.3% | 72.9% | 72.0%
llama_13B_q4 | 8 | 3.130 | 77.1% | 78.6% | 72.2% | 78.3% | 77.8% | 76.8%
llama_13B_q8 | 15 | 3.178 | 76.5% | 79.1% | 73.2% | 79.1% | 77.1% | 77.0%
llama_30B_q4 | 20 | 2.877 | 77.5% | 82.4% | 75.7% | 80.2% | 80.2% | 79.2%
llama_30B_q8 |