Run 🤗 Transformers in your browser!
Demo
Don’t believe us? Play around with some of these models:
Notes:
- Clicking Generate for the first time will download the corresponding model from the HuggingFace Hub. All subsequent requests will use the cached model.
- For more information about the different parameters, check out HuggingFace's guide to text generation.
Getting Started
Installation
If you use npm, you can install it using:
npm i @xenova/transformers
Alternatively, you can use it in a <script> tag from a CDN, for example:
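A minimal sketch, assuming the package is served by jsDelivr under its npm name (the exact bundle filename may differ between versions):

<script src="https://cdn.jsdelivr.net/npm/@xenova/transformers/dist/transformers.min.js"></script>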
Basic example
It’s super easy to translate from existing code!
Python (original):

from transformers import pipeline

# Allocate a pipeline for sentiment-analysis
pipe = pipeline('sentiment-analysis')
out = pipe('I love transformers!')
# [{'label': 'POSITIVE', 'score': 0.999806941}]
JavaScript (ours):

import { pipeline } from "@xenova/transformers";

// Allocate a pipeline for sentiment-analysis
let pipe = await pipeline('sentiment-analysis');
let out = await pipe('I love transformers!');
// [{'label': 'POSITIVE', 'score': 0.999817686}]
As with the Python library, you can use a different model by providing its name as the second argument to the pipeline function. For example:
// Use a different model for sentiment-analysis
let pipe = await pipeline('sentiment-analysis', 'nlptown/bert-base-multilingual-uncased-sentiment');
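As a sketch of what this changes: the nlptown model predicts a 1-to-5 star rating rather than a binary POSITIVE/NEGATIVE label (the score below is illustrative, not an actual output):

// Multilingual input, star-rating output (illustrative values)
let out = await pipe("J'adore les transformers!");
// [{'label': '5 stars', 'score': 0.91}]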
Custom setup
By default, Transformers.js uses hosted models and precompiled WASM binaries, which should work out-of-the-box. You can override this behaviour as follows:
import { env } from "@xenova/transformers";
// Use a different host for models.
// - `remoteURL` defaults to use the HuggingFace Hub
// - `localURL` defaults to '/models/onnx/quantized/'
env.remoteURL = 'https://www.example.com/';
env.localURL = '/path/to/models/';
// Set whether to use remote or local models. Defaults to true.
// - If true, use the path specified by `env.remoteURL`.
// - If false, use the path specified by `env.localURL`.
env.remoteModels = false;
// Set parent path of .wasm files. Defaults to use a CDN.
env.onnx.wasm.wasmPaths = '/path/to/files/';
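Putting this together, a minimal sketch of a fully local setup, assuming you have copied the model files and .wasm binaries to the placeholder paths below:

import { env, pipeline } from "@xenova/transformers";

// Placeholder paths: serve models and WASM binaries from your own server.
env.localURL = '/models/';
env.remoteModels = false;
env.onnx.wasm.wasmPaths = '/wasm/';

// The pipeline now loads everything from the local paths configured above.
let pipe = await pipeline('sentiment-analysis');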
Usage
We currently support the following tasks and models, which can be used with the pipeline function.
- sentiment-analysis (a.k.a. text-classification)
  Supported models: distilbert-base-uncased-finetuned-sst-2-english, nlptown/bert-base-multilingual-uncased-sentiment, distilgpt2.
  For more information, check out the Text Classification docs.
- question-answering
  Supported models: distilbert-base-cased-distilled-squad, distilbert-base-uncased-distilled-squad.
  For more information, check out the Question Answering docs.
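For example, a question-answering pipeline can be used as sketched below, passing the question and context as the two call arguments (the answer and score shown are illustrative, not actual outputs):

import { pipeline } from "@xenova/transformers";

// Allocate a pipeline for question-answering
let answerer = await pipeline('question-answering');

let question = 'Who was Jim Henson?';
let context = 'Jim Henson was a nice puppet.';
let output = await answerer(question, context);
// {'answer': 'a nice puppet', 'score': 0.57} (illustrative)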