July 20, 2023 by Anders Hejlsberg, Steve Lucco, Daniel Rosenwasser, Pierce Boggan, Umesh Madan, Mike Hopcroft, and Gayathri Chandrasekaran
In the last few months, we’ve seen a rush of excitement around the newest wave of large language models.
While chat assistants have been the most direct application, there’s a big question around how to best integrate these models into existing app interfaces.
In other words, how do we augment traditional UI with natural language interfaces?
How do we use AI to take a user request and turn it into something our apps can operate on?
And how do we make sure our apps are safe, and doing work that developers and users alike can trust?
Today we’re releasing TypeChat, an experimental library that aims to answer these questions.
It uses the type definitions in your codebase to retrieve structured AI responses that are type-safe.
You can get up and running with TypeChat today by running
npm install typechat
and hooking it up with any language model to work with your app.
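To give a sense of what that hooking-up step looks like end to end, here's a minimal sketch using the createLanguageModel and createJsonTranslator helpers that TypeChat exposes; the orderSchema.ts file and its Order type are hypothetical stand-ins for whatever types your app already defines.

import fs from "fs";
import path from "path";
import { createJsonTranslator, createLanguageModel } from "typechat";
// Hypothetical schema module for this sketch; it exports an Order type
// describing the JSON shape we want back from the model.
import { Order } from "./orderSchema";

async function main() {
    // createLanguageModel picks up OpenAI / Azure OpenAI settings from environment variables.
    const model = createLanguageModel(process.env);

    // The schema is sent to the model as plain text, so we read the schema source file itself.
    const schema = fs.readFileSync(path.join(__dirname, "orderSchema.ts"), "utf8");
    const translator = createJsonTranslator<Order>(model, schema, "Order");

    const response = await translator.translate("Could I get a blueberry muffin and a grande latte?");
    if (response.success) {
        console.log(JSON.stringify(response.data, undefined, 2));
    } else {
        console.log(response.message);
    }
}

main();

The key point is that the same schema source is both what the prompt is built from and what the response is checked against.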
But let’s first quickly explore why TypeChat exists.
Pampering and Parsing
The current wave of LLMs defaults to conversational natural language — the languages humans communicate in, like English.
Parsing natural language is an extremely difficult task, no matter how much you pamper a prompt with rules like “respond in the form of a bulleted list”.
Natural language might have structure, but it’s hard for typical software to reconstruct it from raw text.
Surprisingly, we can ask LLMs to respond in the form of JSON, and they generally respond with something sensible!
User:
Translate the following request into JSON.
Could I get a blueberry muffin and a grande latte?
Respond only in JSON like the following:
{ "items": [ { "name": "croissant", "quantity": 2 }, { "name": "latte", "quantity": 1, "size": "tall" } ] }
ChatBot:
{ "items": [ { "name": "blueberry muffin", "quantity": 1 }, { "name": "latte", "quantity": 1, "size": "grande" } ] }
This is good — though this example shows the best-case response.
While examples can help guide structure, they don’t exhaustively define what an AI should return, and they don’t provide anything we can validate against.
Just Add Types!
Luckily types do precisely that.
What we’ve found is that because LLMs have seen so many type definitions in the wild, types also act as a great guide for how an AI should respond.
Because we’re typically working with JSON — JavaScript Object Notation — and because it’s very near and dear to our hearts, we’ve been using TypeScript types in our prompts.
User:
Translate the following request into JSON.
Could I get a blueberry muffin and a grande latte?
Respond only in JSON that satisfies the Response type:

type Response = {
    items: Item[];
};

type Item = {
    name: string;
    quantity: number;
    size?: string;
    notes?: string;
};
ChatBot:
{ "items": [ { "name": "blueberry muffin", "quantity": 1 }, { "name": "latte", "quantity": 1, "size": "grande" } ] }
This is pretty great!
TypeScript has shown that it’s well-suited to precisely describe JSON.
But what happens when a language model stumbles and makes up a response that doesn’t conform to our types?
Well, because these types are valid TypeScript code, we can validate the AI's response against them.
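To illustrate the idea (this is not TypeChat's exact implementation, just a sketch of the technique), we can embed a candidate JSON response in a tiny TypeScript program that assigns it to the Response type from above, and then ask the compiler whether that program type-checks:

import ts from "typescript";

// The schema from earlier in this post.
const schema = `
type Response = { items: Item[]; };
type Item = { name: string; quantity: number; size?: string; notes?: string; };
`;

// A tiny stand-in for lib.d.ts so the program can type-check without the full standard library.
const libText = `
interface Array<T> { length: number; [n: number]: T; }
interface Object { toString(): string; }
interface Function { prototype: unknown; }
interface CallableFunction extends Function {}
interface NewableFunction extends Function {}
interface String { readonly length: number; }
interface Boolean { valueOf(): boolean; }
interface Number { valueOf(): number; }
interface RegExp { test(s: string): boolean; }
`;

function validateResponse(jsonText: string): boolean {
    // Assign the candidate JSON to the expected type and let the compiler judge it.
    const files = new Map<string, string>([
        ["/lib.d.ts", libText],
        ["/check.ts", `${schema}\nconst response: Response = ${jsonText};`],
    ]);
    // An in-memory compiler host so nothing touches the file system.
    const host: ts.CompilerHost = {
        fileExists: fileName => files.has(fileName),
        readFile: fileName => files.get(fileName),
        getSourceFile: (fileName, languageVersion) => {
            const text = files.get(fileName);
            return text === undefined
                ? undefined
                : ts.createSourceFile(fileName, text, languageVersion);
        },
        getDefaultLibFileName: () => "/lib.d.ts",
        writeFile: () => {},
        getCurrentDirectory: () => "/",
        getCanonicalFileName: fileName => fileName,
        getNewLine: () => "\n",
        useCaseSensitiveFileNames: () => true,
    };
    const program = ts.createProgram(["/check.ts"], { strict: true, noEmit: true, types: [] }, host);
    return ts.getPreEmitDiagnostics(program).length === 0;
}

// A well-formed response passes; one with the wrong field type does not.
console.log(validateResponse(`{ "items": [{ "name": "latte", "quantity": 1 }] }`));     // true
console.log(validateResponse(`{ "items": [{ "name": "latte", "quantity": "one" }] }`)); // false

TypeChat takes care of this validation step for you; the sketch above is just to show that nothing magical is required beyond the compiler itself.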