With pinocchio, I can write little YAML files that are interpreted as command-line applications. When run, these commands do rudimentary template expansion, send the resulting prompt to the OpenAI APIs, and print out the results. As far as tools go, this is one of the simplest I've ever built. It is also one of the more mind-bending ones.
I soon realized that most of my prompts ended up being something like this (so-called one-shot or few-shot prompting):
Here’s how I did Y:
- something that does Y
Now do X.
The LLM will (hopefully) complete this prompt with something that does X. The trick, of course, is knowing which Y and which "something that does Y" to provide, and what X and Y should stand for in the first place. Most people doing prompt engineering will know what I am referring to.
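To make this concrete, here is a minimal sketch of the few-shot pattern as a pinocchio command. The flag schema mirrors the quine below; the `prompt:` field and the Go-template `{{ .example }}` syntax are my assumptions about how pinocchio splices flags into the prompt.

```yaml
name: do-x
short: One-shot prompt - show one example of Y, ask for X.
factories:
  openai:
    completion:
      engine: text-davinci-003
      temperature: 0.7
flags:
  - name: example_goal
    type: string
    help: What the example accomplishes (Y)
  - name: example
    type: stringFromFile
    help: Something that does Y
  - name: goal
    type: string
    help: What the generated output should do (X)
# Assumed template syntax: flag values are expanded into the prompt
# before it is sent to the completion API.
prompt: |
  Here's how I did {{ .example_goal }}:
  {{ .example }}
  Now {{ .goal }}.
```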
One of my favourite techniques, once I get a prompt going for a certain domain A (of which Y is an example), is to write the pinocchio program that asks the LLM to generate pinocchio programs for any domain, not just A. You pretty quickly reach the meta-level where domain A is the domain of prompts itself, and you ask a prompt to generate a prompt that generates prompts, at which point you have basically summoned the singularity into being.
To allow everybody to create their own singularity, here is the pinocchio program that generates itself, the so-called GPT3 quine:
```yaml
name: quine
short: Generate yourself!
factories:
  openai:
    client:
      timeout: 120
    completion:
      engine: text-davinci-003
      temperature: 0.7
      max_response_tokens: 2048
      stop: []
      # stream: true
flags:
  - name: example_goal
    short: Example goal
    type: string
    default: Generate a program to generate itself.
  - name: instructions
    type: string
    help: Additional language specific instructions
    required: false
  - name: example
    type: stringFromFile
```
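Each flag becomes a command-line flag when the file is run, and the stringFromFile type reads the example body from a file on disk. An invocation might look like this (a sketch only; the run-command verb and exact flag spellings are assumptions about pinocchio's CLI):

```
pinocchio run-command quine.yaml \
  --example quine.yaml \
  --instructions "Answer with a single YAML file."
```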