The LLM ecosystem has seen an explosion of frameworks, particularly in JavaScript and Python. While popular, these frameworks often serve two purposes: they either hide language limitations around concurrency or task orchestration, or they abstract “common” LLM operations. But do we really need this abstraction? Usually we would be better off building simple, bespoke solutions.
I’ve found, and from what I’ve seen this is the general consensus online as well, that many of these frameworks work well enough for a POC, but as the project grows and needs for custom behaviour arise, they fail to meet the challenge.
langchain isn’t a library, it’s a collection of demos held together by duct tape, fstrings and prayers. It doesn’t provide building blocks, it provides someone’s fantasy of one line of code is all you need. And it mainly breaks apart if you’re using anything but openai.
Disastrous_Elk_6375 on reddit
I’d only use it to quickly validate some ideas, PoC style to test the waters, but for sanity and production you need to pick alternatives or write your own stack.
The chain of messages
I don’t think we should move the abstraction too far from the actual chain of messages. It’s the basic unit of operation with LLMs and it doesn’t seem to be going away – especially if you rely on idiosyncrasies of specific models, like Claude prefilling.
I’ve been working with LLMs for a while now (check out my AI swarm powered relationship coach Reyote) and I’ve found that the real magic happens when one manipulates and actively manages the chain, tailoring it for specific tasks and not giving the LLM more information than it absolutely needs. For instance, I’m a big fan of aider and have been studying its prompting techniques. Its power comes from constantly manipulating the chain for specific tasks (for example custom architect and code prompts, or conversation summarization when the conversation grows too long).
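As a concrete illustration (the names here are illustrative, not taken from aider’s codebase), a chain is just a list of role/content maps, and “managing” it means ordinary list manipulation: appending an assistant prefill, or keeping only the most recent turns as a crude stand-in for summarization:

```elixir
# A chain of messages is just a list of maps.
chain = [
  %{role: :system, content: "You are a concise coding assistant."},
  %{role: :user, content: "Refactor this function to be tail-recursive."}
]

# Claude-style prefilling: seed the assistant's reply so the model
# continues from our opening instead of starting fresh.
prefilled =
  chain ++ [%{role: :assistant, content: "Here is the tail-recursive version:"}]

defmodule Chain do
  # Tailoring the chain: keep the (single) system prompt, drop older
  # turns once the conversation grows too long.
  def trim(messages, max_turns) do
    {[system], rest} = Enum.split_with(messages, &(&1.role == :system))
    [system | Enum.take(rest, -max_turns)]
  end
end
```

In a real system the dropped turns would be replaced by an LLM-generated summary rather than simply discarded, but the mechanics are the same: plain list operations on plain data.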
A case for Elixir
Apart from working with LLMs I’ve been working with Elixir/OTP for even longer and have fallen in love with the language, and the power, thoughtfulness and expressiveness behind it. At first glance, Elixir doesn’t seem to have much in terms of library support for building LLM workflows or agentic behavior, but in this series of posts I’d like to make the case that this is because the building blocks are all there already. This is not specific to interacting with LLMs; it is often the case with Elixir – most people just prefer to roll their own using the wonderful building blocks Elixir and the OTP library give you.
In the next few posts I’d like to showcase how Elixir can be used to interact with LLMs and build complex, effective solutions. We’ll start simple, by going through Anthropic’s article “Building effective AI agents” and showing how simple Elixir makes implementing these patterns, and over the course of the series we’ll increase the complexity of the solutions.
Let’s start
The only library we’ll need for now is InstructorLite. It’s pretty basic in its functionality: it takes a chain of messages and a defined output schema, usually enforced via Structured Outputs.
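To ground that, here is a sketch of a typical InstructorLite call, based on its documented instruct/2 API; the schema, prompt, model defaults, and API-key handling are my assumptions for illustration:

```elixir
defmodule Summary do
  use Ecto.Schema
  use InstructorLite.Instruction

  @primary_key false
  embedded_schema do
    field(:summary, :string)
  end
end

# A sketch, not production code: the adapter defaults to OpenAI,
# and instruct/2 returns the parsed struct on success.
{:ok, %Summary{summary: summary}} =
  InstructorLite.instruct(
    %{messages: [%{role: "user", content: "Summarize: Elixir runs on the BEAM."}]},
    response_model: Summary,
    adapter_context: [api_key: System.fetch_env!("OPENAI_API_KEY")]
  )
```

The key idea is that the message chain stays a plain list we control, and the schema module both documents and validates the shape of the model’s reply.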
Workflow 1: Prompt chaining
Prompt chaining decomposes a task into a sequence of steps, where each LLM call processes the output of the previous one.
When to use this workflow: This workflow is ideal for situations where the task can be easily and cleanly decomposed into fixed subtasks. The main goal is to trade off latency for higher accuracy, by making each LLM call an easier task.
We’ll be implementing the following scenario:
- Based on an input, generate an outline for a potential article
- Using this outline, generate a full article
- Translate the article into another language (keep in mind the translation doesn’t need the full context)
Let’s first define the response types we need, in this case an Article and an Outline.
Keep in mind we’re not doing any sanity checks in these examples, even though the InstructorLite library supports it.
defmodule Article do
  use Ecto.Schema
  use InstructorLite.Instruction

  @primary_key false
  embedded_schema do
    field(:response, :string)
  end

  def represent(%__MODULE__{} = item) do
    item.response
  end
end
defmodule Outline do
  use Ecto.Schema
  use InstructorLite.Instruction

  @primary_key false
  embedded_schema do
    field(:outline, {:array, :string})
  end

  def represent(%__MODULE__{} = item) do
    item.outline |> Enum.join("\n")
  end
end
Now let’s create a simple workflow using just Elixir’s with statement. While simple, it offers a lot of flexibility: we have access to the chain snapshot at every stage, and we can choose whether to carry the whole history forward (as with topic -> outline -> article) or to ignore it and take only what we need (as with the translation).
defmodule ArticleBuilder do
  def generate_content(topic) do
    initial_messages = [
      %{role: :system, content: "You are an expert at crafting SEO friendly blog articles"}
    ]

    with {:outline, {:ok, _outline, outline_messages}} <-
           {:outline, create_outline(initial