By James Arthur on 09 February 2023
We’re ElectricSQL. We provide cloud sync for local-first apps using standard, open-source Postgres and SQLite.
Check it out here »
Local-first software is the future. It’s the natural evolution of state transfer. It enables a modern, realtime multi-user experience, with built-in offline support, resilience, privacy and data ownership. You get sub-millisecond reactivity and a network-free interaction path. Plus it’s much cheaper to operate and scale.
There’s a range of local-first tooling now emerging. Not just Electric but also projects like Evolu, Homebase, Instant, lo-fi, Replicache, sqlite_crdt and Vlcn. With these, and others, local-first is becoming more accessible. However, it’s still a fundamentally different paradigm. You code directly against a local, embedded database. Your data access code runs in an untrusted environment. You have to work within the limitations of what you can store and sync onto the device – and what your users allow you to sync off it.
This post aims to walk through the key differences and trade-offs, from working directly against a local database to the challenges of concurrent writes, partitioning and partial replication.
Cloud-first vs local-first
Cloud-first systems are the status quo. You have a backend and a frontend. State transfer protocols like REST, GraphQL and LiveView manage how data moves across the network. You typically need online connectivity to confirm writes. Systems are mainly integrated and monetised in the cloud.
Local-first systems are different. You replace your backend with a sync system and write application code that reads and writes data directly to and from a local database. Applications naturally work offline and support offline writes. State transfer moves into the database layer.
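As a minimal, hedged sketch of what that looks like, the application issues plain SQL against its embedded local database and the sync layer replicates the change in the background (the items table here is purely for illustration):
-- Hypothetical local-first write: no API call, no request/response round trip.
INSERT INTO items (value) VALUES ('created while offline');

-- Reads are served straight from the local database.
SELECT value FROM items;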
This model has huge benefits. You eliminate APIs and microservices and cut out the boilerplate associated with imperative state transfer. However, on the flip side, you need to move your business logic into the client, codify your auth and validation logic in security rules and hang your background processing off database events.
Security rules
When you have a backend application, you can have controllers and middleware on the write path. This gives you a (relatively!) trusted environment to run arbitrary auth and validation code. For example, here’s an Elixir Plug enforcing that users must be admins, running arbitrary logic and whatever database calls sit behind Accounts.is_admin?(user):
defmodule RequireAdminPlug do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    user = conn.assigns.current_user

    case Accounts.is_admin?(user) do
      true ->
        conn

      false ->
        # Reject the request and halt the rest of the plug pipeline.
        conn
        |> send_resp(403, "Forbidden")
        |> halt()
    end
  end
end
When you go local-first, you can’t write middleware like this because there’s nowhere to run it. You write directly to the database in the client. As a result, you need to codify that logic into some kind of rule system, like Firebase Security Rules or Postgres row-level security (RLS). For example, the following SQL uses row-level security to enforce that only admins can access items:
CREATE TABLE items (
  value text PRIMARY KEY NOT NULL
);

ALTER TABLE items
  ENABLE ROW LEVEL SECURITY;

CREATE ROLE admin;
GRANT ALL ON items TO admin;
This is an example of transposing auth logic into security rules. But, actually, standard row-level security is typically not what you need for local-first applications, because with standard RLS the user is determined by the database connection string and the rules are scoped to tables. Instead, what you need is to connect the rules to the end-user of the application and to the context through which the data is being loaded.
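To see why, here’s a hedged sketch of a conventional policy under standard RLS; the owner_role column and the policy name are hypothetical, and current_user resolves to the database role the connection authenticated as, not your application’s end-user:
-- Hypothetical plain-RLS policy: "the user" is the database connection's role.
-- The owner_role column is assumed purely for illustration.
CREATE POLICY "Connection role owns its rows"
  ON items FOR UPDATE USING (
    owner_role = current_user
  );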
For example, Supabase extends RLS with an auth context. This allows rules to be connected to the end-user of the application, rather than the user in the database connection string:
CREATE TABLE items (
  value text PRIMARY KEY NOT NULL,
  owner_id uuid REFERENCES auth.users
);

ALTER TABLE items
  ENABLE ROW LEVEL SECURITY;

CREATE POLICY "Owners can update items"
  ON items FOR UPDATE USING (
    auth.uid() = owner_id
  );
You also want to codify different things. In modern web and mobile applications, traditional database access rules tend to be used for quite blunt, high-level permissions, like limiting the rights of the backend application. What you want with local-first systems is to codify the kind of high-level business logic normally implemented in controllers and middleware, like the Plug code we saw above.
This logic can be quite flexible; it often makes database queries and uses information that’s available on the request. In Supabase’s system, you model this using the auth context in place of the request and SQL queries to emulate the business logic. In Firebase’s rules language, you have similar access to the auth context from a request object and the traversal context in a resource object:
service cloud.firestore {
  match /databases/{database}/documents {
    function signedInOrPublic() {
      return request.auth.uid != null || resource.data.visibility == 'public';
    }

    match /items/{item} {
      allow read, write: if signedInOrPublic();
    }
  }
}
Business logic
As we’ve said above, cloud-first software has a backend layer where you can run arbitrary business logic. Going local-first, you cut out this layer. So your logic needs to either run in the client or run in response to database change events. This impacts your system design, data model and program semantics.
For example, this is a simple backend function that could be called by a controller to sign a user up and send a verification email:
def sign_up(user) do
  user
  |> Repo.insert!()
  |> Mailer.send_verification_email()
end
This function would either need to be ported to the client side or split up, with the user insert happening against the local database and the verification email sent in response to the resulting database change event.
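As a hedged sketch of the second option, a Postgres trigger like the one below could notify a server-side worker whenever a new user row syncs in, so the verification email is sent off a database change event rather than from the client. The users table, function name and channel name here are assumptions for illustration:
-- Hypothetical sketch: notify a background worker when a new user row arrives,
-- so the verification email is sent server-side, off a database change event.
CREATE OR REPLACE FUNCTION notify_user_created() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('user_created', NEW.id::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER user_created_notify
  AFTER INSERT ON users
  FOR EACH ROW EXECUTE FUNCTION notify_user_created();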