diff --git a/examples/write-patterns/README.md b/examples/write-patterns/README.md index c68154e43d..7af71e14fe 100644 --- a/examples/write-patterns/README.md +++ b/examples/write-patterns/README.md @@ -1,30 +1,81 @@ + # Write patterns example -This example demonstrates the four different [write-patterns](https://electric-sql.com/docs/guides/writes#patterns) described in the [Writes](https://electric-sql.com/docs/guides/writes#patterns) guide. +This example implements and describes four different patterns for handling writes in an application built with [ElectricSQL](https://electric-sql.com). + +These patterns are described in the [Writes guide](https://electric-sql.com/docs/guides/writes#patterns) from the ElectricSQL documentation. It's worth reading the guide for context. The idea is that if you walk through these patterns in turn, you can get a sense of the range of techniques and their evolution in both power and complexity. -All running together, at the same time, within a single web application. +The example is set up to run all the patterns together, on the page, at the same time, as components of a single React application. So you can also evaluate their behaviour side-by-side and with different network connectivity. -> ... screenshot ... +[![Screenshot of the application running](./public/screenshot.png)](https://write-patterns.electric-sql.com) - +https://write-patterns.electric-sql.com + +## Patterns + +The main code is in the [`./patterns`](./patterns) folder, which has a subfolder for each pattern. There's also some shared code, including an API server and some app boilerplate, in [`./shared`](./shared). + +All of the patterns use [Electric](https://electric-sql.com/product/sync) for the read-path (i.e.: syncing data from Postgres into the local app) and implement a different approach to the write-path (i.e.: how they handle local writes and get data from the local app back into Postgres). + +### [1. Online writes](./patterns/1-online-writes) + +The first pattern is in [`./patterns/1-online-writes`](./patterns/1-online-writes). + +This is the simplest approach, which just sends writes to an API and only works if you're online. It has a resilient client that will retry in the event of network failure, but the app doesn't update until the write goes through. + +### [2. Optimistic state](./patterns/2-optimistic-state) + +The second pattern is in [`./patterns/2-optimistic-state`](./patterns/2-optimistic-state). + +It extends the first pattern with support for local offline writes using simple optimistic state. The optimistic state is "simple" in the sense that it's only available within the component that makes the write and it's not persisted if the page reloads or the component unmounts. + +### [3. Shared persistent optimistic state](./patterns/3-shared-persistent) + +The third pattern is in [`./patterns/3-shared-persistent`](./patterns/3-shared-persistent). + +It extends the second pattern by storing the optimistic state in a shared, persistent local store. This makes offline writes more resilient and avoids components getting out of sync. It's a compelling point in the design space: providing good UX and DX without introducing too much complexity or any heavy dependencies. + +### [4. Through-the-database sync](./patterns/4-through-the-db) -## Source code +The fourth pattern is in [`./patterns/4-through-the-db`](./patterns/4-through-the-db). -There's some shared boilerplate in [`./shared`](./shared).
The code implementing the different patterns is in the [`./patterns`](./patterns) folder. +It extends the concept of shared, persistent optimistic state all the way to a local embedded database. Specifically, it: -### Patterns +1. syncs data from Electric into an immutable table +2. persists local optimistic state in a shadow table +3. combines the two into a view that provides a unified interface for reads and writes +4. automatically detects local changes and syncs them to the server -All of the patterns use [Electric](https://electric-sql.com/product/sync) for the read-path (i.e.: syncing data from Postgres into the local app) and implement a different approach to the write-path (i.e.: how they handle local writes and get data from the local app back into Postgres): +This provides a pure local-first development experience, where the application code talks directly to a single database "table" and changes sync automatically in the background. However, this "power" does come at the cost of increased complexity in the form of an embedded database, complex local schema and loss of context when handling rollbacks. -- [`1-online-writes`](./patterns/1-online-writes) works online, writing data through the backend API -- [`2-optimistic-state`](./patterns/2-optimistic-state) supports offline writes with simple optimistic state (component-scoped, no persistence) -- [`3-combine-on-read`](./patterns/3-combine-on-read) syncs into an immutable table, persists optimistic state in a shadow table and combines the two on read -- [`4-through-the-db`](./patterns/4-through-the-db) uses the local database as a unified mutable store, syncs changes to the server and keeps enough history and bookkeeping data around to be able to revert local changes when necessary +## Complexities -For more context about the patterns and their benefits and trade-offs, see the [Writes](https://electric-sql.com/docs/guides/writes#patterns) guide. +There are two key complexities introduced by handling optimistic state: + +1. merge logic when receiving synced state from the server +2. handling rollbacks when writes are rejected + +### 1. Merge logic + +When a change syncs in over the Electric replication stream, the application has to decide how to handle any overlapping optimistic state. In this example, we implement a blunt strategy of discarding the local state whenever the corresponding row is updated in the synced state. + +This approach works and is simple to reason about. However, it won't preserve local changes on top of concurrent changes by other users (or tabs or devices). In this case, you may want to preserve the local state until *your* change syncs through, for example by rebasing the local changes on the updated synced state. For reference, this is implemented in the more realistic [Linearlite example](../linearlite). + +### 2. Rollbacks + +If an offline write is rejected by the server, the local application needs to find some way to revert the local state and potentially notify the user. This example just clears all local state if any write is rejected. More sophisticated and forgiving strategies are possible, such as: + +- marking local writes as rejected and displaying them for manual conflict resolution +- only clearing the set of writes that are causally dependent on the rejected operation + +One consideration is the indirection between making a write and handling a rollback. When sending write operations directly to an API, your application code can effect a rollback with the write context still available.
When syncing through the database, the original write context is harder to reconstruct. + +### YAGNI + +Adam Wiggins, one of the authors of the local-first paper, developed Muse, the collaborative whiteboard app, specifically to support concurrent, collaborative editing of an infinite canvas. Having operated at scale with a large user base, one of his main findings [reported back at the first local-first meetup in Berlin in 2023](https://www.youtube.com/watch?v=WEFuEY3fHd0) was that in reality, conflicts are extremely rare and can be mitigated well by strategies like presence. + +If you're crafting a highly concurrent, collaborative experience, you may well want to engage with the complexities of sophisticated merge logic and rebasing local state. However, blunt strategies as illustrated in this example can be much easier to implement and reason about — and are often perfectly serviceable for most applications. ## How to run @@ -46,9 +97,3 @@ Start the dev server: ```shell pnpm dev ``` - -When done, tear down the backend containers so you can run other examples: - -```shell -pnpm backend:down -``` \ No newline at end of file diff --git a/examples/write-patterns/package.json b/examples/write-patterns/package.json index 6417d4cdad..fa1108790b 100644 --- a/examples/write-patterns/package.json +++ b/examples/write-patterns/package.json @@ -33,6 +33,7 @@ "react": "19.0.0-rc.1", "react-dom": "19.0.0-rc.1", "uuid": "^10.0.0", + "valtio": "^2.1.2", "zod": "^3.23.8" }, "devDependencies": { diff --git a/examples/write-patterns/patterns/1-online-writes/README.md b/examples/write-patterns/patterns/1-online-writes/README.md index b85e08d29a..d1dd60f04c 100644 --- a/examples/write-patterns/patterns/1-online-writes/README.md +++ b/examples/write-patterns/patterns/1-online-writes/README.md @@ -1,5 +1,5 @@ -# Online writes example +# Online writes pattern This is an example of an application using: diff --git a/examples/write-patterns/patterns/1-online-writes/index.tsx b/examples/write-patterns/patterns/1-online-writes/index.tsx index 10d9b79938..a0658e1295 100644 --- a/examples/write-patterns/patterns/1-online-writes/index.tsx +++ b/examples/write-patterns/patterns/1-online-writes/index.tsx @@ -71,6 +71,8 @@ export default function OnlineWrites() { return
Loading …
} + // The template below the heading is identical to the other patterns. + // prettier-ignore return (
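For reference, the write path in this online pattern reduces to posting the write to the API server and waiting for the corresponding change to sync back in over the Electric shape stream. Here's a minimal sketch, assuming the shared `api.request` helper and the `matchStream`/`matchBy` utilities that the optimistic state pattern below also uses; the standalone `createTodo` function and its simplified typing are illustrative rather than the exact code in `index.tsx`:

```tsx
import { ShapeStream } from '@electric-sql/client'
import { matchBy, matchStream } from '@electric-sql/experimental'
import { v4 as uuidv4 } from 'uuid'

import api from '../../shared/app/client'

// Send an insert straight to the API server, then resolve once the same
// row has synced back in over the Electric shape `stream`. The UI only
// updates when the synced data arrives; there's no optimistic state.
async function createTodo(stream: ShapeStream, title: string) {
  const data = { id: uuidv4(), title, created_at: new Date() }

  const fetchPromise = api.request('/todos', 'POST', data)
  const syncPromise = matchStream(stream, ['insert'], matchBy('id', data.id))

  await Promise.all([fetchPromise, syncPromise])
}
```

Because nothing is written locally, the new todo only appears once `syncPromise` resolves, which is exactly the limitation the later patterns address.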
diff --git a/examples/write-patterns/patterns/2-optimistic-state/README.md index 4030a703a0..cd52e3d7ec 100644 --- a/examples/write-patterns/patterns/2-optimistic-state/README.md +++ b/examples/write-patterns/patterns/2-optimistic-state/README.md @@ -1,5 +1,5 @@ -# Optimistic state example +# Optimistic state pattern This is an example of an application using: @@ -22,7 +22,9 @@ Good use-cases include: ## Drawbacks -The local optimistic state is not persistent. Optimistic state is managed within the component. This can become tricky to manage when state is shared across components. More complex apps may benefit from the more comprehensive [combine on read](../../3-combine-on-read) pattern. +The optimistic state is only available within the component that makes the write. This means that other components rendering the same state may not see it and may display stale data. The optimistic state is also not persistent, so it's lost if you unmount the component or reload the page. + +These limitations are addressed by the [shared persistent optimistic state](../../3-shared-persistent) pattern. ## How to run diff --git a/examples/write-patterns/patterns/2-optimistic-state/index.tsx index 246eba47d3..1c8a3d221e 100644 --- a/examples/write-patterns/patterns/2-optimistic-state/index.tsx +++ b/examples/write-patterns/patterns/2-optimistic-state/index.tsx @@ -12,10 +12,13 @@ type Todo = { completed: boolean created_at: Date } +type PartialTodo = Partial<Todo> & { + id: string +} -type OptimisticState = { +type Write = { operation: 'insert' | 'update' | 'delete' - value: Todo + value: PartialTodo } export default function OptimisticState() { @@ -44,20 +47,20 @@ export default function OptimisticState() { // are being sent-to and syncing-back-from the server. const [todos, addOptimisticState] = useOptimistic( sorted, - (syncedTodos: Todo[], { operation, value }: OptimisticState) => { + (synced: Todo[], { operation, value }: Write) => { switch (operation) { case 'insert': - return syncedTodos.some((todo) => todo.id === value.id) - ? syncedTodos - : [...syncedTodos, value] + return synced.some((todo) => todo.id === value.id) + ? synced + : [...synced, value as Todo] case 'update': - return syncedTodos.map((todo) => - todo.id === value.id ? value : todo + return synced.map((todo) => + todo.id === value.id ?
{ ...todo, ...value } : todo ) case 'delete': - return syncedTodos.filter((todo) => todo.id !== value.id) + return synced.filter((todo) => todo.id !== value.id) } } ) @@ -86,16 +89,11 @@ export default function OptimisticState() { id: uuidv4(), title: title, created_at: new Date(), + completed: false, } startTransition(async () => { - addOptimisticState({ - operation: 'insert', - value: { - ...data, - completed: false, - }, - }) + addOptimisticState({ operation: 'insert', value: data }) const fetchPromise = api.request(path, 'POST', data) const syncPromise = matchStream( @@ -115,17 +113,12 @@ export default function OptimisticState() { const path = `/todos/${id}` const data = { + id, completed: !completed, } startTransition(async () => { - addOptimisticState({ - operation: 'update', - value: { - ...todo, - completed: !completed, - }, - }) + addOptimisticState({ operation: 'update', value: data }) const fetchPromise = api.request(path, 'PUT', data) const syncPromise = matchStream(stream, ['update'], matchBy('id', id)) @@ -142,12 +135,7 @@ export default function OptimisticState() { const path = `/todos/${id}` startTransition(async () => { - addOptimisticState({ - operation: 'delete', - value: { - ...todo, - }, - }) + addOptimisticState({ operation: 'delete', value: { id } }) const fetchPromise = api.request(path, 'DELETE') const syncPromise = matchStream(stream, ['delete'], matchBy('id', id)) @@ -160,7 +148,7 @@ export default function OptimisticState() { return
Loading …
} - // The template below the heading is identical to the online example. + // The template below the heading is identical to the other patterns. // prettier-ignore return ( diff --git a/examples/write-patterns/patterns/3-combine-on-read/README.md b/examples/write-patterns/patterns/3-combine-on-read/README.md deleted file mode 100644 index ee1d081d48..0000000000 --- a/examples/write-patterns/patterns/3-combine-on-read/README.md +++ /dev/null @@ -1,60 +0,0 @@ - -# Combine on read example - -This is an example of an application using: - -- Electric for read-path sync -- local optimistic writes with shared, persistent optimistic state - -This pattern can be implemented with a variety of client-side state management and storage mechanisms. For example, we have a [TanStack example](../../../tanstack-example) that uses the TanStack mutation cache for shared optimistic state. - -In this implementation, we use Electric together with [PGlite](https://electric-sql.com/product/pglite). Specifically, we: - -1. sync data into an immutable table -2. persist optimistic state in a shadow table -3. combine the two on read using a view - -## Benefits - -This is a powerful and pragmatic pattern, occupying a compelling point in the design space. It's relatively simple to implement. Persisting optimistic state makes local writes more resilient. - -Storing optimistic state in a shared table allows all your components to see and react to it. This avoids one of the weaknesses with component-scoped optimistic state with a [more naive optimistic state pattern](../2-optimistic-state) and makes this pattern more suitable for more complex, real world apps. - -Seperating immutable synced state from mutable local state makes it easy to reason about and implement rollback strategies. - -Good use-cases include: - -- building local-first software -- interactive SaaS applications -- collaboration and authoring software - -## Drawbacks - -Combining data on-read makes local reads slightly slower. - -Using a local embedded database adds a relatively-heavy dependency to your app. This impacts build/bundle size, initialization speed and memory use. The shadow table and trigger machinery complicate your client side schema definition. - -Whilst the database is used for local optimistic state, writes are still made via an API. This can often be helpful and pragmatic, allowing you to [re-use your existing API](https://electric-sql.com/blog/2024/11/21/local-first-with-your-existing-api). However, you may want to avoid running an API and leverage [through the DB sync](../../3-through-the-db) for a purer local-first approach. - -## Complexities - -This implementation simplifies two key complexities: - -1. merge logic when receiving synced state from the server -2. handling rollbacks when writes are rejected - -### 1. Merge logic - -The entrypoint in the code for merge logic is the very blunt `delete_local_on_synced_trigger` defined in the [`./local-schema.sql`](./local-schema.sql). The current implementation just wipes any local state for a row when any insert, updater or delete to that row syncs in from the server. - -This approach works and is simple to reason about. However, it won't preserve local changes on top of concurrent changes by other users (or tabs or devices). More sophisticated implementations could do more sophisticated merge logic here. Such as rebasing the local changes on the new server state. This typically involved maintaining more bookkeeping info and having more complex triggers. - -### 2. 
Rollbacks - -The entrypoint for handling rollbacks is handling the fetchPromise return values in the `createTodo`, `updateTodo`, `deleteTodo` event handler functions in [`./index.tsx`](./index.tsx). At the moment, in this implementation, we simply ignore the return value and assume that the write succeeded. - -More sophisticated applications could revert the local state for that write if the write is rejected. The benefits of still using HTTP requests to the API for writes instead of syncing [through the DB](../4-through-the-db) is that the write context is still available when handling the rollback. - -## How to run - -See the [How to run](../../README.md#how-to-run) section in the example README. diff --git a/examples/write-patterns/patterns/3-combine-on-read/index.tsx b/examples/write-patterns/patterns/3-combine-on-read/index.tsx deleted file mode 100644 index cee009057b..0000000000 --- a/examples/write-patterns/patterns/3-combine-on-read/index.tsx +++ /dev/null @@ -1,209 +0,0 @@ -import React, { useState } from 'react' -import { v4 as uuidv4 } from 'uuid' - -import { - PGliteProvider, - useLiveQuery, - usePGlite, -} from '@electric-sql/pglite-react' - -import api from '../../shared/app/client' -import pglite from '../../shared/app/db' - -import localSchemaMigrations from './local-schema.sql?raw' - -const ELECTRIC_URL = import.meta.env.ELECTRIC_URL || 'http://localhost:3000' - -type Todo = { - id: string - title: string - completed: boolean - created_at: Date -} - -await pglite.exec(localSchemaMigrations) - -// This starts the read path sync using Electric. -await pglite.electric.syncShapeToTable({ - shape: { - url: `${ELECTRIC_URL}/v1/shape`, - table: 'todos', - }, - shapeKey: 'todos', - table: 'todos_synced', - primaryKey: ['id'], -}) - -export default function Wrapper() { - return ( - - - - ) -} - -function CombineOnRead() { - const db = usePGlite() - const results = useLiveQuery('SELECT * FROM todos ORDER BY created_at') - - // Allows us to track when writes are being made to the server. - const [pendingState, setPendingState] = useState([]) - const isPending = pendingState.length === 0 ? false : true - - // These are the same event handler functions from the online and - // optimistic state examples, revised to write local optimistic - // state to the database. 
- - async function createTodo(event: React.FormEvent) { - event.preventDefault() - - const form = event.target as HTMLFormElement - const formData = new FormData(form) - const title = formData.get('todo') as string - - form.reset() - - const key = Math.random() - setPendingState((keys) => [...keys, key]) - - const id = uuidv4() - const created_at = new Date() - - const localWritePromise = db.sql` - INSERT INTO todos_local ( - id, - title, - completed, - created_at - ) - VALUES ( - ${id}, - ${title}, - ${false}, - ${created_at} - ) - ` - - const path = '/todos' - const data = { - id: id, - title: title, - created_at: created_at, - } - const fetchPromise = api.request(path, 'POST', data) - - await Promise.all([localWritePromise, fetchPromise]) - - setPendingState((keys) => keys.filter((k) => k !== key)) - } - - async function updateTodo(todo: Todo) { - const { id, completed } = todo - - const key = Math.random() - setPendingState((keys) => [...keys, key]) - - const localWritePromise = db.sql` - INSERT INTO todos_local ( - id, - completed - ) - VALUES ( - ${id}, - ${!completed} - ) - ON CONFLICT (id) - DO UPDATE - SET completed = ${!completed} - ` - - const path = `/todos/${id}` - const data = { - completed: !completed, - } - const fetchPromise = api.request(path, 'PUT', data) - - await Promise.all([localWritePromise, fetchPromise]) - - setPendingState((keys) => keys.filter((k) => k !== key)) - } - - async function deleteTodo(event: React.MouseEvent, todo: Todo) { - event.preventDefault() - - const { id } = todo - - const key = Math.random() - setPendingState((keys) => [...keys, key]) - - const localWritePromise = db.sql` - INSERT INTO todos_local ( - id, - deleted - ) - VALUES ( - ${id}, - ${true} - ) - ON CONFLICT (id) - DO UPDATE - SET deleted = ${true} - ` - - const path = `/todos/${id}` - const fetchPromise = api.request(path, 'DELETE') - - await Promise.all([localWritePromise, fetchPromise]) - - setPendingState((keys) => keys.filter((k) => k !== key)) - } - - if (results === undefined) { - return
Loading …
- } - - const todos = results.rows - - // The template below the heading is identical to the other patterns. - - // prettier-ignore - return ( -
-      {/* JSX template: heading "3. Combine on read", todo list with delete (✕) buttons, an "All done 🎉" empty state and an add-todo form; identical to the other patterns */}
- ) -} diff --git a/examples/write-patterns/patterns/3-combine-on-read/local-schema.sql b/examples/write-patterns/patterns/3-combine-on-read/local-schema.sql deleted file mode 100644 index 3ed9248dc9..0000000000 --- a/examples/write-patterns/patterns/3-combine-on-read/local-schema.sql +++ /dev/null @@ -1,57 +0,0 @@ --- This is the local database schema for PGlite. It mirrors the server schema --- defined in `../../shared/migrations/01-create-todos.sql` but rather than --- just defining a single `todos` table to sync into, it defines two tables: --- `todos_synced` and `todos_local` and a `todos` view to combine on read. - --- The `todos_synced` table for immutable, synced state from the server. -CREATE TABLE IF NOT EXISTS todos_synced ( - id UUID PRIMARY KEY, - title TEXT NOT NULL, - completed BOOLEAN NOT NULL, - created_at TIMESTAMP WITH TIME ZONE NOT NULL -); - --- The `todos_local` table for local optimistic state. -CREATE TABLE IF NOT EXISTS todos_local ( - id UUID PRIMARY KEY, - title TEXT, - completed BOOLEAN, - created_at TIMESTAMP WITH TIME ZONE, - -- Track soft deletes - deleted BOOLEAN DEFAULT FALSE -); - --- The `todos` view to combine the two tables on read. -CREATE OR REPLACE VIEW todos AS - SELECT - COALESCE(local.id, synced.id) AS id, - CASE WHEN local.title IS NOT NULL - THEN local.title - ELSE synced.title - END AS title, - CASE WHEN local.completed IS NOT NULL - THEN local.completed - ELSE synced.completed - END AS completed, - CASE WHEN local.created_at IS NOT NULL - THEN local.created_at - ELSE synced.created_at - END AS created_at - FROM todos_synced AS synced - FULL OUTER JOIN todos_local AS local - ON synced.id = local.id - WHERE local.id IS NULL OR local.deleted = FALSE; - --- Automatically remove local optimistic state. -CREATE OR REPLACE FUNCTION delete_local_on_sync_trigger() -RETURNS TRIGGER AS $$ -BEGIN - DELETE FROM todos_local WHERE id = OLD.id; - RETURN NEW; -END; -$$ LANGUAGE plpgsql; - -CREATE OR REPLACE TRIGGER delete_local_on_sync -AFTER INSERT OR UPDATE OR DELETE ON todos_synced -FOR EACH ROW -EXECUTE FUNCTION delete_local_on_sync_trigger(); diff --git a/examples/write-patterns/patterns/3-shared-persistent/README.md b/examples/write-patterns/patterns/3-shared-persistent/README.md new file mode 100644 index 0000000000..0d92845db5 --- /dev/null +++ b/examples/write-patterns/patterns/3-shared-persistent/README.md @@ -0,0 +1,37 @@ + +# Shared persistent optimistic state pattern + +This is an example of an application using: + +- Electric for read-path sync +- local optimistic writes with shared, persistent optimistic state + +This pattern can be implemented with a variety of client-side state management and storage mechanisms. This example uses [valtio](https://valtio.dev) for a shared reactive store and persists this store to localStorage on any change. This allows us to keep the code very similar to the previous [`../2-optimistic-state`](../2-optimistic-state) pattern (with a valtio `useSnapshot` and a custom reduce function playing almost exactly the same role as the React `useOptimistic` hook). + +## Benefits + +This is a powerful and pragmatic pattern, occupying a compelling point in the design space. It's relatively simple to implement. Persisting optimistic state makes local writes more resilient. + +Storing optimistic state in a shared store allows all your components to see and react to it. 
This avoids one of the weaknesses of the component-scoped optimistic state used in the [more naive optimistic state pattern](../2-optimistic-state) and makes this pattern more suitable for more complex, real-world apps. + +Separating immutable synced state from mutable local state makes it easy to reason about and implement rollback strategies. + +Good use-cases include: + +- building local-first software +- interactive SaaS applications +- collaboration and authoring software + +## Drawbacks + +Combining data on-read makes local reads slightly slower. And whilst a local store holds the optimistic state, writes are still made via an API. This can often be helpful and pragmatic, allowing you to [re-use your existing API](https://electric-sql.com/blog/2024/11/21/local-first-with-your-existing-api). However, you may want to avoid running an API and leverage [through-the-database sync](../4-through-the-db) for a purer local-first approach. + +## Complexities + +This approach works and is simple to reason about. Because it clears local optimistic state only once the specific local write has synced, it does preserve local changes on top of concurrent changes by other users (or tabs or devices). + +The entrypoint for handling rollbacks has the local write context as well as the shared store, so it's easy to make rollbacks relatively surgical. + +## How to run + +See the [How to run](../../README.md#how-to-run) section in the example README. diff --git a/examples/write-patterns/patterns/3-shared-persistent/index.tsx new file mode 100644 index 0000000000..52d4ee2220 --- /dev/null +++ b/examples/write-patterns/patterns/3-shared-persistent/index.tsx @@ -0,0 +1,235 @@ +import React, { useTransition } from 'react' +import { v4 as uuidv4 } from 'uuid' +import { subscribe, useSnapshot } from 'valtio' +import { proxyMap } from 'valtio/utils' + +import { type Operation, ShapeStream } from '@electric-sql/client' +import { matchBy, matchStream } from '@electric-sql/experimental' +import { useShape } from '@electric-sql/react' + +import api from '../../shared/app/client' + +const ELECTRIC_URL = import.meta.env.ELECTRIC_URL || 'http://localhost:3000' +const KEY = 'electric-sql/examples/write-patterns/shared-persistent' + +type Todo = { + id: string + title: string + completed: boolean + created_at: Date +} +type PartialTodo = Partial<Todo> & { + id: string +} + +type Write = { + key: string + operation: Operation + value: PartialTodo +} + +// Define a shared, persistent, reactive store for local optimistic state. +const optimisticState = proxyMap( + JSON.parse(localStorage.getItem(KEY) || '[]') +) +subscribe(optimisticState, () => { + localStorage.setItem(KEY, JSON.stringify([...optimisticState])) +}) + +/* + * Add a local write to the optimistic state + */ +function addLocalWrite(operation: Operation, value: PartialTodo): Write { + const key = uuidv4() + const write: Write = { + key, + operation, + value, + } + + optimisticState.set(key, write) + + return write +} + +/* + * Subscribe to the shape `stream` until the local write syncs back through it. + * At which point, delete the local write from the optimistic state. + */ +async function matchWrite(stream: ShapeStream, write: Write) { + const { key, operation, value } = write + + try { + await matchStream(stream, [operation], matchBy('id', value.id)) + } catch (_err) { + return + } + + optimisticState.delete(key) +} + +/* + * Make an HTTP request to send the write to the API server.
+ * If the request fails, delete the local write from the optimistic state. + */ +async function sendRequest(path: string, method: string, write: Write) { + const { key, value } = write + + try { + await api.request(path, method, value) + } catch (_err) { + optimisticState.delete(key) + } +} + +export default function SharedPersistent() { + const [isPending, startTransition] = useTransition() + + // Use Electric's `useShape` hook to sync data from Postgres. + const { isLoading, data, stream } = useShape({ + url: `${ELECTRIC_URL}/v1/shape`, + params: { + table: 'todos', + }, + parser: { + timestamptz: (value: string) => new Date(value), + }, + }) + const sorted = data ? data.sort((a, b) => +a.created_at - +b.created_at) : [] + + // Get the local optimistic state. + const writes = useSnapshot>(optimisticState) + + // Merge the synced state with the local state. + const todos = writes + .values() + .reduce((synced: Todo[], { operation, value }: Write) => { + switch (operation) { + case 'insert': + return synced.some((todo) => todo.id === value.id) + ? synced + : [...synced, value as Todo] + + case 'update': + return synced.map((todo) => + todo.id === value.id ? { ...todo, ...value } : todo + ) + + case 'delete': + return synced.filter((todo) => todo.id !== value.id) + } + }, sorted) + + // These are the same event handler functions from the previous optimistic + // state pattern, adapted to add the state to the shared, persistent store. + + async function createTodo(event: React.FormEvent) { + event.preventDefault() + + const form = event.target as HTMLFormElement + const formData = new FormData(form) + const title = formData.get('todo') as string + + const path = '/todos' + const data = { + id: uuidv4(), + title: title, + completed: false, + created_at: new Date(), + } + + startTransition(async () => { + const write = addLocalWrite('insert', data) + + const fetchPromise = sendRequest(path, 'POST', write) + const syncPromise = matchWrite(stream, write) + + await Promise.all([fetchPromise, syncPromise]) + }) + + form.reset() + } + + async function updateTodo(todo: Todo) { + const { id, completed } = todo + + const path = `/todos/${id}` + const data = { + id: id, + completed: !completed, + } + + startTransition(async () => { + const write = addLocalWrite('update', data) + + const fetchPromise = sendRequest(path, 'PUT', write) + const syncPromise = matchWrite(stream, write) + + await Promise.all([fetchPromise, syncPromise]) + }) + } + + async function deleteTodo(event: React.MouseEvent, todo: Todo) { + event.preventDefault() + + const { id } = todo + + const path = `/todos/${id}` + + startTransition(async () => { + const write = addLocalWrite('delete', { id }) + + const fetchPromise = sendRequest(path, 'DELETE', write) + const syncPromise = matchWrite(stream, write) + + await Promise.all([fetchPromise, syncPromise]) + }) + } + + if (isLoading) { + return
Loading …
+ } + + // The template below the heading is identical to the other patterns. + + // prettier-ignore + return ( +
+      {/* JSX template: heading "3. Shared persistent", todo list with delete (✕) buttons, an "All done 🎉" empty state and an add-todo form; identical to the other patterns */}
) +} diff --git a/examples/write-patterns/patterns/4-through-the-db/README.md index 3418becde6..951fab6ab2 100644 --- a/examples/write-patterns/patterns/4-through-the-db/README.md +++ b/examples/write-patterns/patterns/4-through-the-db/README.md @@ -1,5 +1,5 @@ -# Through the DB sync example +# Through-the-database sync pattern This is an example of an application using: @@ -8,16 +8,23 @@ This is an example of an application using: - shared, persistent optimistic state - automatic change detection and background sync -The implementation builds on the approach of storing optimistic state in a local [PGlite](https://electric-sql.com/product/pglite) database, introduced in the [combine on read](../../3-combine-on-read) pattern and extends it to automatically manage optimistic state lifecycle, present a single table interface for reads and writes and auto-sync the local writes. +The implementation builds on the approach of storing optimistic state in a local [PGlite](https://electric-sql.com/product/pglite) database, introduced in the [shared persistent optimistic state](../../3-shared-persistent) pattern and extends it to automatically manage optimistic state lifecycle, present a single table interface for reads and writes and auto-sync the local writes. Specifically, we: -1. sync data into an immutable table, persist optimistic state in a shadow table and combine the two on read using a view -4. detect local writes, write them into a log of change messages and send these to the server +1. sync data into an immutable table +2. persist optimistic state in a shadow table +3. combine the two on read using a view + +Then, for the write-path sync, we: + +4. detect local writes +5. write them into a change log table +6. POST the changes to the API server ## Benefits -This provides full offline support, shared optimistic state and allows your components to purely interact with the local database. Data fetching and sending is abstracted away behind the Electric sync (for reads) and the change message log (for writes). +This provides full offline support, shared optimistic state and allows your components to interact purely with the local database. No network calls are needed in your application code. Data fetching and sending is abstracted away behind the Electric sync (for reads) and the change message log (for writes). Good use-cases include: @@ -31,11 +38,6 @@ Combining data on-read makes local reads slightly slower. Using a local embedded ## Complexities -This implementation has the same two key complexities as the [combine-on-read](../3-combine-on-read) example: - -1. merge logic when receiving synced state from the server -2. handling rollbacks when writes are rejected - ### 1. Merge logic The entrypoint in the code for merge logic is the very blunt `delete_local_on_synced_trigger` defined in the [`./local-schema.sql`](./local-schema.sql). The current implementation just wipes any local state for a row when any insert, updater or delete to that row syncs in from the server. @@ -44,7 +46,7 @@ This approach works and is simple to reason about. However, it won't preserve lo ### 2. Rollbacks -Syncing changes in the background complicates any potential rollback handling. In the [combine on read](../../3-combine-on-read) pattern, you can detect a write being rejected by the server whilst still in context, handling user input. With through the database sync, this context is harder to reconstruct. +Syncing changes in the background complicates any potential rollback handling. In the [shared persistent optimistic state](../../3-shared-persistent) pattern, you can detect a write being rejected by the server whilst you're still in the context of handling the user input. With through-the-database sync, this context is harder to reconstruct.
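To make that concrete, here's a minimal sketch of the kind of wholesale rollback the next paragraph describes, mirroring the `rollback` method in `./sync.ts` further down this diff; the `changes` and `todos_local` table names come from `./local-schema.sql`, and the standalone `rollbackAll` wrapper is illustrative:

```ts
import { PGlite } from '@electric-sql/pglite'

// Bluntly discard all pending local writes: clear the change log and the
// local optimistic-state table together, in a single transaction.
async function rollbackAll(db: PGlite): Promise<void> {
  await db.transaction(async (tx) => {
    await tx.sql`DELETE FROM changes`
    await tx.sql`DELETE FROM todos_local`
  })
}
```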
+Syncing changes in the background complicates any potential rollback handling. In the [shared persistent optimistic state](../../3-shared-persistent) pattern, you can detect a write being rejected by the server whilst still in context, handling user input. With through the database sync, this context is harder to reconstruct. In this example implementation, we implement an extremely blunt rollback strategy of clearing all local state and writes in the event of any write being rejected by the server. diff --git a/examples/write-patterns/patterns/4-through-the-db/db.ts b/examples/write-patterns/patterns/4-through-the-db/db.ts new file mode 100644 index 0000000000..3ac8c56162 --- /dev/null +++ b/examples/write-patterns/patterns/4-through-the-db/db.ts @@ -0,0 +1,43 @@ +import { PGlite } from '@electric-sql/pglite' +import { type PGliteWithLive, live } from '@electric-sql/pglite/live' +import { electricSync } from '@electric-sql/pglite-sync' + +import localSchemaMigrations from './local-schema.sql?raw' + +const DATA_DIR = 'idb://electric-write-patterns-example' +const ELECTRIC_URL = import.meta.env.ELECTRIC_URL || 'http://localhost:3000' + +const registry = new Map>() + +export default async function loadPGlite(): Promise { + const loadingPromise = registry.get('loadingPromise') + + if (loadingPromise === undefined) { + registry.set('loadingPromise', _loadPGlite()) + } + + return loadingPromise as Promise +} + +async function _loadPGlite(): Promise { + const pglite: PGliteWithLive = await PGlite.create(DATA_DIR, { + extensions: { + electric: electricSync(), + live, + }, + }) + + await pglite.exec(localSchemaMigrations) + + await pglite.electric.syncShapeToTable({ + shape: { + url: `${ELECTRIC_URL}/v1/shape`, + table: 'todos', + }, + shapeKey: 'todos', + table: 'todos_synced', + primaryKey: ['id'], + }) + + return pglite +} diff --git a/examples/write-patterns/patterns/4-through-the-db/index.tsx b/examples/write-patterns/patterns/4-through-the-db/index.tsx index e353ac856b..05f17a7af4 100644 --- a/examples/write-patterns/patterns/4-through-the-db/index.tsx +++ b/examples/write-patterns/patterns/4-through-the-db/index.tsx @@ -1,4 +1,4 @@ -import React from 'react' +import React, { useEffect, useState } from 'react' import { v4 as uuidv4 } from 'uuid' import { @@ -6,13 +6,10 @@ import { useLiveQuery, usePGlite, } from '@electric-sql/pglite-react' +import { type PGliteWithLive } from '@electric-sql/pglite/live' -import pglite from '../../shared/app/db' - -import SyncChanges from './sync' -import localSchemaMigrations from './local-schema.sql?raw' - -const ELECTRIC_URL = import.meta.env.ELECTRIC_URL || 'http://localhost:3000' +import loadPGlite from './db' +import ChangeLogSynchronizer from './sync' type Todo = { id: string @@ -21,29 +18,52 @@ type Todo = { created_at: Date } -// Note that the resources defined in the schema for this pattern -// are all suffixed with `p4_`. -await pglite.exec(localSchemaMigrations) - -// This starts the read path sync using Electric. -await pglite.electric.syncShapeToTable({ - shape: { - url: `${ELECTRIC_URL}/v1/shape`, - table: 'todos', - }, - shapeKey: 'p4_todos', - table: 'p4_todos_synced', - primaryKey: ['id'], -}) - -// This starts the write path sync of changes captured in the triggers from -// writes to the local DB. -const syncChanges = new SyncChanges(pglite) -syncChanges.start() - +/* + * Setup the local PGlite database, with automatic change detection and syncing. 
+ * + * See `./local-schema.sql` for the local database schema, including view + * and trigger machinery. + * + * See `./sync.ts` for the write-path sync utility, which listens to changes + * using pg_notify, as per https://pglite.dev/docs/api#listen + */ export default function Wrapper() { + const [db, setDb] = useState() + + useEffect(() => { + let isMounted = true + let writePathSync: ChangeLogSynchronizer + + async function init() { + const pglite = await loadPGlite() + + if (!isMounted) { + return + } + + writePathSync = new ChangeLogSynchronizer(pglite) + writePathSync.start() + + setDb(pglite) + } + + init() + + return () => { + isMounted = false + + if (writePathSync !== undefined) { + writePathSync.stop() + } + } + }, []) + + if (db === undefined) { + return
Loading …
+ } + return ( - + ) @@ -51,9 +71,7 @@ export default function Wrapper() { function ThroughTheDB() { const db = usePGlite() - const results = useLiveQuery( - 'SELECT * FROM p4_todos ORDER BY created_at' - ) + const results = useLiveQuery('SELECT * FROM todos ORDER BY created_at') async function createTodo(event: React.FormEvent) { event.preventDefault() @@ -63,7 +81,7 @@ function ThroughTheDB() { const title = formData.get('todo') as string await db.sql` - INSERT INTO p4_todos ( + INSERT INTO todos ( id, title, completed, @@ -84,7 +102,7 @@ function ThroughTheDB() { const { id, completed } = todo await db.sql` - UPDATE p4_todos + UPDATE todos SET completed = ${!completed} WHERE id = ${id} ` @@ -94,7 +112,7 @@ function ThroughTheDB() { event.preventDefault() await db.sql` - DELETE FROM p4_todos + DELETE FROM todos WHERE id = ${todo.id} ` } diff --git a/examples/write-patterns/patterns/4-through-the-db/local-schema.sql b/examples/write-patterns/patterns/4-through-the-db/local-schema.sql index 0a7de2c099..7d9b8bebcd 100644 --- a/examples/write-patterns/patterns/4-through-the-db/local-schema.sql +++ b/examples/write-patterns/patterns/4-through-the-db/local-schema.sql @@ -1,18 +1,20 @@ -- This is the local database schema for PGlite. --- Note that the resources are prefixed by a `p4` namespace (standing for pattern 4) --- in order to avoid clashing with the resources defined in pattern 3. +-- It uses two tables: `todos_synced` and `todos_local`. These are combined +-- into a `todos` view that provides a merged view on both tables and supports +-- local live queries. Writes to the `todos` view are redirected using +-- `INSTEAD OF` triggers to the `todos_local` and `changes` tables. --- The `p4_todos_synced` table for immutable, synced state from the server. -CREATE TABLE IF NOT EXISTS p4_todos_synced ( +-- The `todos_synced` table for immutable, synced state from the server. +CREATE TABLE IF NOT EXISTS todos_synced ( id UUID PRIMARY KEY, title TEXT NOT NULL, completed BOOLEAN NOT NULL, created_at TIMESTAMP WITH TIME ZONE NOT NULL ); --- The `p4_todos_local` table for local optimistic state. -CREATE TABLE IF NOT EXISTS p4_todos_local ( +-- The `todos_local` table for local optimistic state. +CREATE TABLE IF NOT EXISTS todos_local ( id UUID PRIMARY KEY, title TEXT, completed BOOLEAN, @@ -21,8 +23,8 @@ CREATE TABLE IF NOT EXISTS p4_todos_local ( is_deleted BOOLEAN DEFAULT FALSE ); --- The `p4_todos` view to combine the two tables on read. -CREATE OR REPLACE VIEW p4_todos AS +-- The `todos` view to combine the two tables on read. +CREATE OR REPLACE VIEW todos AS SELECT COALESCE(local.id, synced.id) AS id, CASE @@ -40,50 +42,54 @@ CREATE OR REPLACE VIEW p4_todos AS THEN local.created_at ELSE synced.created_at END AS created_at - FROM p4_todos_synced AS synced - FULL OUTER JOIN p4_todos_local AS local + FROM todos_synced AS synced + FULL OUTER JOIN todos_local AS local ON synced.id = local.id WHERE local.id IS NULL OR local.is_deleted = FALSE; --- A trigger to automatically remove local optimistic state. -CREATE OR REPLACE FUNCTION p4_delete_local_on_sync_trigger() +-- A trigger to automatically remove local optimistic state when the +-- corresponding row syncs over the replication stream. This is a blunt +-- merge strategy. More sophisticated apps can implement more +-- sophisticated merge / rebase strategies. 
+CREATE OR REPLACE FUNCTION delete_local_on_sync_trigger() RETURNS TRIGGER AS $$ BEGIN - DELETE FROM p4_todos_local WHERE id = OLD.id; + DELETE FROM todos_local WHERE id = OLD.id; RETURN NEW; END; $$ LANGUAGE plpgsql; -CREATE OR REPLACE TRIGGER p4_delete_local_on_sync -AFTER INSERT OR UPDATE OR DELETE ON p4_todos_synced +CREATE OR REPLACE TRIGGER delete_local_on_sync +AFTER INSERT OR UPDATE OR DELETE ON todos_synced FOR EACH ROW -EXECUTE FUNCTION p4_delete_local_on_sync_trigger(); +EXECUTE FUNCTION delete_local_on_sync_trigger(); -- The local `changes` table for capturing and persisting a log -- of local write operations that we want to sync to the server. -CREATE TABLE IF NOT EXISTS p4_changes ( +CREATE TABLE IF NOT EXISTS changes ( id BIGSERIAL PRIMARY KEY, operation TEXT NOT NULL, value JSONB NOT NULL, transaction_id XID8 NOT NULL ); --- We now define `INSTEAD OF` triggers to: +-- The following `INSTEAD OF` triggers: -- 1. allow the app code to write directly to the view -- 2. to capture write operations and write change messages into the -- The insert trigger -CREATE OR REPLACE FUNCTION p4_todos_insert_trigger() +CREATE OR REPLACE FUNCTION todos_insert_trigger() RETURNS TRIGGER AS $$ BEGIN - IF EXISTS (SELECT 1 FROM p4_todos_synced WHERE id = NEW.id) THEN + IF EXISTS (SELECT 1 FROM todos_synced WHERE id = NEW.id) THEN RAISE EXCEPTION 'Cannot insert: id already exists in the synced table'; END IF; - IF EXISTS (SELECT 1 FROM p4_todos_local WHERE id = NEW.id) THEN + IF EXISTS (SELECT 1 FROM todos_local WHERE id = NEW.id) THEN RAISE EXCEPTION 'Cannot insert: id already exists in the local table'; END IF; - INSERT INTO p4_todos_local ( + -- Insert into the local table. + INSERT INTO todos_local ( id, title, completed, @@ -98,7 +104,8 @@ BEGIN ARRAY['title', 'completed', 'created_at'] ); - INSERT INTO p4_changes ( + -- Record the write operation in the change log. + INSERT INTO changes ( operation, value, transaction_id @@ -119,16 +126,16 @@ END; $$ LANGUAGE plpgsql; -- The update trigger -CREATE OR REPLACE FUNCTION p4_todos_update_trigger() +CREATE OR REPLACE FUNCTION todos_update_trigger() RETURNS TRIGGER AS $$ DECLARE - synced p4_todos_synced%ROWTYPE; - local p4_todos_local%ROWTYPE; + synced todos_synced%ROWTYPE; + local todos_local%ROWTYPE; changed_cols TEXT[] := '{}'; BEGIN -- Fetch the corresponding rows from the synced and local tables - SELECT * INTO synced FROM p4_todos_synced WHERE id = NEW.id; - SELECT * INTO local FROM p4_todos_local WHERE id = NEW.id; + SELECT * INTO synced FROM todos_synced WHERE id = NEW.id; + SELECT * INTO local FROM todos_local WHERE id = NEW.id; -- If the row is not present in the local table, insert it IF NOT FOUND THEN @@ -143,7 +150,7 @@ BEGIN changed_cols := array_append(changed_cols, 'created_at'); END IF; - INSERT INTO p4_todos_local ( + INSERT INTO todos_local ( id, title, completed, @@ -161,7 +168,7 @@ BEGIN -- Otherwise, if the row is already in the local table, update it and adjust -- the changed_columns ELSE - UPDATE p4_todos_local + UPDATE todos_local SET title = CASE @@ -203,7 +210,8 @@ BEGIN WHERE id = NEW.id; END IF; - INSERT INTO p4_changes ( + -- Record the update into the change log. 
+ INSERT INTO changes ( operation, value, transaction_id @@ -226,16 +234,17 @@ END; $$ LANGUAGE plpgsql; -- The delete trigger -CREATE OR REPLACE FUNCTION p4_todos_delete_trigger() +CREATE OR REPLACE FUNCTION todos_delete_trigger() RETURNS TRIGGER AS $$ BEGIN - IF EXISTS (SELECT 1 FROM p4_todos_local WHERE id = OLD.id) THEN - UPDATE p4_todos_local + -- Upsert a soft-deletion record in the local table. + IF EXISTS (SELECT 1 FROM todos_local WHERE id = OLD.id) THEN + UPDATE todos_local SET is_deleted = TRUE WHERE id = OLD.id; ELSE - INSERT INTO p4_todos_local ( + INSERT INTO todos_local ( id, is_deleted ) @@ -245,7 +254,8 @@ BEGIN ); END IF; - INSERT INTO p4_changes ( + -- Record in the change log. + INSERT INTO changes ( operation, value, transaction_id @@ -262,30 +272,31 @@ BEGIN END; $$ LANGUAGE plpgsql; -CREATE OR REPLACE TRIGGER p4_todos_insert -INSTEAD OF INSERT ON p4_todos +CREATE OR REPLACE TRIGGER todos_insert +INSTEAD OF INSERT ON todos FOR EACH ROW -EXECUTE FUNCTION p4_todos_insert_trigger(); +EXECUTE FUNCTION todos_insert_trigger(); -CREATE OR REPLACE TRIGGER p4_todos_update -INSTEAD OF UPDATE ON p4_todos +CREATE OR REPLACE TRIGGER todos_update +INSTEAD OF UPDATE ON todos FOR EACH ROW -EXECUTE FUNCTION p4_todos_update_trigger(); +EXECUTE FUNCTION todos_update_trigger(); -CREATE OR REPLACE TRIGGER p4_todos_delete -INSTEAD OF DELETE ON p4_todos +CREATE OR REPLACE TRIGGER todos_delete +INSTEAD OF DELETE ON todos FOR EACH ROW -EXECUTE FUNCTION p4_todos_delete_trigger(); +EXECUTE FUNCTION todos_delete_trigger(); -CREATE OR REPLACE FUNCTION p4_changes_notify_trigger() +-- Notify on a `changes` topic whenever anything is added to the change log. +CREATE OR REPLACE FUNCTION changes_notify_trigger() RETURNS TRIGGER AS $$ BEGIN - NOTIFY p4_changes; + NOTIFY changes; RETURN NEW; END; $$ LANGUAGE plpgsql; -CREATE OR REPLACE TRIGGER p4_changes_notify -AFTER INSERT ON p4_changes +CREATE OR REPLACE TRIGGER changes_notify +AFTER INSERT ON changes FOR EACH ROW -EXECUTE FUNCTION p4_changes_notify_trigger(); +EXECUTE FUNCTION changes_notify_trigger(); diff --git a/examples/write-patterns/patterns/4-through-the-db/sync.ts b/examples/write-patterns/patterns/4-through-the-db/sync.ts index dc046c2766..2b70cc9be4 100644 --- a/examples/write-patterns/patterns/4-through-the-db/sync.ts +++ b/examples/write-patterns/patterns/4-through-the-db/sync.ts @@ -22,19 +22,18 @@ type SendResult = 'accepted' | 'rejected' | 'retry' * Minimal, naive synchronization utility, just to illustrate the pattern of * `listen` to `changes` and `POST` them to the api server. */ -export default class LocalChangeSynchronizer { +export default class ChangeLogSynchronizer { #db: PGliteWithLive #position: TransactionId - #status: 'idle' | 'processing' = 'idle' #hasChangedWhileProcessing: boolean = false + #shouldContinue: boolean = true + #status: 'idle' | 'processing' = 'idle' + #abortController?: AbortController #unsubscribe?: () => Promise - #shouldContinue: boolean = true constructor(db: PGliteWithLive, position = '0') { - console.log('new LocalChangeSynchronizer', db) - this.#db = db this.#position = position } @@ -43,12 +42,8 @@ export default class LocalChangeSynchronizer { * Start by listening for notifications. 
*/ async start(): Promise { - console.log('start') - - this.#unsubscribe = await this.#db.listen( - 'p4_changes', - this.handle.bind(this) - ) + this.#abortController = new AbortController() + this.#unsubscribe = await this.#db.listen('changes', this.handle.bind(this)) this.process() } @@ -58,8 +53,6 @@ export default class LocalChangeSynchronizer { * so we can process them straightaway on the next loop. */ async handle(): Promise { - console.log('handle') - if (this.#status === 'processing') { this.#hasChangedWhileProcessing = true @@ -72,8 +65,6 @@ export default class LocalChangeSynchronizer { // Process the changes by fetching them and posting them to the server. // If the changes are accepted then proceed, otherwise rollback or retry. async process(): Promise { - console.log('process', this.#position) - this.#status === 'processing' this.#hasChangedWhileProcessing = false @@ -111,18 +102,14 @@ export default class LocalChangeSynchronizer { * Fetch the current batch of changes */ async query(): Promise<{ changes: Change[]; position: TransactionId }> { - console.log('query') - const { rows } = await this.#db.sql` - SELECT * from p4_changes + SELECT * from changes WHERE transaction_id > ${this.#position} ORDER BY transaction_id asc, id asc ` - console.log('rows', rows) - const position = rows.length ? rows.at(-1)!.transaction_id : this.#position return { @@ -135,8 +122,6 @@ export default class LocalChangeSynchronizer { * Send the current batch of changes to the server, grouped by transaction. */ async send(changes: Change[]): Promise { - console.log('send', changes) - const path = '/changes' const groups = Object.groupBy(changes, (x) => x.transaction_id) @@ -150,7 +135,14 @@ export default class LocalChangeSynchronizer { } }) - const response = await api.request(path, 'POST', transactions) + const signal = this.#abortController?.signal + + let response: Response + try { + response = await api.request(path, 'POST', transactions, signal) + } catch (_err) { + return 'retry' + } if (response === undefined) { return 'retry' @@ -167,10 +159,8 @@ export default class LocalChangeSynchronizer { * Proceed by clearing the processed changes and moving the position forward. */ async proceed(position: TransactionId): Promise { - console.log('proceed', position) - await this.#db.sql` - DELETE from p4_changes + DELETE from changes WHERE id <= ${position} ` @@ -182,11 +172,9 @@ export default class LocalChangeSynchronizer { * wipe the entire local state. 
*/ async rollback(): Promise { - console.log('rollback') - await this.#db.transaction(async (tx) => { - await tx.sql`DELETE from p4_changes` - await tx.sql`DELETE from p4_todos_local` + await tx.sql`DELETE from changes` + await tx.sql`DELETE from todos_local` }) } @@ -196,6 +184,10 @@ export default class LocalChangeSynchronizer { async stop(): Promise { this.#shouldContinue = false + if (this.#abortController !== undefined) { + this.#abortController.abort() + } + if (this.#unsubscribe !== undefined) { await this.#unsubscribe() } diff --git a/examples/write-patterns/patterns/index.ts b/examples/write-patterns/patterns/index.ts index 120e96e3a8..653ca5bfcd 100644 --- a/examples/write-patterns/patterns/index.ts +++ b/examples/write-patterns/patterns/index.ts @@ -1,4 +1,4 @@ export { default as OnlineWrites } from './1-online-writes' export { default as OptimisticState } from './2-optimistic-state' -export { default as CombineOnRead } from './3-combine-on-read' +export { default as SharedPersistent } from './3-shared-persistent' export { default as ThroughTheDB } from './4-through-the-db' diff --git a/examples/write-patterns/public/screenshot.png b/examples/write-patterns/public/screenshot.png new file mode 100644 index 0000000000..9ecade6e99 Binary files /dev/null and b/examples/write-patterns/public/screenshot.png differ diff --git a/examples/write-patterns/shared/app/App.tsx b/examples/write-patterns/shared/app/App.tsx index 29af4b18b9..3e42d9cfcf 100644 --- a/examples/write-patterns/shared/app/App.tsx +++ b/examples/write-patterns/shared/app/App.tsx @@ -1,9 +1,9 @@ import './style.css' import { - CombineOnRead, OnlineWrites, OptimisticState, + SharedPersistent, ThroughTheDB, } from '../../patterns' @@ -12,7 +12,7 @@ const App = () => {
- +
) diff --git a/examples/write-patterns/shared/app/client.ts b/examples/write-patterns/shared/app/client.ts index ac5d5dd84b..5db70c4306 100644 --- a/examples/write-patterns/shared/app/client.ts +++ b/examples/write-patterns/shared/app/client.ts @@ -4,6 +4,7 @@ type RequestOptions = { method: string headers: HeadersInit body?: string + signal?: AbortSignal } // Keeps trying for 3 minutes, with the delay @@ -51,7 +52,12 @@ async function resilientFetch( } } -async function request(path: string, method: string, data?: object) { +async function request( + path: string, + method: string, + data?: object, + signal?: AbortSignal +) { const url = `${API_URL}${path}` const options: RequestOptions = { @@ -61,10 +67,14 @@ async function request(path: string, method: string, data?: object) { }, } - if (data) { + if (data !== undefined) { options.body = JSON.stringify(data) } + if (signal !== undefined) { + options.signal = signal + } + return await resilientFetch(url, options, 0) } diff --git a/examples/write-patterns/shared/app/db.ts b/examples/write-patterns/shared/app/db.ts deleted file mode 100644 index 85ea196934..0000000000 --- a/examples/write-patterns/shared/app/db.ts +++ /dev/null @@ -1,15 +0,0 @@ -import { PGlite } from '@electric-sql/pglite' -import { PGliteWithLive, live } from '@electric-sql/pglite/live' -import { electricSync } from '@electric-sql/pglite-sync' - -const pglite: PGliteWithLive = await PGlite.create( - 'idb://electric-write-patterns', - { - extensions: { - electric: electricSync(), - live, - }, - } -) - -export default pglite diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml index a2a21503e9..88b141b68e 100644 --- a/pnpm-lock.yaml +++ b/pnpm-lock.yaml @@ -707,6 +707,9 @@ importers: uuid: specifier: ^10.0.0 version: 10.0.0 + valtio: + specifier: ^2.1.2 + version: 2.1.2(react@19.0.0-rc.1)(types-react@19.0.0-rc.1) zod: specifier: ^3.23.8 version: 3.23.8 @@ -7435,6 +7438,9 @@ packages: resolution: {integrity: sha512-llQsMLSUDUPT44jdrU/O37qlnifitDP+ZwrmmZcoSKyLKvtZxpyV0n2/bD/N4tBAAZ/gJEdZU7KMraoK1+XYAg==} engines: {node: '>= 0.10'} + proxy-compare@3.0.1: + resolution: {integrity: sha512-V9plBAt3qjMlS1+nC8771KNf6oJ12gExvaxnNzN/9yVRLdTv/lc+oJlnSzrdYDAvBfTStPCoiaCOTmTs0adv7Q==} + pseudomap@1.0.2: resolution: {integrity: sha512-b/YwNhb8lk1Zz2+bXXpS/LK9OisiZZ1SNsSLxN1x2OXVEhW2Ckr/7mWE5vrC1ZTiJlD9g19jWszTmJsB+oEpFQ==} @@ -8646,6 +8652,18 @@ packages: resolution: {integrity: sha512-OljLrQ9SQdOUqTaQxqL5dEfZWrXExyyWsozYlAWFawPVNuD83igl7uJD2RTkNMbniIYgt8l81eCJGIdQF7avLQ==} engines: {node: ^14.17.0 || ^16.13.0 || >=18.0.0} + valtio@2.1.2: + resolution: {integrity: sha512-fhekN5Rq7dvHULHHBlJeXHrQDl0Jj9GXfNavCm3gkD06crGchaG1nf/J7gSlfZU2wPcRdVS5jBKWHtE2NNz97A==} + engines: {node: '>=12.20.0'} + peerDependencies: + '@types/react': '>=18.0.0' + react: '>=18.0.0' + peerDependenciesMeta: + '@types/react': + optional: true + react: + optional: true + vary@1.1.2: resolution: {integrity: sha512-BNGbWLfd0eUPabhkXUVm0j8uuvREyTh5ovRa/dyow/BqAbZJyC+5fU+IzQOzmAKzYqYRAISoRhdQr3eIZ/PXqg==} engines: {node: '>= 0.8'} @@ -16314,6 +16332,8 @@ snapshots: forwarded: 0.2.0 ipaddr.js: 1.9.1 + proxy-compare@3.0.1: {} + pseudomap@1.0.2: {} pstree.remy@1.1.8: {} @@ -17692,6 +17712,13 @@ snapshots: validate-npm-package-name@5.0.1: {} + valtio@2.1.2(react@19.0.0-rc.1)(types-react@19.0.0-rc.1): + dependencies: + proxy-compare: 3.0.1 + optionalDependencies: + '@types/react': types-react@19.0.0-rc.1 + react: 19.0.0-rc.1 + vary@1.1.2: {} vfile-message@3.1.4: