PGlite API
Main Constructor
The main constructor is imported as:
import { PGlite } from '@electric-sql/pglite'
The preferred way to create a PGlite instance is with the PGlite.create()
static method that returns a promise, resolving to the new PGlite instance.
await PGlite.create(dataDir: string, options: PGliteOptions)
await PGlite.create(options: PGliteOptions)
There are a couple of advantages to using the static method:
- It awaits the .waitReady promise, ensuring that the database has been fully initialised.
- When using TypeScript and extensions, the returned PGlite instance will have the extensions namespace on its type. This is not possible with the standard constructor due to TypeScript limitations.
A new PGlite instance can also be created using the new PGlite()
constructor.
new PGlite(dataDir: string, options: PGliteOptions)
new PGlite(options: PGliteOptions)
dataDir
Path to the directory for storing the Postgres database. You can provide a URI scheme for various storage backends:
- file:// or unprefixed: File system storage, available in Node and Bun.
- idb://: IndexedDB storage, available in the browser.
- memory://: In-memory ephemeral storage, available on all platforms.
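For illustration, a minimal sketch of each storage scheme (the path and database name here are arbitrary examples):
import { PGlite } from '@electric-sql/pglite'

const pgOnDisk = new PGlite('./path/to/pgdata') // file system storage, Node and Bun
const pgInBrowser = new PGlite('idb://my-database') // IndexedDB storage, browser
const pgInMemory = new PGlite('memory://') // in-memory ephemeral storage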
options
- dataDir?: string - The directory in which to store the Postgres database when not provided as the first argument.
- debug?: 1-5 - The Postgres debug level. Logs are sent to the console.
- relaxedDurability?: boolean - Under relaxed durability mode, PGlite will not wait for flushes to storage to complete after each query before returning results. This is particularly useful when using the IndexedDB file system.
- fs?: Filesystem - The alternative to providing a dataDir with a filesystem prefix is to initialise a Filesystem yourself and provide it here. See Filesystems.
- loadDataDir?: Blob | File - A tarball of a PGlite datadir to load when the database starts. This should be a tarball produced from the related .dumpDataDir() method.
- extensions?: Extensions - An object containing the extensions you wish to load.
- username?: string - The username of the user to connect to the database as. Permissions will be applied in the context of this user.
- database?: string - The database from the Postgres cluster within the dataDir to connect to.
- initialMemory?: number - The initial amount of memory in bytes to allocate for the PGlite instance. PGlite will grow the memory automatically, but if you have a particularly large database you can set this higher to prevent the pause during memory growth.
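As a brief sketch, creating an instance with a few of these options (the IndexedDB database name is an arbitrary example):
import { PGlite } from '@electric-sql/pglite'

const pg = await PGlite.create({
  dataDir: 'idb://my-database', // arbitrary example name
  relaxedDurability: true, // don't wait for IndexedDB flushes before returning results
  debug: 1, // lowest Postgres debug level, logged to the console
})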
options.extensions
PGlite and Postgres extensions are loaded into a PGlite instance on start, and can include a WASM build of a Postgres extension, a PGlite client plugin, or both.
The options.extensions parameter is an object of namespace: extension pairings. The namespace is used to expose the PGlite client plugin included in the extension. An example of this is the live queries extension.
import { PGlite } from '@electric-sql/pglite'
import { live } from '@electric-sql/pglite/live'
import { vector } from '@electric-sql/pglite/vector'
const pg = await PGlite.create({
extensions: {
    live, // Live query extension, a PGlite client plugin
vector, // Postgres pgvector extension
},
})
// The `live` namespace is added by the use of the
// `live` key in the `extensions` object.
pg.live.query('...')
For information on how to develop a PGlite extension see Extension Development.
Methods
query
.query<T>(query: string, params?: any[], options?: QueryOptions): Promise<Results<T>>
Execute a single statement, optionally with parameters.
Uses the extended query Postgres wire protocol.
Returns single result object.
Example
await pg.query('INSERT INTO test (name) VALUES ($1);', ['test'])
// { affectedRows: 1 }
Query Options
The query and exec methods take an optional options object with the following parameters:
- rowMode: "object" | "array" - The returned row object type, either an object of fieldName: value mappings or an array of positional values. Defaults to "object".
- parsers: ParserOptions - An object of type { [pgType: number]: (value: string) => any } mapping Postgres data type IDs to parser functions. For convenience, the pglite package exports a constant for most common Postgres types:

import { types } from '@electric-sql/pglite'

await pg.query(
  `
  SELECT * FROM test WHERE name = $1;
  `,
  ['test'],
  {
    rowMode: 'array',
    parsers: {
      [types.TEXT]: (value) => value.toUpperCase(),
    },
  },
)

- blob: Blob | File - Attach a Blob or File object to the query that can be used with a COPY FROM command by using the virtual /dev/blob device, see importing and exporting.
exec
.exec(query: string, options?: QueryOptions): Promise<Array<Results>>
Execute one or more statements. (Note that parameters are not supported.)
This is useful for applying database migrations, or running multi-statement SQL that doesn't use parameters.
Uses the simple query Postgres wire protocol.
Returns array of result objects; one for each statement.
Example
await pg.exec(`
CREATE TABLE IF NOT EXISTS test (
id SERIAL PRIMARY KEY,
name TEXT
);
INSERT INTO test (name) VALUES ('test');
SELECT * FROM test;
`)
// [
//   { affectedRows: 0 },
//   { affectedRows: 1 },
//   {
//     rows: [
//       { id: 1, name: 'test' }
//     ],
//     affectedRows: 0,
//     fields: [
//       { name: 'id', dataTypeID: 23 },
//       { name: 'name', dataTypeID: 25 },
//     ]
//   }
// ]
transaction
.transaction<T>(callback: (tx: Transaction) => Promise<T>)
To start an interactive transaction, pass a callback to the transaction method. It is passed a Transaction
object which can be used to perform operations within the transaction.
The transaction will be committed when the promise returned from your callback resolves, and automatically rolled back if the promise is rejected.
Transaction objects
- tx.query<T>(query: string, params?: any[], options?: QueryOptions): Promise<Results<T>> - The same as the main .query method.
- tx.exec(query: string, options?: QueryOptions): Promise<Array<Results>> - The same as the main .exec method.
- tx.rollback() - Rollback and close the current transaction.
Example
await pg.transaction(async (tx) => {
await tx.query(
    'INSERT INTO test (name) VALUES ($1);',
[ 'test' ]
);
  return await tx.query('SELECT * FROM test;');
});
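A minimal sketch of an explicit rollback with the tx.rollback() method described above (assuming it returns a promise that can be awaited):
await pg.transaction(async (tx) => {
  await tx.query('INSERT INTO test (name) VALUES ($1);', ['temp'])
  // Abort the transaction; the insert above is discarded
  await tx.rollback()
})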
close
.close(): Promise<void>
Close the database, ensuring it is shut down cleanly.
listen
.listen(channel: string, callback: (payload: string) => void): Promise<void>
Subscribe to a pg_notify channel. The callback will receive the payload from the notification.
Returns an unsubscribe function to unsubscribe from the channel.
Example
const unsub = await pg.listen('test', (payload) => {
console.log('Received:', payload)
})
await pg.query("NOTIFY test, 'Hello, world!'")
unlisten
.unlisten(channel: string, callback?: (payload: string) => void): Promise<void>
Unsubscribe from the channel. If a callback is provided it removes only that callback from the subscription. When no callback is provided, it unsubscribes all callbacks for the channel.
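A minimal sketch building on the listen example above; the callback name is illustrative:
const onTest = (payload: string) => console.log('Received:', payload)
await pg.listen('test', onTest)

// Remove only this callback from the 'test' channel
await pg.unlisten('test', onTest)

// Or remove all callbacks for the channel
await pg.unlisten('test')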
onNotification
onNotification(callback: (channel: string, payload: string) => void): () => void
Add an event handler for all notifications received from Postgres.
Note: This does not subscribe to the notification; you will need to manually subscribe with LISTEN channel_name.
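A minimal sketch, reusing the illustrative test channel from the listen example above:
const unsubNotifications = pg.onNotification((channel, payload) => {
  console.log('Notification on', channel, payload)
})

// onNotification does not LISTEN for you, so subscribe manually
await pg.exec('LISTEN test;')
await pg.query("NOTIFY test, 'Hello, world!'")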
offNotification
offNotification(callback: (channel: string, payload: string) => void): void
Remove an event handler for all notifications received from Postgres.
dumpDataDir
dumpDataDir(): Promise<File | Blob>
Dump the Postgres datadir
to a Gzipped tarball.
This can then be used in combination with the loadDataDir
option when starting PGlite to load a dumped database from storage.
NOTE
The datadir dump may not be compatible with other Postgres versions; it is only designed for importing back into PGlite.
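A minimal sketch of a dump and restore round trip, using the loadDataDir option described in the constructor options above:
// Dump the current database to a gzipped tarball
const dump = await pg.dumpDataDir()

// Later, start a new PGlite instance from the dump
const restored = await PGlite.create({ loadDataDir: dump })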
Properties
ready
.ready
boolean (read only)
Whether the database is ready to accept queries.
closed
.closed
boolean (read only)
Whether the database is closed and no longer accepting queries.
waitReady
.waitReady
Promise<void>
Promise that resolves when the database is ready to use.
NOTE
Query methods will wait for the waitReady
promise to resolve if called before the database has fully initialised, and so it is not necessary to wait for it explicitly.
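A minimal sketch of these properties using the new PGlite() constructor, where explicit waiting can be useful:
const db = new PGlite()
await db.waitReady // resolves once the database has fully initialised
console.log(db.ready) // true
await db.close()
console.log(db.closed) // true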
Results<T> Objects
Result objects have the following properties:
- rows: Row<T>[] - The rows returned by the query.
- affectedRows?: number - Count of the rows affected by the query. Note that this is not the count of rows returned; it is the number of rows in the database changed by the query.
- fields: { name: string; dataTypeID: number }[] - Field name and Postgres data type ID for each field returned.
- blob?: Blob - A Blob containing the data written to the virtual /dev/blob device by a COPY TO command. See /dev/blob.
Row<T> Objects
Row objects are a key / value mapping for each row returned by the query.
The .query<T>()
method can take a TypeScript type describing the expected shape of the returned rows.
NOTE
These types are not validated at run time; the result is only cast to the provided type.
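A minimal sketch of a typed query, reusing the illustrative test table from the examples above; the interface name is arbitrary:
interface TestRow {
  id: number
  name: string
}

const res = await pg.query<TestRow>('SELECT id, name FROM test;')
// res.rows is typed as TestRow[]; the cast is not validated at run time
console.log(res.rows[0]?.name)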
/dev/blob
PGlite has support for importing and exporting via the SQL COPY TO/FROM
command by using a virtual /dev/blob
device.
To import a file, pass the File
or Blob
in the query options as blob
, and copy from the /dev/blob
device.
await pg.query("COPY my_table FROM '/dev/blob';", [], {
blob: MyBlob,
})
To export a table or query to a file, you just need to write to the /dev/blob
device; the file will be returned as blob
on the query results:
const ret = await pg.query("COPY my_table TO '/dev/blob';")
// ret.blob is a `Blob` object with the data from the copy.