Database
Deepkit provides an ORM that allows you to access databases in a modern way. Entities are simply defined using TypeScript types:
import { entity, PrimaryKey, AutoIncrement, Unique, MinLength, MaxLength } from '@deepkit/type';
@entity.name('user')
class User {
id: number & PrimaryKey & AutoIncrement = 0;
created: Date = new Date;
firstName?: string;
lastName?: string;
constructor(
public username: string & Unique & MinLength<2> & MaxLength<16>,
public email: string & Unique,
) {}
}
In doing so, any TypeScript types and validation type decorators from Deepkit can be used to fully define the entity. The entity type system is designed so that these types or classes can also be reused in other areas such as HTTP routes, RPC actions, or the frontend. This prevents, for example, a user from being defined several times across the application.
Installation
Since Deepkit ORM is based on Runtime Types, @deepkit/type must already be installed correctly. See Runtime Type Installation. Once this is done, @deepkit/orm itself and a database adapter can be installed.
If classes are to be used as entities, experimentalDecorators must be enabled in tsconfig.json:
{
"compilerOptions": {
"experimentalDecorators": true
}
}
Once the library is installed, a database adapter can be installed and its API used directly.
SQLite
npm install @deepkit/orm @deepkit/sqlite
import { SQLiteDatabaseAdapter } from '@deepkit/sqlite';
import { Database } from '@deepkit/orm';
//file-based database
const database = new Database(new SQLiteDatabaseAdapter('./example.sqlite'), [User]);
//alternatively, an in-memory database
const database = new Database(new SQLiteDatabaseAdapter(':memory:'), [User]);
MySQL
npm install @deepkit/orm @deepkit/mysql
import { MySQLDatabaseAdapter } from '@deepkit/mysql';
import { Database } from '@deepkit/orm';
const database = new Database(new MySQLDatabaseAdapter({
host: 'localhost',
port: 3306
}), [User]);
Use
Primarily, the Database object is used. Once instantiated, it can be used throughout the application to query or manipulate data. The connection to the database is initialized lazily.
The Database object is passed an adapter, which comes from one of the database adapter libraries.
import { SQLiteDatabaseAdapter } from '@deepkit/sqlite';
import { entity, PrimaryKey, AutoIncrement } from '@deepkit/type';
import { Database } from '@deepkit/orm';
async function main() {
@entity.name('user')
class User {
public id: number & PrimaryKey & AutoIncrement = 0;
created: Date = new Date;
constructor(public name: string) {
}
}
const database = new Database(new SQLiteDatabaseAdapter('./example.sqlite'), [User]);
await database.migrate(); //create tables
await database.persist(new User('Peter'));
const allUsers = await database.query(User).find();
console.log('all users', allUsers);
}
main();
Entity
An entity is either a class or an object literal (interface) and always has a primary key.
The entity is decorated with all necessary information using type decorators from @deepkit/type. For example, the primary key is defined, as well as various fields and their validation constraints. These fields reflect the database structure, usually a table or a collection.
Special type decorators such as Mapped<'name'> can also be used to map a field name to a different name in the database.
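For illustration, a sketch of such a mapping, based on the Mapped type described above (the field and column names are illustrative):

import { entity, PrimaryKey, Mapped } from '@deepkit/type';

@entity.name('user')
class User {
    id: number & PrimaryKey = 0;
    //stored as column 'first_name' in the database, 'firstName' at runtime
    firstName?: string & Mapped<'first_name'>;
}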
Class
import { entity, PrimaryKey, AutoIncrement, Unique, MinLength, MaxLength } from '@deepkit/type';
@entity.name('user')
class User {
id: number & PrimaryKey & AutoIncrement = 0;
created: Date = new Date;
firstName?: string;
lastName?: string;
constructor(
public username: string & Unique & MinLength<2> & MaxLength<16>,
public email: string & Unique,
) {}
}
const database = new Database(new SQLiteDatabaseAdapter(':memory:'), [User]);
await database.migrate();
await database.persist(new User('Peter', 'peter@example.com'));
const allUsers = await database.query(User).find();
console.log('all users', allUsers);
Interface
import { PrimaryKey, AutoIncrement, Unique, MinLength, MaxLength } from '@deepkit/type';
import { SQLiteDatabaseAdapter } from '@deepkit/sqlite';
import { Database } from '@deepkit/orm';
interface User {
id: number & PrimaryKey & AutoIncrement;
created: Date;
firstName?: string;
lastName?: string;
username: string & Unique & MinLength<2> & MaxLength<16>;
}
const database = new Database(new SQLiteDatabaseAdapter(':memory:'));
database.register<User>({name: 'user'});
await database.migrate();
const user: User = {id: 0, created: new Date, username: 'Peter'};
await database.persist(user);
const allUsers = await database.query<User>().find();
console.log('all users', allUsers);
Primitive
Primitive data types such as string, number (including bigint), and boolean are mapped to common database types. Only the TypeScript type is used.
interface User {
logins: number;
username: string;
pro: boolean;
}
Primary Key
Each entity needs exactly one primary key. Multiple primary keys are not supported.
The base type of a primary key can be arbitrary. Often a number or UUID is used. For MongoDB the MongoId or ObjectID is often used.
For numbers, AutoIncrement is a good choice.
import { PrimaryKey } from '@deepkit/type';
interface User {
id: number & PrimaryKey;
}
Auto Increment
Fields that should be automatically incremented during insertion are annotated with the AutoIncrement type decorator. All adapters support auto-increment values. The MongoDB adapter uses an additional collection to keep track of the counter.
An auto-increment field is an automatic counter and can only be applied to a primary key. The database automatically ensures that an ID is used only once.
import { PrimaryKey, AutoIncrement } from '@deepkit/type';
interface User {
id: number & PrimaryKey & AutoIncrement;
}
UUID
Fields that should be of type UUID (v4) are annotated with the UUID type decorator. The runtime type is string; in the database itself it is mostly stored as binary. Use the uuid() function to create a new UUID v4.
import { uuid, UUID, PrimaryKey } from '@deepkit/type';
class User {
id: UUID & PrimaryKey = uuid();
}
MongoDB ObjectID
Fields that should be of type ObjectId in MongoDB are annotated with the MongoId type decorator. The runtime type is string and in the database itself ObjectId (binary).
MongoId fields automatically get a new value when inserted. The field does not have to be named _id; it can have any name.
import { PrimaryKey, MongoId } from '@deepkit/type';
class User {
id: MongoId & PrimaryKey = '';
}
Optional / Nullable
Optional fields are declared as TypeScript types with title?: string or title: string | null. You should use only one of these variants, usually the optional ? syntax, which works with undefined.
Both variants result in the database type being NULLABLE for all SQL adapters. So the only difference between them is that they represent different values at runtime.
In the following example, the modified field is optional and can therefore be undefined at runtime, although it is always represented as NULL in the database.
import { PrimaryKey } from '@deepkit/type';
class User {
id: number & PrimaryKey = 0;
modified?: Date;
}
This example shows how the nullable type works. NULL is used both in the database and in the JavaScript runtime. This is more verbose than modified?: Date and is not commonly used.
import { PrimaryKey } from '@deepkit/type';
class User {
id: number & PrimaryKey = 0;
modified: Date | null = null;
}
Database Type Mapping
Runtime type | SQLite | MySQL | Postgres | Mongo
---|---|---|---|---
string | text | longtext | text | string
number | float | double | double precision | int/number
boolean | integer(1) | boolean | boolean | boolean
date | text | datetime | timestamp | datetime
array | text | json | jsonb | array
map | text | json | jsonb | object
union | text | json | jsonb | T
uuid | blob | binary(16) | uuid | binary
ArrayBuffer/Uint8Array/… | blob | longblob | bytea | binary
With DatabaseField it is possible to map a field to any database type. The type must be a valid SQL statement that is passed unchanged to the migration system.
import { DatabaseField } from '@deepkit/type';
interface User {
title: string & DatabaseField<{type: 'VARCHAR(244)'}>;
}
To map a field for a specific database only, the SQLite, MySQL, or Postgres type decorators can be used.
SQLite
import { SQLite } from '@deepkit/type';
interface User {
title: string & SQLite<{type: 'text'}>;
}
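MySQL and Postgres work the same way. A sketch using the MySQL and Postgres type decorators named above (the column types are illustrative):

import { MySQL, Postgres } from '@deepkit/type';

interface Article {
    //MySQL-specific column type
    title: string & MySQL<{type: 'varchar(244)'}>;
    //Postgres-specific column type
    body: string & Postgres<{type: 'text'}>;
}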
Session / Unit Of Work
A session is something like a unit of work. It keeps track of everything you do and automatically records the changes whenever commit() is called. It is the preferred way to execute changes in the database because it bundles statements in a way that makes them very fast. A session is very lightweight and can easily be created in a request-response lifecycle, for example.
import { SQLiteDatabaseAdapter } from '@deepkit/sqlite';
import { entity, PrimaryKey, AutoIncrement } from '@deepkit/type';
import { Database } from '@deepkit/orm';
async function main() {
@entity.name('user')
class User {
id: number & PrimaryKey & AutoIncrement = 0;
created: Date = new Date;
constructor(public name: string) {
}
}
const database = new Database(new SQLiteDatabaseAdapter(':memory:'), [User]);
await database.migrate();
const session = database.createSession();
session.add(new User('User1'), new User('User2'), new User('User3'));
await session.commit();
const users = await session.query(User).find();
console.log(users);
}
main();
Add new instances to the session with session.add(T) or remove existing instances with session.remove(T). Once you are done with the Session object, simply dereference it everywhere so that the garbage collector can remove it.
Changes are automatically detected for entity instances fetched via the Session object.
const users = await session.query(User).find();
for (const user of users) {
user.name += ' changed';
}
await session.commit(); //saves all users
Identity Map
Sessions provide an identity map that ensures there is only ever one JavaScript object per database entry. For example, if you run session.query(User).find() twice within the same session, you get two different arrays, but with the same entity instances in them.
If you add a new entity with session.add(entity1) and retrieve it again, you will get exactly the same entity instance entity1.
Important: Once you start using sessions, you should use their Session.query method instead of Database.query. Only session queries have the identity mapping feature enabled.
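A minimal sketch of this behavior, using the User entity from above:

const session = database.createSession();

const users1 = await session.query(User).find();
const users2 = await session.query(User).find();

//two different arrays, but the same instances thanks to the identity map
console.log(users1[0] === users2[0]); //true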
Query
A query is an object that describes how to retrieve or modify data from the database. It has several methods to describe the query and termination methods that execute them. The database adapter can extend the query API in many ways to support database specific features.
You can create a query using Database.query(T) or Session.query(T). We recommend sessions, as they improve performance.
@entity.name('user')
class User {
id: number & PrimaryKey & AutoIncrement = 0;
created: Date = new Date;
birthdate?: Date;
visits: number = 0;
constructor(public username: string) {
}
}
const database = new Database(...);
//[ { username: 'User1' }, { username: 'User2' }, { username: 'User2' } ]
const users = await database.query(User).select('username').find();
Filter
A filter can be applied to limit the result set.
//simple filters
const users = await database.query(User).filter({username: 'User1'}).find();
//multiple filters, all AND
const users = await database.query(User).filter({username: 'User1', id: 2}).find();
//range filter: $gt, $lt, $gte, $lte (greater than, less than, ...)
//equivalent to WHERE created < NOW()
const users = await database.query(User).filter({created: {$lt: new Date}}).find();
//equivalent to WHERE id > 500
const users = await database.query(User).filter({id: {$gt: 500}}).find();
//equivalent to WHERE id >= 500
const users = await database.query(User).filter({id: {$gte: 500}}).find();
//set filter: $in, $nin (in, not in)
//equivalent to WHERE id IN (1, 2, 3)
const users = await database.query(User).filter({id: {$in: [1, 2, 3]}}).find();
//regex filter
const users = await database.query(User).filter({username: {$regex: /User[0-9]+/}}).find();
//grouping: $and, $nor, $or
//equivalent to WHERE (username = 'User1') OR (username = 'User2')
const users = await database.query(User).filter({
$or: [{username: 'User1'}, {username: 'User2'}]
}).find();
//nested grouping
//equivalent to WHERE username = 'User1' OR (username = 'User2' and id > 0)
const users = await database.query(User).filter({
$or: [{username: 'User1'}, {username: 'User2', id: {$gt: 0}}]
}).find();
//nested grouping
//equivalent to WHERE username = 'User1' AND (created < NOW() OR id > 0)
const users = await database.query(User).filter({
$and: [{username: 'User1'}, {$or: [{created: {$lt: new Date}}, {id: {$gt: 0}}]}]
}).find();
Select
To narrow down the fields to be received from the database, select('field1') can be used.
const user = await database.query(User).select('username').findOne();
const user = await database.query(User).select('id', 'username').findOne();
It is important to note that as soon as the fields are narrowed down using select, the results are no longer instances of the entity, but only object literals.
const user = await database.query(User).select('username').findOne();
user instanceof User; //false
Order
With orderBy(field, order) the order of the entries can be changed. orderBy can be called several times to refine the ordering further.
const users = await session.query(User).orderBy('created', 'desc').find();
const users = await session.query(User).orderBy('created', 'asc').find();
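For example, a refined ordering (a sketch using the User entity from the Query section):

//order primarily by username, then newest first within equal usernames
const users = await session.query(User)
    .orderBy('username', 'asc')
    .orderBy('created', 'desc')
    .find();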
Pagination
The itemsPerPage() and page() methods can be used to paginate the results. Page starts at 1.
const users = await session.query(User).itemsPerPage(50).page(1).find();
With the alternative methods limit and skip you can paginate manually.
const users = await session.query(User).limit(5).skip(10).find();
Join
By default, references from the entity are neither included in queries nor loaded. To include a join in the query without loading the reference, use join() (left join) or innerJoin(). To include a join in the query and load the reference, use joinWith() or innerJoinWith().
All of the following examples assume these models:
@entity.name('group')
class Group {
id: number & PrimaryKey & AutoIncrement = 0;
created: Date = new Date;
constructor(public name: string) {
}
}
@entity.name('user')
class User {
id: number & PrimaryKey & AutoIncrement = 0;
created: Date = new Date;
group?: Group & Reference;
constructor(public username: string) {
}
}
//select only users with a group assigned (INNER JOIN)
const users = await session.query(User).innerJoin('group').find();
for (const user of users) {
user.group; //error, since reference was not loaded
}
//select only users with a group assigned (INNER JOIN) and load the relation
const users = await session.query(User).innerJoinWith('group').find();
for (const user of users) {
user.group.name; //works
}
To modify join queries, use the same methods, but with the use prefix: useJoin, useInnerJoin, useJoinWith, or useInnerJoinWith. To end the join query modification, use end() to get back to the parent query.
//select only users with a group with name 'admins' assigned (INNER JOIN)
const users = await session.query(User)
.useInnerJoinWith('group')
.filter({name: 'admins'})
.end() // returns to the parent query
.find();
for (const user of users) {
user.group.name; //always 'admins'
}
Aggregation
Aggregation methods allow you to count records and aggregate fields.
The following examples are based on this model:
@entity.name('file')
class File {
id: number & PrimaryKey & AutoIncrement = 0;
created: Date = new Date;
downloads: number = 0;
category: string = 'none';
constructor(public path: string & Index) {
}
}
groupBy allows you to group the result by the specified field.
await database.persist(
cast<File>({path: 'file1', category: 'images'}),
cast<File>({path: 'file2', category: 'images'}),
cast<File>({path: 'file3', category: 'pdfs'})
);
//[ { category: 'images' }, { category: 'pdfs' } ]
await session.query(File).groupBy('category').find();
There are several aggregation methods: withSum, withAverage, withCount, withMin, withMax, and withGroupConcat. Each requires a field name as the first argument and an optional second argument to change the alias.
// first let's update some of the records:
await database.query(File).filter({path: 'file1'}).patchOne({$inc: {downloads: 15}});
await database.query(File).filter({path: 'file2'}).patchOne({$inc: {downloads: 5}});
//[{ category: 'images', downloads: 20 },{ category: 'pdfs', downloads: 0 }]
await session.query(File).groupBy('category').withSum('downloads').find();
//[{ category: 'images', downloads: 10 },{ category: 'pdfs', downloads: 0 }]
await session.query(File).groupBy('category').withAverage('downloads').find();
//[ { category: 'images', amount: 2 }, { category: 'pdfs', amount: 1 } ]
await session.query(File).groupBy('category').withCount('id', 'amount').find();
Returning
With returning, additional fields can be requested in case of changes via patch and delete.
Caution: Not all database adapters return fields atomically. Use transactions to ensure data consistency.
await database.query(User).patchMany({visits: 0});
//{ modified: 1, returning: { visits: [ 5 ] }, primaryKeys: [ 1 ] }
const result = await database.query(User)
.filter({username: 'User1'})
.returning('username', 'visits')
.patchOne({$inc: {visits: 5}});
Find
Returns an array of entries matching the specified filter.
const users: User[] = await database.query(User).filter({username: 'Peter'}).find();
FindOne
Returns an entry that matches the specified filter. If no item is found, an ItemNotFound error is thrown.
const user: User = await database.query(User).filter({username: 'Peter'}).findOne();
FindOneOrUndefined
Returns an entry that matches the specified filter. If no entry is found, undefined is returned.
const query = database.query(User).filter({username: 'Peter'});
const users: User|undefined = await query.findOneOrUndefined();
FindField
Returns a list of values of one field, matching the specified filter.
const usernames: string[] = await database.query(User).findField('username');
FindOneField
Returns the value of one field of the first entry matching the specified filter. If no item is found, an ItemNotFound error is thrown.
const username: string = await database.query(User).filter({id: 3}).findOneField('username');
Patch
Patch is a change query that updates the records described by the query. The methods patchOne and patchMany finish the query and execute the patch.
patchMany changes all entries in the database that match the specified filter. If no filter is set, the whole table will be changed. Use patchOne to change only one entry at a time.
await database.query(User).filter({username: 'Peter'}).patchOne({username: 'Peter2'});
await database.query(User).filter({username: 'User1'}).patchOne({birthdate: new Date});
await database.query(User).filter({username: 'User1'}).patchOne({$inc: {visits: 1}});
await database.query(User).patchMany({visits: 0});
Delete
deleteMany deletes all entries in the database that match the specified filter. If no filter is set, the entire table will be deleted. Use deleteOne to delete only one entry at a time.
const result = await database.query(User)
.filter({visits: 0})
.deleteMany();
const result = await database.query(User).filter({id: 4}).deleteOne();
Has
Returns whether at least one entry exists in the database.
const userExists: boolean = await database.query(User).filter({username: 'Peter'}).has();
Lift
Lifting a query means adding new functionality to it. This is usually used either by plugins or complex architectures to split larger query classes into several convenient, reusable classes.
import { FilterQuery, Query } from '@deepkit/orm';
class UserQuery<T extends {birthdate?: Date}> extends Query<T> {
hasBirthday() {
const start = new Date();
start.setHours(0,0,0,0);
const end = new Date();
end.setHours(23,59,59,999);
return this.filter({$and: [{birthdate: {$gte: start}}, {birthdate: {$lte: end}}]} as FilterQuery<T>);
}
}
await session.query(User).lift(UserQuery).hasBirthday().find();
Relations
Relationships allow you to connect two entities in a certain way. This is usually done in databases using the concept of foreign keys. Deepkit ORM supports relations for all official database adapters.
A relation is annotated with the Reference type decorator. Usually a relation also has a reverse relation, which is annotated with the BackReference type, but this is only needed if the reverse relation is to be used in a database query. Back references are only virtual.
One To Many
The entity that stores a reference is usually referred to as the owning side, or the side that owns the reference. The following code shows two entities with a one-to-many relationship between User and Post. This means that a User can have multiple Posts. The Post entity owns the post→user relationship. In the database itself there is now a field Post."author" that contains the primary key of User.
import { SQLiteDatabaseAdapter } from '@deepkit/sqlite';
import { entity, PrimaryKey, AutoIncrement, Reference } from '@deepkit/type';
import { Database } from '@deepkit/orm';
async function main() {
@entity.name('user').collectionName('users')
class User {
id: number & PrimaryKey & AutoIncrement = 0;
created: Date = new Date;
constructor(public username: string) {
}
}
@entity.name('post')
class Post {
id: number & PrimaryKey & AutoIncrement = 0;
created: Date = new Date;
constructor(
public author: User & Reference,
public title: string
) {
}
}
const database = new Database(new SQLiteDatabaseAdapter(':memory:'), [User, Post]);
await database.migrate();
const user1 = new User('User1');
const post1 = new Post(user1, 'My first blog post');
const post2 = new Post(user1, 'My second blog post');
await database.persist(user1, post1, post2);
}
main();
References are not selected in queries by default. See Join for details.
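References can also be used directly in filters. A sketch using the entities above (it is assumed here that the reference is matched by its primary key):

//load all posts written by user1
const posts = await database.query(Post).filter({author: user1}).find();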
Many To One
A reference usually has a reverse reference, called many-to-one. It is only a virtual reference, since it is not reflected in the database itself. A back reference is annotated with BackReference and is mainly used for reflection and query joins. If you add a BackReference from User to Post, you can join Post directly from User queries.
@entity.name('user').collectionName('users')
class User {
id: number & PrimaryKey & AutoIncrement = 0;
created: Date = new Date;
posts?: Post[] & BackReference;
constructor(public username: string) {
}
}
//[ { username: 'User1', posts: [ [Post], [Post] ] } ]
const users = await database.query(User)
.select('username', 'posts')
.joinWith('posts')
.find();
Many To Many
A many-to-many relationship allows you to associate many records with many others. For example, it can be used for users in groups. A user can be in zero, one, or many groups. Consequently, a group can contain zero, one, or many users.
Many-to-many relationships are usually implemented using a pivot entity. The pivot entity holds the actual owning references to the two other entities, and these two entities have back references to the pivot entity.
@entity.name('user')
class User {
id: number & PrimaryKey & AutoIncrement = 0;
created: Date = new Date;
groups?: Group[] & BackReference<{via: typeof UserGroup}>;
constructor(public username: string) {
}
}
@entity.name('group')
class Group {
id: number & PrimaryKey & AutoIncrement = 0;
users?: User[] & BackReference<{via: typeof UserGroup}>;
constructor(public name: string) {
}
}
//the pivot entity
@entity.name('userGroup')
class UserGroup {
id: number & PrimaryKey & AutoIncrement = 0;
constructor(
public user: User & Reference,
public group: Group & Reference,
) {
}
}
With these entities, you can now create users and groups and connect them to the pivot entity. By using a back reference in User, we can retrieve the groups directly with a User query.
const database = new Database(new SQLiteDatabaseAdapter(':memory:'), [User, Group, UserGroup]);
await database.migrate();
const user1 = new User('User1');
const user2 = new User('User2');
const group1 = new Group('Group1');
await database.persist(user1, user2, group1, new UserGroup(user1, group1), new UserGroup(user2, group1));
//[
// { id: 1, username: 'User1', groups: [ [Group] ] },
// { id: 2, username: 'User2', groups: [ [Group] ] }
// ]
const users = await database.query(User)
.select('username', 'groups')
.joinWith('groups')
.find();
To unlink a user from a group, the UserGroup record is deleted:
await database.query(UserGroup)
.filter({user: user1, group: group1})
.deleteOne();
Events
Events are a way to hook into Deepkit ORM and allow you to write powerful plugins. There are two categories of events: Query events and Unit-of-Work events. Plugin authors typically use both to support both ways of manipulating data.
Events are registered via Database.listen with an event token. It is also possible to register short-lived event listeners on sessions.
import { Query, Database } from '@deepkit/orm';
const database = new Database(...);
database.listen(Query.onFetch, async (event) => {
});
const session = database.createSession();
//will only be executed for this particular session
session.eventDispatcher.listen(Query.onFetch, async (event) => {
});
Query Events
Query events are triggered when a query is executed via Database.query() or Session.query().
Each event has its own additional properties such as the type of entity, the query itself, and the database session. You can override the query by setting a new query on Event.query.
import { Query, Database } from '@deepkit/orm';
const database = new Database(...);
const unsubscribe = database.listen(Query.onFetch, async event => {
//overwrite the query of the user, so something else is executed.
event.query = event.query.filterField('fieldName', 123);
});
//to delete the hook call unsubscribe
unsubscribe();
Query" has several event tokens:
Event token | Description
---|---
Query.onFetch | Triggered when data is fetched, e.g. via find() or findOne()
Query.onDeletePre | Triggered before deleteMany()/deleteOne() deletes entries
Query.onDeletePost | Triggered after deleteMany()/deleteOne() has deleted entries
Query.onPatchPre | Triggered before patchMany()/patchOne() patches entries
Query.onPatchPost | Triggered after patchMany()/patchOne() has patched entries
Unit Of Work Events
Unit-of-work events are triggered when a new session submits changes.
Event token | Description
---|---
DatabaseSession.onUpdatePre | Triggered before records are updated
DatabaseSession.onUpdatePost | Triggered after records have been updated
DatabaseSession.onInsertPre | Triggered before records are inserted
DatabaseSession.onInsertPost | Triggered after records have been inserted
DatabaseSession.onDeletePre | Triggered before records are deleted
DatabaseSession.onDeletePost | Triggered after records have been deleted
DatabaseSession.onCommitPre | Triggered before the session commits its changes
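A sketch of a unit-of-work listener using the tokens above (it is assumed here that the event exposes the affected instances as event.items):

import { DatabaseSession, Database } from '@deepkit/orm';

const database = new Database(...);

database.listen(DatabaseSession.onInsertPre, async (event) => {
    //event.items: the entity instances about to be inserted (assumed property)
    console.log('inserting', event.items.length, 'items');
});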
Transactions
A transaction is a sequential group of statements, queries, or operations such as select, insert, update, or delete that are executed as a single unit of work that can be committed or rolled back.
Deepkit supports transactions for all officially supported databases. By default, no transactions are used for any query or database session. To enable transactions, there are two main approaches: sessions and a callback.
Session Transactions
You can start and assign a new transaction for each session you create. This is the preferred way of interacting with the database, as you can easily pass on the Session object and all queries instantiated by this session will be automatically assigned to its transaction.
A typical pattern is to wrap all operations in a try-catch block, execute commit() on the very last line (which is only reached if all previous commands succeeded), and call rollback() in the catch block to roll back all changes as soon as an error occurs.
Although there is an alternative API (see below), all transactions work only with database session objects. To commit outstanding changes from the unit-of-work to the database in a database session, commit() is normally called. In a transactional session, commit() not only writes all pending changes to the database, but also completes ("commits") the transaction, thereby closing it. Alternatively, you can call session.flush() to write all pending changes without committing and thus without closing the transaction. To commit a transaction without flushing the unit-of-work, use session.commitTransaction().
const session = database.createSession();
//this assigns a new transaction, and starts it with the very next database operation.
session.useTransaction();
try {
//this query is executed in the transaction
const users = await session.query(User).find();
await moreDatabaseOperations(session);
await session.commit();
} catch (error) {
await session.rollback();
}
Once commit() or rollback() is executed in a session, the transaction is released. You must then call useTransaction() again if you want to continue in a new transaction.
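A sketch of flushing without closing the transaction, using session.flush() as described above:

const session = database.createSession();
session.useTransaction();

session.add(new User('User1'));
//writes pending changes to the database, but keeps the transaction open
await session.flush();

session.add(new User('User2'));
//commits the transaction together with all remaining pending changes
await session.commit();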
Please note that as soon as the first database operation is executed in a transactional session, the assigned database connection is permanently and exclusively assigned to the current session object (sticky). Thus, all subsequent operations will be performed on the same connection (and thus, in most databases, on the same database server). The database connection is only released again when the transactional session is terminated (commit or rollback). It is therefore recommended to keep a transaction only as short as necessary.
If a session is already associated with a transaction, a call to session.useTransaction() always returns the same object. Use session.isTransaction() to check whether a transaction is associated with the session.
Nested transactions are not supported.
Transaction Callback
An alternative to transactional sessions is database.transaction(callback).
await database.transaction(async (session) => {
//this query is executed in the transaction
const users = await session.query(User).find();
await moreDatabaseOperations(session);
});
The database.transaction(callback) method performs an asynchronous callback within a new transactional session. If the callback succeeds (that is, no error is thrown), the session is automatically committed (and thus its transaction committed and all changes flushed). If the callback fails, the session automatically executes rollback() and the error is propagated.
Isolations
Many databases support different types of transactions. To change the transaction behavior, you can call different methods on the transaction object returned by useTransaction(). The interface of this transaction object depends on the database adapter used. For example, the transaction object returned from a MySQL database has different options than the one returned from a MongoDB database. Use code completion or view the database adapter's interface to get a list of possible options.
const database = new Database(new MySQLDatabaseAdapter());
const session = database.createSession();
session.useTransaction().readUncommitted();
try {
//...operations
await session.commit();
} catch (error) {
await session.rollback();
}
//or
await database.transaction(async (session) => {
//this works as long as no database operation has been executed.
session.useTransaction().readUncommitted();
//...operations
});
While transactions for MySQL, PostgreSQL, and SQLite work by default, you must first set up MongoDB as a "replica set".
To convert a standard MongoDB instance to a replica set, please refer to the official documentation Convert a Standalone to a Replica Set.
Composite Primary Key
Composite primary key means that an entity has several primary keys, which are automatically combined into a "composite primary key". This way of modeling the database has advantages and disadvantages. We believe that composite primary keys have such huge practical disadvantages that they do not justify their advantages, so they should be considered bad practice and avoided. Deepkit ORM does not support composite primary keys. In this chapter we explain why and show (better) alternatives.
Disadvantages
Joins are not trivial. Although they are highly optimized in RDBMSs, they represent a constant complexity in applications that can easily get out of hand and lead to performance problems, not only in terms of query execution time, but also in terms of development time.
Joins
Each individual join becomes more complicated as more fields are involved. While many databases have implemented optimizations so that joins with multiple fields are not slower per se, they require the developer to constantly think through these joins in detail. Forgetting keys, for example, can lead to subtle errors (since the join will work even without specifying all keys), so the developer needs to know the full composite primary key structure.
Indexes
Indexes with multiple fields (which are composite primary keys) suffer from the problem of field ordering in queries. While database systems can optimize certain queries, complex structures make it difficult to write efficient operations that correctly use all defined indexes. For an index with multiple fields (such as a composite primary key), it is usually necessary to define the fields in the correct order for the database to actually use the index. If the order is not specified correctly (for example, in a WHERE clause), this can easily result in the database not using the index at all and instead performing a full table scan. Knowing which database query optimizes in which way is advanced knowledge that new developers don’t usually have, but is necessary once you start working with composite primary keys so that you get the most out of your database and don’t waste resources.
Migrations
Once you decide that a particular entity needs an additional field to uniquely identify it (and thus a change to the composite primary key), all entities in your database that have relationships to that entity must be adjusted as well.
For example, suppose you have an entity user with a composite primary key and decide to use a foreign key to this user in different tables, e.g. in a pivot table audit_log, groups, and posts. Once you change the primary key of user, all these tables need to be adjusted in a migration as well.
Not only does this make migration files much more complex, but it can also lead to greater downtime when running migration files, since schema changes usually require either a full database lock or at least a table lock. The more tables affected by a large change like an index change, the longer the migration will take. And the larger a table is, the longer the migration takes.
Think about the audit_log table. Such tables usually have many records (millions or so), and you have to touch them during a schema change only because you decided to use a composite primary key and add an additional field to the primary key of User. Depending on the size of all these tables, this either makes migration changes unnecessarily more expensive or, in some cases, so expensive that changing the primary key of User is no longer financially justifiable. This usually leads to workarounds (e.g. adding a unique index to the user table) that result in technical debt and sooner or later end up on the legacy list.
For large projects, this can result in enormous downtime (from minutes to hours) and sometimes even the introduction of an entirely new migration abstraction system that essentially copies tables, inserts records into ghost tables, and moves tables back and forth after migration. This added complexity is in turn imposed on any entity that has a relationship to another entity with a composite primary key, and becomes greater the larger your database structure becomes. The problem gets worse with no way to solve it (except by removing the composite primary key entirely).
Findability
If you are a database administrator or data engineer/scientist, you usually work directly on the database and explore the data as you need it. With composite primary keys, any user writing SQL directly must know the correct primary key of all tables involved (and the column order to get correct index optimizations). This added overhead not only complicates data exploration, report generation, etc., but can also lead to errors in older SQL if a composite primary key is suddenly changed. The old SQL is probably still valid and running fine, but suddenly returns incorrect results because the new field in the composite primary key is missing from the join. It is much easier here to have only one primary key. This makes it easier to find data and ensures that old SQL queries will still work correctly if you decide to change the way a user object is uniquely identified, for example.
Revision
Once a composite primary key is used in an entity, refactoring the key can result in significant additional refactoring. Because an entity with a composite primary key typically does not have a single unique field, all filters and links must contain all values of the composite key. This usually means that the code relies on knowing the composite primary key, so all fields must be retrieved (e.g., for URLs such as /user/:key1/:key2). Once this key is changed, all places where this knowledge is explicitly used, such as URLs, custom SQL queries, and other places, must be rewritten.
While ORMs typically create joins automatically without manually specifying the values, they cannot automatically cover refactoring for all other use cases such as URL structures or custom SQL queries, and especially not for places where the ORM is not used at all, such as in reporting systems and all external systems.
ORM Complexity
With the support of composite primary keys, the complexity of the code of a powerful ORM like Deepkit ORM increases tremendously. Not only will the code and maintenance become more complex and therefore more expensive, but there will be more edge cases from users that need to be fixed and maintained. The complexity of the query layer, change detection, migration system, internal relationship tracking, etc. increases significantly. The overall cost associated with building and supporting an ORM with composite primary keys is too high, all things considered, and cannot be justified, which is why Deepkit does not support it.
Advantages
That being said, composite primary keys also have advantages, albeit very superficial ones. Using as few indexes as possible for each table makes writing (inserting/updating) data more efficient, since fewer indexes need to be maintained. It also makes the structure of the model a bit cleaner (since it usually has one less column). However, the difference between a sequentially ordered, automatically incrementing primary key and a non-incrementing primary key is completely negligible these days, since disk space is cheap and the operation is usually an "append-only" operation, which is very fast.
There may certainly be a few edge cases (and for a few very specific database systems) where it is initially better to work with composite primary keys. But even in these systems, it might make more sense overall (considering all the costs) not to use them and to switch to another strategy.
Alternative
An alternative to composite primary keys is to use a single, automatically incrementing numeric primary key, usually called "id", and to move the composite primary key into a unique index with multiple fields. Depending on the expected number of rows, the "id" uses either 4 or 8 bytes per record.
By using this strategy, you are no longer forced to think about the problems described above and find a solution, which greatly reduces the cost of ever-growing projects.
The strategy specifically means that each entity has an "id" field, usually at the very beginning, and this field is then used to identify unique rows by default and in joins.
class User {
id: number & PrimaryKey & AutoIncrement = 0;
constructor(public username: string) {}
}
As an alternative to a composite primary key, you would use a unique multi-field index instead.
@entity.index(['tenancyId', 'username'], {unique: true})
class User {
id: number & PrimaryKey & AutoIncrement = 0;
constructor(
public tenancyId: number,
public username: string,
) {}
}
Deepkit ORM automatically supports incremental primary keys, including for MongoDB. This is the preferred method for identifying records in your database. However, for MongoDB you can use the ObjectId (_id: MongoId & PrimaryKey = '') as a simple primary key. An alternative to the numeric, auto-incrementing primary key is a UUID, which works just as well (but has slightly different performance characteristics, since indexing is more expensive).
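For example, a UUID primary key, reusing uuid() and UUID from the UUID section above:

import { uuid, UUID, PrimaryKey } from '@deepkit/type';

class User {
    id: UUID & PrimaryKey = uuid();
    constructor(public username: string) {}
}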
Summary
Composite primary keys essentially mean that once they are in place, any future changes and practical use will come at a much higher cost. While it looks like a clean architecture at the beginning (because you have one less column), it leads to significant practical costs once the project is actually developed, and the costs continue to increase as the project gets larger.
Looking at the asymmetries between advantages and disadvantages, it is clear that composite primary keys cannot be justified in most cases. The costs are much greater than the benefits. Not only for you as a user, but also for us as the author and maintainer of the ORM code. For this reason, Deepkit ORM does not support composite primary keys.
Plugins
Soft-Delete
The Soft-Delete plugin allows database records to be kept hidden without actually deleting them. When a record is deleted, it is only marked as deleted and not actually removed. All queries automatically filter on this deleted property, so to the user it feels as if the record is actually deleted.
To use the plugin, you must instantiate the SoftDelete class and activate it for each entity.
import { entity, PrimaryKey, AutoIncrement } from '@deepkit/type';
import { SoftDelete } from '@deepkit/orm';
@entity.name('user')
class User {
id: number & PrimaryKey & AutoIncrement = 0;
created: Date = new Date;
// this field is used as indicator whether the record is deleted.
deletedAt?: Date;
// this field is optional and can be used to track who/what deleted the record.
deletedBy?: string;
constructor(
public name: string
) {
}
}
const softDelete = new SoftDelete(database);
softDelete.enable(User);
//or disable again
softDelete.disable(User);
Delete
To soft-delete records, use the usual methods deleteOne or deleteMany in a query, or use the session to delete them. The soft-delete plugin does the rest automatically in the background.
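For example, a sketch assuming the User entity with soft-delete enabled from above:

//marks the record as deleted by setting deletedAt, instead of removing it
await database.query(User).filter({id: 1}).deleteOne();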
Restore
Deleted records can be restored using a lifted query via SoftDeleteQuery. It has restoreOne and restoreMany.
import { SoftDeleteQuery } from '@deepkit/orm';
await database.query(User).lift(SoftDeleteQuery).filter({ id: 1 }).restoreOne();
await database.query(User).lift(SoftDeleteQuery).filter({ id: 1 }).restoreMany();
Sessions also support restoring elements.
import { SoftDeleteSession } from '@deepkit/orm';
const session = database.createSession();
const user1 = await session.query(User).findOne();
session.from(SoftDeleteSession).restore(user1);
await session.commit();
Hard Delete
To hard delete records, use a lifted query via SoftDeleteQuery. This essentially restores the old behavior, without the plugin, for a single query.
import { SoftDeleteQuery } from '@deepkit/orm';
await database.query(User).lift(SoftDeleteQuery).hardDeleteOne();
await database.query(User).lift(SoftDeleteQuery).hardDeleteMany();
//those are equal
await database.query(User).lift(SoftDeleteQuery).withSoftDeleted().deleteOne();
await database.query(User).lift(SoftDeleteQuery).withSoftDeleted().deleteMany();
Query Deleted
With a "lifted" query via SoftDeleteQuery
you can also include deleted records.
import { SoftDeleteQuery } from '@deepkit/orm';
// find all, soft deleted and not deleted
await database.query(User).lift(SoftDeleteQuery).withSoftDeleted().find();
// find only soft deleted
await database.query(User).lift(SoftDeleteQuery).isSoftDeleted().count();
Deleted By
deletedBy can be set via queries and sessions.
import { SoftDeleteSession } from '@deepkit/orm';
const session = database.createSession();
const user1 = await session.query(User).findOne();
session.from(SoftDeleteSession).setDeletedBy('Peter');
session.remove(user1);
await session.commit();
import { SoftDeleteQuery } from '@deepkit/orm';
database.query(User).lift(SoftDeleteQuery)
.deletedBy('Peter')
.deleteMany();