1. Introduction

TypeScript is JavaScript that scales: a language designed for complex use cases. It allows JavaScript to be written with types that are checked and then removed at compile time. The resulting type safety makes complex applications easier to write and maintain. This, among other things, is what has made TypeScript so popular: it makes finding bugs, especially at scale, much faster and easier than pure JavaScript alone, and it works with JavaScript, not against it.

Deepkit is TypeScript that scales: a framework written in TypeScript for TypeScript, designed for developing very complex software. It brings many design patterns known from enterprise development to TypeScript and introduces completely new features that are only possible with TypeScript's type system, to increase development speed especially in teams. Small applications can also benefit from this new approach, as Deepkit comes with many libraries for very common use cases that can be used individually or in combination. The framework itself is designed to be as agile as possible and as complex as necessary, not only to get initial results quickly, but also to maintain development speed in the long term.

JavaScript now has the largest developer community in the world and offers a correspondingly large selection of libraries and tools to cover the needs of a project. However, it is not always easy to find the right library. Often the philosophies, APIs, and code quality of these libraries differ so much that the developer has to introduce a lot of glue code and additional abstractions just to make them work together at all. Providing the core functions that virtually every project needs as well-abstracted libraries, brought together in a framework by a vendor or a community, has proven itself again and again over the last decades: Java Spring, PHP Symfony/Laravel, and C++ Qt are just a few well-known and successful examples. These frameworks offer the developer widely used concepts, proven over decades, packaged as libraries or components that harmonize with each other and can be adopted as needed. The functionality and design patterns they offer are not invented from scratch, but are based on concepts that are sometimes decades old and have proven themselves in competition with alternative ideas.

JavaScript has made massive progress over the years, so that more and more design patterns from the enterprise environment can now be applied; patterns that can be found in more and more libraries, frameworks, and tools. However, both JavaScript and TypeScript lack decisive language features needed to apply many proven enterprise patterns efficiently. This does not mean that these design patterns cannot be applied at all, but that they are less efficient than in other current languages.

TypeScript removes all type information at compile time: once TypeScript is converted to JavaScript, no trace of the types exists in the generated JavaScript or at runtime. It is undeniable that types are very valuable during development and for checking the correctness of a program. However, types also have tremendous value at runtime: wherever data is transformed (converted/serialized), data is validated, meta-information is attached to objects, or interface information is required. In these and many other use cases, type information at runtime gives libraries the information they need to provide their functionality efficiently. Currently, many of these use cases instead rely on alternatives that incompletely mimic TypeScript's type system, forcing the developer to declare types in a new way that has nothing to do with TypeScript's syntax. The result is that TypeScript's powerful type system cannot show its strength here, and less ergonomic and less efficient ways of working must be used instead.

1.1. Deepkit Framework

Deepkit has developed a type compiler that leaves type information in place, allowing dynamic types to be computed at runtime and existing type information to be read at runtime. With this paradigm shift, completely new ways of working are possible, providing the required information for the aforementioned use cases, radically simplifying the development of complex software, and giving the code more expressiveness. It is thus possible for the first time to use the full power and expressiveness of TypeScript at runtime as well.

Based on this paradigm shift, Deepkit has developed a whole set of libraries for use cases found in just about any program: Validation, Serialization, Database Abstraction, CLI parser, HTTP Router, RPC Framework, Logger, Template System, Event System, and many more. The fundamental difference from other libraries is that type information is at the core of the functionality and as much TypeScript as possible is reused at runtime, so less boilerplate needs to be written and even complex programs reveal at a glance what they are doing. Finally, one of TypeScript's key strengths is giving expression to even complex code, and Deepkit brings these expressiveness benefits to the runtime in the form of a powerful framework, so that application architecture can scale with appropriate enterprise patterns.

Deepkit consists of two large areas: the Deepkit Libraries and the Deepkit Framework. The Deepkit Libraries are a family of standalone TypeScript libraries (NPM packages), each good at one topic, optimized, well tested, and designed to complement each other optimally. A project can use individual Deepkit libraries, or the entire Deepkit Framework, which brings together all the capabilities of the libraries and complements them with additional tools such as the debugger. Together, they allow the developer to create complex, fast, and production-ready applications. Deepkit supports a wide range of use cases: from simple command-line tools (CLI programs) to web applications and micro-services to desktop or mobile applications. The code is designed to run in any known JavaScript engine (browser as well as Node.js) and integrates beautifully with other frameworks such as Angular, React, and Vue. The claim behind Deepkit Framework is to apply clean code, SOLID principles, and enterprise design patterns in order not only to offer correspondingly high code quality, but also to allow the user to apply them as well. Deepkit promotes these same principles in its documentation and examples, but does not force the developer to follow them.

1.2. High-Performance

One of the most difficult problems in software development is to maintain a high development speed even after months or years, especially when the code and the team grow. There are many frameworks that promise to get you started quickly and allow you to cobble together more complex applications on your own in a very short time. However, these usually have the common problem that the development speed decreases drastically the older the project or the larger the team becomes. It is not uncommon that even after a few months and only a handful of developers, the development speed collapses to such an extent that it drops to 1% of the original speed. To counteract this phenomenon, it is necessary to apply established design patterns and use the right framework and libraries in advance. Enterprise design patterns have established themselves for the reason that they scale excellently even with larger applications and large teams. Correctly applied, they develop their capabilities especially when a project is to be developed over a longer period of time (several months to years).

Design patterns have their advantages in theory, but in practice almost every pattern also has its disadvantages. These disadvantages vary depending on the language and framework, since the language and framework themselves determine how ergonomically a pattern can be applied. Just because a certain pattern can be used in a language does not mean that it automatically makes development better and faster. Some languages are better suited than others for applying certain patterns. In JavaScript and even TypeScript, various design patterns are usable in principle, but there are limitations that massively affect the user experience and thus speed. For example, TypeScript decorators with all their idiosyncrasies may become necessary if a dependency injection framework specifies and is based on them. Deepkit's runtime type system ensures that these design patterns can be applied in the most ergonomic way and with as little boilerplate as possible, unlocking their power to maintain high development speed not only initially, but also over the long term.

1.3. Isomorphic TypeScript

One of the biggest advantages of TypeScript is that complex code can be written better in many use cases. This includes frontend, backend, CLI tools, mobile and desktop apps, and much more. When a project spans these use cases and relies almost exclusively on TypeScript, it is called Isomorphic TypeScript. Using TypeScript in as much code as possible can massively increase development speed. The following advantages then become available:

  • Code can be shared between departments (frontend, backend, microservice, etc).

    • Models, types and interfaces

    • Validation

    • Business logic

  • A unified audit system thanks to a single package manager.

  • Reuse of known third-party libraries in all departments.

  • Knowledge sharing within teams.

  • Recruiting is simplified to a single group, and the biggest one: JavaScript developers.

Deepkit Framework and its runtime type system are designed to exploit these and further advantages of Isomorphic TypeScript to the fullest.

Older approaches such as the dual stack (frontend and backend in different languages) can no longer keep up, since the context switch between the languages alone costs an enormous amount of energy and time. Add the other advantages explained above, and the comparison becomes outright unfair. An isomorphic tech stack like TypeScript, properly applied, is fundamentally many times faster in development than any dual-stack combination for backend/frontend such as Java/JavaScript, PHP/JavaScript, or even JavaScript/JavaScript. Since faster development means less time needed for the same features, Isomorphic TypeScript also saves money. Besides all the advantages already presented, this is the killer argument for using Isomorphic TypeScript in upcoming, especially commercial, projects.

2. Runtime Types

Making type information available at runtime in TypeScript changes a lot. It allows new ways of working that were previously only possible in roundabout ways, or not at all. Declaring types and schemas has become a big part of modern development processes. For example, GraphQL, validators, ORMs, and encoders such as ProtoBuf rely on schema information being available at runtime to provide fundamental functionality in the first place. These tools and libraries sometimes require the developer to learn completely new languages that were developed very specifically for the use case: ProtoBuf and GraphQL have their own declaration languages, and validators are often based on their own schema APIs or on JSON Schema, which is likewise an independent way to define structures. Some of them require code generators to be executed on every change in order to make the schema information available at runtime. Another well-known pattern is to use experimental TypeScript decorators to provide meta-information to classes at runtime.

But is all this necessary? TypeScript offers a very powerful language to describe even very complex structures. In fact, TypeScript is now Turing-complete, which roughly means that theoretically any kind of program can be expressed in TypeScript. Of course, this has its practical limitations, but the important point is that TypeScript is able to completely replace declaration formats such as GraphQL, ProtoBuf, JSON Schema, and many others. Combined with a type system at runtime, it is possible to cover all the described tools and their use cases in TypeScript itself, without any code generator. But why is there not yet a solution that allows exactly this?
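As a small taste of this type-level power, types can compute recursively over string literal types at compile time. The helper type below is purely illustrative and not part of any library:

```typescript
// Type-level computation: recursively strip leading spaces from a
// string literal type. This runs entirely in the compiler.
type TrimLeft<S extends string> = S extends ` ${infer Rest}` ? TrimLeft<Rest> : S;

// Only the computed literal 'hello' is assignable here;
// assigning '  hello' or 'world' would be a compile error.
const greeting: TrimLeft<'  hello'> = 'hello';
```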

Historically, TypeScript has undergone a massive transformation over the past few years. It has been completely rewritten several times, received fundamental features, and gone through a number of iterations and breaking changes. However, TypeScript has now reached a product-market fit that greatly slows the rate at which fundamental innovations and breaking changes happen. TypeScript has proven itself and shown what an elegant type system for a highly dynamic language like JavaScript can look like. The market has gratefully embraced this push and ushered in a new era of JavaScript development.

This is exactly the time to build tools on top of the language itself at a fundamental level to make the above possible. Deepkit wants to be the impetus that brings design patterns, proven over decades in the enterprise world of languages like Java and PHP, to TypeScript at a fundamental level, but in a new and better way that works with JavaScript rather than against it. With type information at runtime, these patterns are for the first time not only possible in principle; they also enable entirely new, much simpler design patterns that are not possible in languages like Java and PHP. TypeScript itself has laid the foundation here to make the developer's life considerably easier with completely new approaches in strong combination with the tried and tested.

Reading type information at runtime is the capability on which Deepkit builds its foundation. The APIs of the Deepkit libraries are largely focused on using as much TypeScript type information as possible in order to be as efficient as possible. A type system at runtime means that type information is readable at runtime and that dynamic types can be computed. This means, for example, that for classes all properties, and for functions all parameters and return types, can be read.

Let’s take this function as an example:

function log(message: string): void {
    console.log(message);
}

In JavaScript itself, several pieces of information can be read at runtime. For example, the name of the function (unless modified by a minifier):

log.name; //‘log’

On the other hand, the number of parameters can be read out:

log.length; //1
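Getting at the parameter names already requires inspecting the function's source text. The following is a fragile sketch with a hypothetical helper; it breaks with default values, destructuring, and minified code:

```typescript
// Extract parameter names by running a RegExp over the function's source text.
function parameterNames(fn: Function): string[] {
    const match = fn.toString().match(/\(([^)]*)\)/);
    if (!match || !match[1].trim()) return [];
    return match[1].split(',').map(name => name.trim());
}

function log(message: string): void {
    console.log(message);
}

parameterNames(log); //['message']
```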

With a bit more code the parameter names can also be read, but not without a rudimentary JavaScript parser or a RegExp on log.toString(), so that is about as far as plain JavaScript goes. TypeScript translates the above function into the following JavaScript:

function log(message) {
    console.log(message);
}

The information that message is of type string and that the return type is void is no longer available. It has been irrevocably destroyed by TypeScript.

However, with a type system at runtime, this information can survive so that one can programmatically read the types of message and the return type.

import { typeOf, ReflectionKind } from '@deepkit/type';

const type = typeOf(log);
type.kind; //ReflectionKind.function
type.parameters[0].name; //'message'
type.parameters[0].type; //{kind: ReflectionKind.string}
type.return; //{kind: ReflectionKind.void}

Deepkit does just that. It hooks into the compilation of TypeScript and ensures that all type information is built into the generated JavaScript. Functions like typeOf() (not to be confused with the operator typeof, with a lowercase o) then allow the developer to access it. Libraries can therefore be developed based on this type information, allowing the developer to use already written TypeScript types for a whole range of application possibilities.

2.1. Installation

To install Deepkit’s runtime type system two packages are needed. The type compiler in @deepkit/type-compiler and the runtime in @deepkit/type. The type compiler can be installed in package.json devDependencies, because it is only needed at build time.

npm install --save @deepkit/type
npm install --save-dev @deepkit/type-compiler

Runtime type information is not generated by default. "reflection": true must be set in the tsconfig.json file to enable it for all files in the same folder as this file and in all subfolders. If decorators are to be used, "experimentalDecorators": true must be enabled in tsconfig.json. This is not strictly necessary to work with @deepkit/type, but it is required for certain functions of other Deepkit libraries and for @deepkit/framework.

file: tsconfig.json

{
  "compilerOptions": {
    "module": "CommonJS",
    "target": "es6",
    "moduleResolution": "node",
    "experimentalDecorators": true
  },
  "reflection": true
}

2.1.1. Type Compiler

TypeScript itself does not allow configuring a custom transformer like the type compiler via tsconfig.json. It is necessary to either use the TypeScript compiler API directly or a build system like Webpack with ts-loader. To avoid this inconvenience for Deepkit users, the Deepkit type compiler automatically installs itself into node_modules/typescript when @deepkit/type-compiler is installed (this is done via NPM install hooks). As a result, all build tools that use the locally installed TypeScript (the one in node_modules/typescript) automatically have the type compiler enabled. This makes tsc, Angular, webpack, ts-node, and some other tools work with the Deepkit type compiler out of the box.

If the type compiler could not be successfully installed automatically (for example because NPM install hooks are disabled), this can be done manually with the following command:

node_modules/.bin/deepkit-type-install

Note that deepkit-type-install must be run again whenever the local TypeScript version is updated (for example, when the typescript version in package.json changes and npm install is run).

2.1.2. Webpack

If you want to use the type compiler in a webpack build, you can do so with the ts-loader package (or any other typescript loader that supports transformer registration).

file: webpack.config.js

const typeCompiler = require('@deepkit/type-compiler');

module.exports = {
  entry: './app.ts',
  module: {
    rules: [
      {
        test: /\.tsx?$/,
          use: {
            loader: 'ts-loader',
            options: {
              //this enables @deepkit/type's type compiler
              getCustomTransformers: (program, getProgram) => ({
                before: [typeCompiler.transformer],
                afterDeclarations: [typeCompiler.declarationTransformer],
              }),
            }
          },
          exclude: /node_modules/,
       },
    ],
  },
}

2.2. Type Decorators

Type decorators are normal TypeScript types that contain meta-information to change the behavior of various functions at runtime. Deepkit already provides some type decorators that cover some use cases. For example, a class property can be marked as primary key, reference, or index. The database library can use this information at runtime to create the correct SQL queries without prior code generation. Validator constraints such as MaxLength, Maximum, or Positive can also be added to any type. It is also possible to tell the serializer how to serialize or deserialize a particular value. In addition, it is possible to create completely custom type decorators and read them at runtime, in order to use the type system at runtime in a very individual way.

Deepkit comes with a whole set of type decorators, all of which can be used directly from @deepkit/type. They deliberately do not come from multiple libraries, so that code is not tied directly to a particular library such as Deepkit RPC or Deepkit Database. This makes it easier to reuse types, even in the frontend, even when, for example, database type decorators are used.

The following is a list of the existing type decorators. The validator and serializer of @deepkit/type and @deepkit/bson, as well as Deepkit Database of @deepkit/orm, use this information in different ways. See the corresponding chapters to learn more.

2.2.1. Integer/Float

Integers and floats are defined with number as their base type and come in several sub-variants:

Type     Description
-------  -----------
integer  An integer of arbitrary size.
int8     An integer between -128 and 127.
uint8    An integer between 0 and 255.
int16    An integer between -32768 and 32767.
uint16   An integer between 0 and 65535.
int32    An integer between -2147483648 and 2147483647.
uint32   An integer between 0 and 4294967295.
float    Same as number, but might have a different meaning in a database context.
float32  A float between -3.40282347e+38 and 3.40282347e+38. Note that JavaScript cannot correctly check this range due to precision issues, but the information can be useful for databases or binary serializers.
float64  Same as number, but might have a different meaning in a database context.

import { integer } from '@deepkit/type';

interface User {
    id: integer;
}

Here the id of the user is a number at runtime, but is interpreted as an integer during validation and serialization. This means, for example, that floats are not allowed in validation and that the serializer automatically converts floats to integers.

import { is, integer } from '@deepkit/type';

is<integer>(12); //true
is<integer>(12.5); //false

The subtypes can be used in the same way and are useful if a specific range of numbers is to be allowed.

import { is, int8 } from '@deepkit/type';

is<int8>(-5); //true
is<int8>(5); //true
is<int8>(-200); //false
is<int8>(2500); //false

2.2.2. Float

2.2.3. UUID

UUID v4 is usually stored as a binary in the database and as a string in JSON.

import { is, UUID } from '@deepkit/type';

is<UUID>('f897399a-9f23-49ac-827d-c16f8e4810a0'); //true
is<UUID>('asd'); //false
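For illustration, a standalone check roughly equivalent to what is<UUID>() validates might look as follows. The regex is an assumption about the UUID v4 format, not Deepkit's implementation:

```typescript
// UUID v4 layout: 8-4-4-4-12 hex digits, version nibble '4',
// variant nibble in [89ab].
const uuidV4 = /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;

uuidV4.test('f897399a-9f23-49ac-827d-c16f8e4810a0'); //true
uuidV4.test('asd'); //false
```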

2.2.4. MongoID

Marks the field as an ObjectId for MongoDB. It resolves to a string and is stored in MongoDB as binary.

import { MongoId, serialize, is } from '@deepkit/type';

serialize<MongoId>('507f1f77bcf86cd799439011'); //507f1f77bcf86cd799439011
is<MongoId>('507f1f77bcf86cd799439011'); //true
is<MongoId>('507f1f77'); //false

class User {
    id: MongoId = ''; //will be set automatically by Deepkit ORM once the user is inserted
}

2.2.5. Bigint

By default the normal bigint type serializes as number in JSON (and as long in BSON). This limits what can be stored, however, since bigint in JavaScript has an unlimited potential size, while number in JavaScript and long in BSON are limited. To bypass this limitation, the types BinaryBigInt and SignedBinaryBigInt are available.

BinaryBigInt is the same as bigint, but serializes to an unsigned binary of unlimited size in databases (instead of 8 bytes in most databases) and to a string in JSON. Negative values are converted to positive (abs(x)).

import { BinaryBigInt } from '@deepkit/type';

interface User {
    id: BinaryBigInt;
}

const user: User = {id: 24n};

serialize<User>({id: 24n}); //{id: '24'}

serialize<BinaryBigInt>(24); //'24'
serialize<BinaryBigInt>(-24); //'0'

Deepkit ORM stores BinaryBigInt as a binary field.

SignedBinaryBigInt is the same as BinaryBigInt but is able to store negative values as well. Deepkit ORM stores SignedBinaryBigInt as binary. The binary has an additional leading sign byte and is represented as a uint: 255 for negative, 0 for zero, or 1 for positive.

import { SignedBinaryBigInt } from '@deepkit/type';

interface User {
    id: SignedBinaryBigInt;
}

2.2.6. MapName

To change the name of a property in the serialization.

import { serialize, deserialize, MapName } from '@deepkit/type';

interface User {
    firstName: string & MapName<'first_name'>;
}

serialize<User>({firstName: 'Peter'}) // {first_name: 'Peter'}
deserialize<User>({first_name: 'Peter'}) // {firstName: 'Peter'}

2.2.7. Group

Properties can be grouped together. During serialization, for example, a specific group can be excluded. See the chapter Serialization for more information.

import { serialize, Group } from '@deepkit/type';

interface Model {
    username: string;
    password: string & Group<'secret'>;
}

serialize<Model>(
    { username: 'Peter', password: 'nope' },
    { groupsExclude: ['secret'] }
); //{username: 'Peter'}

2.2.8. Data

Each property can add additional meta-data that can be read via the Reflection API. See Runtime Types Reflection for more information.

import { ReflectionClass, Data } from '@deepkit/type';

interface Model {
    username: string;
    title: string & Data<'key', 'value'>;
}

const reflection = ReflectionClass.from<Model>();
reflection.getProperty('title').getData()['key']; //'value'

2.2.9. Excluded

Each property can be excluded from the serialization process for a specific target.

import { serialize, deserialize, Excluded } from '@deepkit/type';

interface Auth {
    title: string;
    password: string & Excluded<'json'>
}

const item = deserialize<Auth>({title: 'Peter', password: 'secret'});

item.password; //undefined, since deserialize's default serializer is called `json`

item.password = 'secret';

const json = serialize<Auth>(item);
json.password; //again undefined, since serialize's serializer is called `json`

2.2.10. Embedded

Marks the field as an embedded type.

import { PrimaryKey, Embedded, serialize, deserialize } from '@deepkit/type';

interface Address {
    street: string;
    postalCode: string;
    city: string;
    country: string;
}

interface User  {
    id: number & PrimaryKey;
    address: Embedded<Address>;
}

const user: User = {
    id: 12,
    address: {
        street: 'abc', postalCode: '1234', city: 'Hamburg', country: 'Germany'
    }
};

serialize<User>(user);
{
    id: 12,
    address_street: 'abc',
    address_postalCode: '1234',
    address_city: 'Hamburg',
    address_country: 'Germany'
}

//for deserialize you have to provide the embedded structure
deserialize<User>({
    id: 12,
    address_street: 'abc',
    //...
});

It’s possible to change the prefix (which is per default the property name).

interface User  {
    id: number & PrimaryKey;
    address: Embedded<Address, {prefix: 'addr_'}>;
}

serialize<User>(user);
{
    id: 12,
    addr_street: 'abc',
    addr_postalCode: '1234',
}

//or remove it entirely
interface User  {
    id: number & PrimaryKey;
    address: Embedded<Address, {prefix: ''}>;
}

serialize<User>(user);
{
    id: 12,
    street: 'abc',
    postalCode: '1234',
}
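The flattening that Embedded performs can be sketched in plain TypeScript. flattenEmbedded below is a hypothetical helper for illustration only, not Deepkit's API:

```typescript
// Flatten one embedded object property into its parent, prefixing its keys.
function flattenEmbedded(
    obj: Record<string, unknown>,
    key: string,
    prefix: string = key + '_'
): Record<string, unknown> {
    const result: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(obj)) {
        if (k === key) {
            // spread the embedded object's properties using the prefix
            for (const [ek, ev] of Object.entries(v as Record<string, unknown>)) {
                result[prefix + ek] = ev;
            }
        } else {
            result[k] = v;
        }
    }
    return result;
}

flattenEmbedded({id: 12, address: {street: 'abc', city: 'Hamburg'}}, 'address');
//{id: 12, address_street: 'abc', address_city: 'Hamburg'}

flattenEmbedded({id: 12, address: {street: 'abc'}}, 'address', '');
//{id: 12, street: 'abc'}
```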

2.2.11. Entity

To annotate interfaces with entity information. Only used in the database context.

import { Entity, PrimaryKey } from '@deepkit/type';

interface User extends Entity<{name: 'user', collection: 'users'}> {
    id: number & PrimaryKey;
    username: string;
}

2.2.12. InlineRuntimeType

TODO

2.2.13. ResetDecorator

TODO

2.2.14. Database

TODO: PrimaryKey, AutoIncrement, Reference, BackReference, Index, Unique, DatabaseField.

2.2.15. Validation

TODO

2.2.16. Custom Type Decorators

A type decorator can be defined as follows:

type MyAnnotation = {__meta?: ['myAnnotation']};

By convention, a type decorator is defined to be an object literal with a single optional property __meta that has a tuple as its type. The first entry in this tuple is its unique name and all subsequent tuple entries are arbitrary options. This allows a type decorator to be equipped with additional options.

type AnnotationOption<T extends {title: string}> = {__meta?: ['myAnnotation', T]};

The type decorator is used with the intersection operator &. Any number of type decorators can be used on one type.

type Username = string & MyAnnotation;
type Title = string & MyAnnotation & AnnotationOption<{title: 'Hello'}>;
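Because __meta is optional and never actually set, a decorated type stays assignable from its base type; at runtime the value is an ordinary string:

```typescript
type MyAnnotation = {__meta?: ['myAnnotation']};
type Username = string & MyAnnotation;

// The intersection carries meta information only for the type system;
// no property is added to the value itself.
const username: Username = 'peter';
typeof username; //'string'
```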

The type decorators can be read out via the type objects of typeOf<T>() and metaAnnotation:

import { typeOf, metaAnnotation } from '@deepkit/type';

const type = typeOf<Username>();
const annotation = metaAnnotation.getForName(type, 'myAnnotation'); //[]

The result in annotation is either an array with options if the type decorator myAnnotation was used, or undefined if not. If the type decorator has additional options, as seen in AnnotationOption, the passed values can be found in the array. Already supplied type decorators like MapName, Group, and Data have their own annotation objects:

import { typeOf, Group, groupAnnotation } from '@deepkit/type';
type Username = string & Group<'a'> & Group<'b'>;

const type = typeOf<Username>();
groupAnnotation.getAnnotations(type); //['a', 'b']

See Runtime Types Reflection to learn more.

2.3. External Classes

Since TypeScript does not include type information by default, imported types/classes from other packages (that did not use @deepkit/type-compiler) will not have type information available.

To annotate types for an external class, use annotateClass and make sure this function is executed in the bootstrap phase of your application before the imported class is used somewhere else.

import { MyExternalClass } from 'external-package';
import { annotateClass, serialize } from '@deepkit/type';

interface AnnotatedClass {
    id: number;
    title: string;
}

annotateClass<AnnotatedClass>(MyExternalClass);

//all uses of MyExternalClass return now the type of AnnotatedClass
serialize<MyExternalClass>({...});

//MyExternalClass can now also be used in other types
interface User {
    id: number;
    clazz: MyExternalClass;
}

MyExternalClass can now be used in serialization functions and in the reflection API.

The following shows how to annotate generic classes:

import { MyExternalClass } from 'external-package';
import { annotateClass } from '@deepkit/type';

class AnnotatedClass<T> {
    id!: T;
}

annotateClass(MyExternalClass, AnnotatedClass);

2.4. Reflection

To work directly with the type information itself, there are two basic variants: type objects and reflection classes. Reflection classes are discussed below. The function typeOf returns type objects, which are simple object literals. Every type object contains a kind, a number that gets its meaning from the enum ReflectionKind. ReflectionKind is defined in the @deepkit/type package as follows:

enum ReflectionKind {
  never,    //0
  any,     //1
  unknown, //2
  void,    //3
  object,  //4
  string,  //5
  number,  //6
  boolean, //7
  symbol,  //8
  bigint,  //9
  null,    //10
  undefined, //11

  //... and even more
}

There are a number of possible type objects that can be returned. The simplest ones are never, any, unknown, void, null, and undefined, which are represented as follows:

{kind: 0}; //never
{kind: 1}; //any
{kind: 2}; //unknown
{kind: 3}; //void
{kind: 10}; //null
{kind: 11}; //undefined

For example, number 0 is the first entry of the ReflectionKind enum, in this case never, number 1 is the second entry, here any, and so on. Accordingly, primitive types like string, number, boolean are represented as follows:

typeOf<string>(); //{kind: 5}
typeOf<number>(); //{kind: 6}
typeOf<boolean>(); //{kind: 7}
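Since kind is just a number, a small helper can map type objects back to readable names. kindName below is a hypothetical helper that assumes the enum order shown above:

```typescript
// The first twelve ReflectionKind names, in declaration order.
const kindNames = ['never', 'any', 'unknown', 'void', 'object', 'string',
    'number', 'boolean', 'symbol', 'bigint', 'null', 'undefined'];

function kindName(type: {kind: number}): string {
    return kindNames[type.kind] ?? `unknown kind ${type.kind}`;
}

kindName({kind: 5}); //'string'
kindName({kind: 3}); //'void'
```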

These rather simple types have no further information at the type object, because they were passed directly as type argument to typeOf. However, if types are passed via type aliases, additional information can be found at the type object.

type Title = string;

typeOf<Title>(); //{kind: 5, typeName: 'Title'}

In this case, the name of the type alias 'Title' is also available. If a type alias is a generic, the types passed will also be available at the type object.

type Title<T> = T extends true ? string : number;

typeOf<Title<true>>();
{kind: 5, typeName: 'Title', typeArguments: [{kind: 7}]}

If the type passed is the result of an index access operator, the container and the index type are present:

interface User {
  id: number;
  username: string;
}

typeOf<User['username']>();
{kind: 5, indexAccessOrigin: {
    container: {kind: ReflectionKind.objectLiteral, types: [...]},
    index: {kind: ReflectionKind.literal, literal: 'username'}
}}

Interfaces and object literals are both output as ReflectionKind.objectLiteral and contain their properties and methods in the types array.

interface User {
  id: number;
  username: string;
  login(password: string): void;
}

typeOf<User>();
{
  kind: ReflectionKind.objectLiteral,
  types: [
    {kind: ReflectionKind.propertySignature, name: 'id', type: {kind: 6}},
    {kind: ReflectionKind.propertySignature, name: 'username',
     type: {kind: 5}},
    {kind: ReflectionKind.methodSignature, name: 'login', parameters: [
      {kind: ReflectionKind.parameter, name: 'password', type: {kind: 5}}
    ], return: {kind: 3}},
  ]
}

type User  = {
  id: number;
  username: string;
  login(password: string): void;
}
typeOf<User>(); //returns the same object as above

Index signatures are also in the types array.

interface BagOfNumbers {
    [name: string]: number;
}


typeOf<BagOfNumbers>();
{
  kind: ReflectionKind.objectLiteral,
  types: [
    {
      kind: ReflectionKind.indexSignature,
      index: {kind: 5}, //string
      type: {kind: 6}, //number
    }
  ]
}

type BagOfNumbers  = {
    [name: string]: number;
}
typeOf<BagOfNumbers>(); //returns the same object as above

Classes are similar to object literals: in addition to classType, which is a reference to the class itself, they also have their properties and methods in a types array.

class User {
  id: number = 0;
  username: string = '';
  login(password: string): void {
     //do nothing
  }
}

typeOf<User>();
{
  kind: ReflectionKind.class,
  classType: User,
  types: [
    {kind: ReflectionKind.property, name: 'id', type: {kind: 6}},
    {kind: ReflectionKind.property, name: 'username',
     type: {kind: 5}},
    {kind: ReflectionKind.method, name: 'login', parameters: [
      {kind: ReflectionKind.parameter, name: 'password', type: {kind: 5}}
    ], return: {kind: 3}},
  ]
}

Note that the kind has changed from ReflectionKind.propertySignature to ReflectionKind.property and from ReflectionKind.methodSignature to ReflectionKind.method. Since properties and methods on classes carry additional attributes, this information can also be retrieved; among them are visibility, abstract, and default. Type objects of classes contain only the properties and methods of the class itself, not those of its super-classes. This is contrary to type objects of interfaces/object literals, which have all property signatures and method signatures of all parents resolved into types. To resolve the properties and methods of super-classes as well, either ReflectionClass and its ReflectionClass.getProperties() (see following sections) or resolveTypeMembers() from @deepkit/type can be used.

There is a whole plethora of type objects, for example for literals, template literals, promises, enums, unions, arrays, tuples, and many more. To find out which ones exist and what information is available, it is recommended to import the type Type from @deepkit/type. It is a union of all possible subtypes like TypeAny, TypeUnknown, TypeVoid, TypeString, TypeNumber, TypeObjectLiteral, TypeArray, TypeClass, and many more. There you can find the exact structure.

2.4.1. Type Cache

Type objects are cached for type aliases, functions, and classes as soon as no generic argument is passed. This means that a call to typeOf<MyClass>() always returns the same object.

type MyType = string;

typeOf<MyType>() === typeOf<MyType>(); //true

However, as soon as a generic type is used, new objects are always created, even if the type passed is always the same. This is because an infinite number of combinations are theoretically possible and such a cache would effectively be a memory leak.

type MyType<T> = T;

typeOf<MyType<string>>() === typeOf<MyType<string>>();
//false

However, as soon as a type is instantiated multiple times within a recursive type, it is cached. The cache only lives for the moment the type is being computed and does not exist thereafter. Also, although the type object is cached, a new reference is returned, so it is not the exact same object.

type MyType<T> = T;
type Object = {
   a: MyType<string>;
   b: MyType<string>;
};

typeOf<Object>();

MyType<string> is cached as long as Object is being computed. The property signatures of a and b thus receive the same type from the cache, but are not the same type object.

All non-root type objects have a parent property, which usually points to the enclosing parent. This is valuable, for example, to find out whether a Type is part of a union or not.

type ID = string | number;

typeOf<ID>();
*Ref 1* {
  kind: ReflectionKind.union,
  types: [
    {kind: ReflectionKind.string, parent: *Ref 1* },
    {kind: ReflectionKind.number, parent: *Ref 1* }
  ]
}

*Ref 1* points to the actual union type object.
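How such a parent check might be written can be sketched with a simplified, hypothetical TypeObject shape that mimics only the kind and parent properties of the real type objects (the enum values here are illustrative, not Deepkit's actual numbers):

```typescript
// Simplified stand-ins for the real type objects from @deepkit/type;
// the numeric enum values are illustrative only.
enum ReflectionKind { string = 5, number = 6, union = 23 }

interface TypeObject {
    kind: ReflectionKind;
    parent?: TypeObject;
    types?: TypeObject[];
}

// Uses the parent reference to check whether a type is a direct union member.
function isUnionMember(type: TypeObject): boolean {
    return type.parent !== undefined && type.parent.kind === ReflectionKind.union;
}

// Manually built equivalent of the string | number type object above:
const union: TypeObject = { kind: ReflectionKind.union, types: [] };
const str: TypeObject = { kind: ReflectionKind.string, parent: union };
const num: TypeObject = { kind: ReflectionKind.number, parent: union };
union.types = [str, num];

console.log(isUnionMember(str));   // true
console.log(isUnionMember(union)); // false, the union itself has no parent
```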

For cached type objects as exemplified above, the parent properties are not always the real parents. For example, for a class that is used multiple times, the immediate types in types (TypePropertySignature and TypeMethodSignature) point to the correct TypeClass, but the type of these signature types points to the signature types of the TypeClass of the cached entry. This is important to know so as not to read the parent structure infinitely, but only the immediate parent. The fact that parent references do not have infinite precision is due to performance reasons.

2.4.2. JIT Cache

In the further course some functions and features are described, which are often based on the type objects. To implement some of them in a performant way, a JIT (just in time) cache per type object is needed. This can be provided via getJitContainer(type). This function returns a simple object on which arbitrary data can be stored. As long as no reference to the object is held, it will be deleted automatically by the GC as soon as the Type object itself is also no longer referenced.
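The described GC behavior can be approximated in plain TypeScript with a WeakMap: the container lives exactly as long as the type object is referenced. This is a sketch of the idea, not Deepkit's actual implementation of getJitContainer:

```typescript
// A WeakMap keyed by the type object: entries are garbage-collected
// together with their key, so no manual cleanup is needed.
const jitCache = new WeakMap<object, Record<string, unknown>>();

// Returns a per-type container for arbitrary JIT data,
// similar in spirit to getJitContainer(type) from @deepkit/type.
function getContainer(type: object): Record<string, unknown> {
    let container = jitCache.get(type);
    if (!container) {
        container = {};
        jitCache.set(type, container);
    }
    return container;
}

const someType = { kind: 5 }; // stand-in for a real type object
getContainer(someType).compiledFn = () => true;

// Subsequent calls return the same container for the same type object.
console.log(getContainer(someType) === getContainer(someType)); // true
```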

2.4.3. Reflection Classes

In addition to the typeOf<>() function, there are various reflection classes that provide an OOP alternative to the Type objects. The reflection classes are only available for classes, interface/object literals and functions and their direct sub-types (properties, methods, parameters). All deeper types must be read again with the Type objects.

import { ReflectionClass } from '@deepkit/type';

interface User {
    id: number;
    username: string;
}


const reflection = ReflectionClass.from<User>();

reflection.getProperties(); //[ReflectionProperty, ReflectionProperty]
reflection.getProperty('id'); //ReflectionProperty

reflection.getProperty('id').name; //'id'
reflection.getProperty('id').type; //{kind: ReflectionKind.number}
reflection.getProperty('id').isOptional(); //false

2.4.4. Receive Type Information

In order to provide functions that operate on types, it can be useful to let the user pass a type manually. For example, in a validation function, it might be useful to receive the type to validate as the first type argument and the data to be validated as the first function argument.

validate<string>(1234);

In order for this function to receive the type string, it must communicate this to the type compiler.

function validate<T>(data: any, type?: ReceiveType<T>): void;

ReceiveType with the reference to the first type argument T signals the type compiler that each call to validate should place the type in the second function argument (since type is declared as the second parameter). To then read the information at runtime, the resolveReceiveType function is used.

import { resolveReceiveType, ReceiveType } from '@deepkit/type';

function validate<T>(data: any, type?: ReceiveType<T>): void {
    type = resolveReceiveType(type);
}

It is useful to assign the result to the same variable to avoid creating a new one unnecessarily. type now holds either a type object, or an error is thrown if, for example, no type argument was passed, Deepkit's type compiler was not installed correctly, or the emitting of type information is not activated (see the section Installation above).

2.5. Bytecode

This chapter explains in detail how Deepkit encodes and reads type information in JavaScript: how the types are actually converted into bytecode, emitted in JavaScript, and then interpreted at runtime.

2.5.1. Type Compiler

The type compiler (in @deepkit/type-compiler) is responsible for reading the types defined in TypeScript files and compiling them into bytecode. This bytecode contains everything needed to execute the types at runtime. At the time of this writing, the type compiler is a so-called TypeScript transformer. A transformer is a plugin for the TypeScript compiler itself that converts one TypeScript AST (Abstract Syntax Tree) into another. In this process, Deepkit's type compiler reads the AST, produces the corresponding bytecode, and inserts it into the AST.

TypeScript itself does not allow to configure this plugin aka transformer via a tsconfig.json. It is either necessary to use the TypeScript compiler API directly, or a build system like Webpack with ts-loader. To avoid this inconvenience for Deepkit users, the Deepkit type compiler automatically installs itself in node_modules/typescript when @deepkit/type-compiler is installed. This makes it possible for all build tools that access the locally installed TypeScript (the one in node_modules/typescript) to automatically have the type compiler enabled. This makes tsc, Angular, webpack, ts-node, and some other tools work automatically with Deepkit’s type compiler.

If the automatic execution of NPM install scripts is not activated and thus the locally installed TypeScript is not modified, this process must be executed manually. Alternatively, the type compiler can be used manually in a build tool such as webpack. See the Installation section above.

2.5.2. Bytecode Encoding

The bytecode is a sequence of commands for a virtual machine and is encoded in the JavaScript itself as an array of references and a string (the actual bytecode).

//TypeScript
type TypeA = string;

//generated JavaScript
const __ΩTypeA = ['&'];

The commands themselves are each one byte in size and can be found as the ReflectionOp enum in @deepkit/type-spec. At the time of this writing, the command set comprises over 80 commands.

enum ReflectionOp {
    never,
    any,
    unknown,
    void,
    object,

    string,
    number,

    //...many more
}

A sequence of commands is encoded as a string to save memory. The type string[], for example, is conceptually a bytecode program [string, array], which has the bytes [5, 37] and is encoded with the following algorithm:

function encodeOps(ops: ReflectionOp[]): string {
    return ops.map(v => String.fromCharCode(v + 33)).join('');
}

Accordingly, a 5 becomes an & character and a 37 becomes an F character. Together they become &F and are emitted in JavaScript as ['&F'].
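The decoding side simply reverses this character shift. A sketch of the counterpart to encodeOps, repeated here so the example is self-contained (the real decoder in @deepkit/type does much more, this only restores the byte values):

```typescript
// Illustrative subset of ReflectionOp with the byte values from the text.
enum ReflectionOp { string = 5, array = 37 }

// Shifts each command byte by +33 into a printable character.
function encodeOps(ops: number[]): string {
    return ops.map(v => String.fromCharCode(v + 33)).join('');
}

// Reverses the +33 shift, turning the string back into command bytes.
function decodeOps(encoded: string): number[] {
    const ops: number[] = [];
    for (let i = 0; i < encoded.length; i++) {
        ops.push(encoded.charCodeAt(i) - 33);
    }
    return ops;
}

const program = [ReflectionOp.string, ReflectionOp.array]; // bytecode for string[]
console.log(encodeOps(program)); // '&F'
console.log(decodeOps('&F'));    // [5, 37]
```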

//TypeScript
export type TypeA = string[];

//generated JavaScript
export const __ΩTypeA = ['&F'];

To prevent naming conflicts, each type is given a "__Ω" prefix. For each explicitly defined type that is exported or used by an exported type, bytecode is emitted in the JavaScript. Classes and functions also receive their bytecode directly as a property.

//TypeScript
function log(message: string): void {}

//generated JavaScript
function log(message) {}
log.__type = ['message', 'log', 'P&2!$/"'];

2.5.3. Virtual Machine

A virtual machine (in @deepkit/type the class Processor) at runtime is responsible for decoding and executing the encoded bytecode. It always returns a type object, see the Reflection section above.

3. Validation

Validation is the process of checking data for correctness. Correctness is given if the type is the correct one and additional defined constraints are fulfilled. Deepkit generally distinguishes between type validation and the validation of additional constraints.

Validation is used whenever data comes from a source that is considered uncertain. Uncertain means that no guaranteed assumptions can be made about the types or contents of the data, and thus the data could have literally any value at runtime. For example, data from user input is generally not considered secure. Data from an HTTP request (query parameter, body), CLI arguments, or a read-in file must be validated. If a variable is declared as a number, there must also be a number in it, otherwise the program may crash or a security hole may occur.

In a controller of an HTTP route, for example, the top priority is to check every user input (query parameter, body). Especially in the TypeScript environment, it is important not to use type casts, as they are fundamentally insecure.

app.post('/user', function(request) {
    const limit = request.body.limit as number;
});

This often seen code is a bug that can lead to a program crash or a security vulnerability because a type cast as number was used that does not provide any security at runtime. The user can simply pass a string as limit and the program would then work with a string in limit, although the code is based on the fact that it must be a number. To maintain this security at runtime there are validators and type guards. Also, a serializer could be used to convert limit to a number. More information about this can be found in Serialization.

Validation is an essential part of any application and it is better to use it once too often than once too little. Deepkit provides many validation options and has a high-performance implementation, so in most cases there is no need to worry about execution time. Use as much validation as possible, in case of doubt once more, to be on the safe side.

In doing so, many components of Deepkit such as the HTTP router, the RPC abstraction, but also the database abstraction itself have validation built in and is performed automatically, so in many cases it is not necessary to do this manually. In the corresponding chapters (CLI, HTTP, RPC, Database) it is explained in detail when a validation happens automatically. Make sure that you know where restrictions or types have to be defined and don’t use any to make these validations work well and safely automatically. This can save you a lot of manual work to keep the code clean and safe.

3.1. Use

The basic function of the validator is to check a value for its type. For example, whether a value is a string. This is not about what the string contains, but only about its type. There are many types in Typescript: string, number, boolean, bigint, objects, classes, interface, generics, mapped types, and many more. Due to Typescript’s powerful type system, a large variety of different types are available.

In JavaScript itself, primitive types can be checked with the typeof operator. For more complex types like interfaces, mapped types, or generic Set/Map, this is no longer easy, and a validator library like @deepkit/type becomes necessary. Deepkit is the only solution that allows validating all TypeScript types directly without any detours.
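The limits of typeof show up quickly: it works for primitives but collapses for everything structured, which is exactly where a type-driven validator is needed. A short illustration:

```typescript
// typeof distinguishes primitives reliably...
console.log(typeof 'abc'); // 'string'
console.log(typeof 123);   // 'number'
console.log(typeof true);  // 'boolean'

// ...but everything structured collapses into 'object',
// so interfaces, arrays, maps, etc. cannot be told apart.
console.log(typeof []);        // 'object'
console.log(typeof new Map()); // 'object'
console.log(typeof null);      // 'object' (a well-known JavaScript quirk)
```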

In Deepkit, type validation can be done using either the validate, is, or assert function. The function is is a so-called type guard and assert is a type assertion. Both will be explained in the next section. The function validate returns an array of found errors and on success an empty array. Each entry in this array describes the exact error code and the error message as well as the path when more complex types like objects or arrays are validated.

All three functions are used in roughly the same way. The type is specified or referenced as the first type argument and the data is passed as the first function argument.

import { validate } from '@deepkit/type';

const errors = validate<string>('abc'); //[]
const errors = validate<string>(123); //[{code: 'type', message: 'Not a string'}]

If you work with more complex types like classes or interfaces, the array can also contain several entries.

import { validate } from '@deepkit/type';

interface User {
    id: number;
    username: string;
}

validate<User>({id: 1, username: 'Joe'}); //[]

validate<User>(undefined); //[{code: 'type', message: 'Not an object'}]

validate<User>({});
//[
//  {path: 'id', code: 'type', message: 'Not a number'},
//  {path: 'username', code: 'type', message: 'Not a string'},
//]

The validator also supports deep recursive types. Paths are then separated with a dot.

import { validate } from '@deepkit/type';

interface User {
    id: number;
    username: string;
    supervisor?: User;
}

validate<User>({id: 1, username: 'Joe'}); //[]

validate<User>({id: 1, username: 'Joe', supervisor: {}});
//[
//  {path: 'supervisor.id', code: 'type', message: 'Not a number'},
//  {path: 'supervisor.username', code: 'type', message: 'Not a string'},
//]
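How such dotted paths typically come about can be sketched with a tiny hand-written checker — a toy stand-in hardcoded to the User shape above, whereas the real validate from @deepkit/type derives this logic from the type itself:

```typescript
interface ErrorItem { path: string; code: string; message: string; }

// Toy validator for the User shape; illustration only.
function checkUser(data: any, path = ''): ErrorItem[] {
    const errors: ErrorItem[] = [];
    // Nested properties get the parent path as a dot-separated prefix.
    const at = (name: string) => (path ? path + '.' + name : name);

    if (typeof data?.id !== 'number') {
        errors.push({ path: at('id'), code: 'type', message: 'Not a number' });
    }
    if (typeof data?.username !== 'string') {
        errors.push({ path: at('username'), code: 'type', message: 'Not a string' });
    }
    if (data?.supervisor !== undefined) {
        // Recurse into the optional supervisor with an extended path.
        errors.push(...checkUser(data.supervisor, at('supervisor')));
    }
    return errors;
}

const errors = checkUser({ id: 1, username: 'Joe', supervisor: {} });
console.log(errors.map(e => e.path)); // ['supervisor.id', 'supervisor.username']
```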

Take advantage of the benefits that TypeScript offers you. More complex types such as a User can be reused in multiple places without having to declare them again and again. For example, if a User is to be validated without its id, TypeScript utilities can be used to quickly and efficiently create derived subtypes. Very much in the spirit of DRY (Don't Repeat Yourself).

type UserWithoutId = Omit<User, 'id'>;

validate<UserWithoutId>({username: 'Joe'}); //valid!

Deepkit is the only major framework that has the ability to access TypeScript's types in this way at runtime. If you want to use types in frontend and backend, they can be moved to a separate file and imported anywhere. Use this option to your advantage to keep the code efficient and clean.

A type cast (contrary to type guard) in TypeScript is not a construct at runtime, but is only handled in the type system itself. It is not a safe way to assign a type to unknown data.

const data: any = ...;

const username = data.username as string;

if (username.startsWith('@')) { //might crash
}

The as string code is not safe. The variable data could have literally any value, for example {username: 123} or even {}, with the consequence that username is not a string but something completely different, and therefore the code username.startsWith('@') leads to an error, so that in the worst case the program crashes. To guarantee at runtime that data has a property username of type string, type guards must be used.

Type guards are functions that give TypeScript a hint about what type the passed data is guaranteed to have at runtime. Armed with this knowledge, TypeScript then "narrows" the type as the code progresses. For example, any can be made into a string, or any other type in a safe way. So if there is data of which the type is not known (any or unknown), a type guard helps to narrow it down more precisely based on the data itself. However, the type guard is only as safe as its implementation. If you make a mistake, this can have severe consequences, because fundamental assumptions suddenly turn out to be untrue.

3.2. Type-Guard

A type guard for the type User used above could, in its simplest form, look as follows. Note that the NaN peculiarities explained earlier are not handled here, so this type guard is not quite correct.

function isUser(data: any): data is User {
    return 'object' === typeof data
           && 'number' === typeof data.id
           && 'string' === typeof data.username;
}

isUser({}); //false

isUser({id: 1, username: 'Joe'}); //true

A type guard always returns a boolean and is usually used directly in an if statement.

const data: any = await fetch('/user/1');

if (isUser(data)) {
    data.id; //can be safely accessed and is a number
}

Writing a separate function for each type guard, especially for more complex types, and then adapting it every time a type changes is extremely tedious, error-prone, and not efficient. Therefore, Deepkit provides the function is, which automatically provides a Type-Guard for any TypeScript type. This then also automatically takes into account special features such as the above-mentioned problem with NaN. The function is does the same as validate, but instead of an array of errors it simply returns a boolean.

import { is } from '@deepkit/type';

is<string>('abc'); //true
is<string>(123); //false


const data: any = await fetch('/user/1');

if (is<User>(data)) {
    //data is guaranteed to be of type User now
}

A pattern that can be found more often is to return an error directly in case of incorrect validation, so that subsequent code is not executed. This can be used in various places without changing the complete flow of the code.

function addUser(data: any): void {
    if (!is<User>(data)) throw new TypeError('No user given');

    //data is guaranteed to be of type User now
}

Alternatively, a TypeScript type assertion can be used. The assert function automatically throws an error if the given data does not validate correctly to a type. The special signature of the function, which distinguishes TypeScript type assertions, helps TypeScript to automatically narrow the passed variable.

import { assert } from '@deepkit/type';

function addUser(data: any): void {
    assert<User>(data); //throws on invalid data

    //data is guaranteed to be of type User now
}

Here, too, take advantage of the benefits that TypeScript offers you. Types can be reused or customized using various TypeScript functions.

3.3. Error Reporting

The functions is and validates return a boolean as result, and assert throws on failure. To get exact information about failed validation rules, the validate function can be used. It returns an empty array if everything was validated successfully. In case of errors the array will contain one or more entries with the following structure:

interface ValidationErrorItem {
    /**
     * The path to the property. Might be a deep path separated by dot.
     */
    path: string;
    /**
     * A lower cased error code that can be used to identify this error and translate.
     */
    code: string,
    /**
     * Free text of the error.
     */
    message: string,
}

The function receives as first type argument any TypeScript type and as first argument the data to validate.

import { validate } from '@deepkit/type';

validate<string>('Hello'); //[]
validate<string>(123); //[{code: 'type', message: 'Not a string', path: ''}]

validate<number>(123); //[]
validate<number>('Hello'); //[{code: 'type', message: 'Not a number', path: ''}]

Complex types such as interfaces, classes, or generics can also be used.

import { validate } from '@deepkit/type';

interface User {
    id: number;
    username: string;
}

validate<User>(undefined); //[{code: 'type', message: 'Not an object', path: ''}]
validate<User>({}); //[{code: 'type', message: 'Not a number', path: 'id'}]
validate<User>({id: 1}); //[{code: 'type', message: 'Not a string', path: 'username'}]
validate<User>({id: 1, username: 'Peter'}); //[]

3.4. Constraints

In addition to checking the types, arbitrary further constraints can be added to a type. The validation of these additional content constraints happens automatically after the types themselves have been validated. This is done in all validation functions like validate, is, and assert. A constraint can be, for example, that a string must have a certain minimum or maximum length. These constraints are added to the actual types via type decorators. There is a whole variety of decorators that can be used, and custom decorators can be defined and used for extended needs.

type Username = string & MinLength<3>;

With & any number of type decorators can be added to the actual type. The result, here Username, can then be used in all validation functions and also in other types.

is<Username>('ab'); //false, because minimum length is 3
is<Username>('Joe'); //true

interface User {
  id: number;
  username: Username;
}

is<User>({id: 1, username: 'ab'}); //false, because minimum length is 3
is<User>({id: 1, username: 'Joe'}); //true

The function validate gives useful error messages coming from the constraints.

const errors = validate<Username>('xb');
//[{ code: 'minLength', message: `Min length is 3` }]

This information can, for example, be displayed automatically in a form and translated by means of the error code. Thanks to the path provided for objects and arrays, a form can filter out and display the appropriate error for each field.

validate<User>({id: 1, username: 'ab'});
//{ path: 'username', code: 'minLength', message: `Min length is 3` }

An often useful use case is to define an email type with a RegExp constraint. Once the type is defined, it can be used anywhere.

export const emailRegexp = /^\S+@\S+$/;
type Email = string & Pattern<typeof emailRegexp>

is<Email>('abc'); //false
is<Email>('abc@example.com'); //true

Any number of constraints can be added.

type ID = number & Positive & Maximum<1000>;

is<ID>(-1); //false
is<ID>(123); //true
is<ID>(1001); //false, because maximum is 1000

3.4.1. Constraint Types

Validate<typeof MyValidator>

Validation using a custom validator function. See next section Custom Validator for more information.

	type T = string & Validate<typeof myValidator>
Pattern<typeof MyRegexp>

Defines a regular expression as validation pattern. Usually used for email validation or more complex content validation.

	const myRegExp = /[a-zA-Z]+/;
	type T = string & Pattern<typeof myRegExp>
Alpha

Validation for alpha characters (a-Z).

	type T = string & Alpha;
Alphanumeric

Validation for alpha and numeric characters.

	type T = string & Alphanumeric;
Ascii

Validation for ASCII characters.

	type T = string & Ascii;
Decimal<number, number>

Validation for a string that represents a decimal number, such as 0.1, .3, 1.1, 1.00003, 4.0, etc.

	type T = string & Decimal<1, 2>;
MultipleOf<number>

Validation of numbers that are a multiple of given number.

	type T = number & MultipleOf<3>;
MinLength<number>, MaxLength<number>

Validation for min/max length for arrays or strings.

	type T = any[] & MinLength<1>;

	type T = string & MinLength<3> & MaxLength<16>;
Includes<'any'> Excludes<'any'>

Validation for an array item or substring being included/excluded.

	type T = any[] & Includes<'abc'>;
	type T = string & Excludes<' '>;
Minimum<number>, Maximum<number>

Validation for a value being at least the minimum or at most the maximum given number. Same as >= and <=.

	type T = number & Minimum<10>;
	type T = number & Minimum<10> & Maximum<1000>;
ExclusiveMinimum<number>, ExclusiveMaximum<number>

Same as minimum/maximum but excludes the value itself. Same as > and <.

	type T = number & ExclusiveMinimum<10>;
	type T = number & ExclusiveMinimum<10> & ExclusiveMaximum<1000>;
Positive, Negative, PositiveNoZero, NegativeNoZero

Validation for a value being positive or negative.

	type T = number & Positive;
	type T = number & Negative;
BeforeNow, AfterNow

Validation for a date value compared to now (new Date).

	type T = Date & BeforeNow;
	type T = Date & AfterNow;
Email

Simple regexp validation of emails via /^\S+@\S+$/. Is automatically a string, so no need to write string & Email.

	type T = Email;
Integer

Ensures that the number is an integer in the correct range. Is automatically a number, so no need to do number & integer.

	type T = integer;
	type T = uint8;
	type T = uint16;
	type T = uint32;
	type T = int8;
	type T = int16;
	type T = int32;

See Special types: integer/floats for more information

3.4.2. Custom Validator

If the built-in validators are not sufficient, custom validation functions can be created and used via the Validate decorator.

import { ValidatorError, Validate, Type, validates, validate }
  from '@deepkit/type';

function titleValidation(value: string, type: Type) {
    value = value.trim();
    if (value.length < 5) {
        return new ValidatorError('tooShort', 'Value is too short');
    }
}

interface Article {
    id: number;
    title: string & Validate<typeof titleValidation>;
}

console.log(validates<Article>({id: 1})); //false
console.log(validates<Article>({id: 1, title: 'Peter'})); //true
console.log(validates<Article>({id: 1, title: ' Pe     '})); //false
console.log(validate<Article>({id: 1, title: ' Pe     '})); //[ValidationErrorItem]

Note that your custom validation function is executed after all built-in type validators have been called. If a validator fails, all subsequent validators for the current type are skipped. Only one failure is possible per type.
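The described execution order — built-in type checks first, custom validators after, stop at the first failure per type — can be sketched as a simple validator chain. This is a hand-rolled illustration of the behavior, not Deepkit's internal implementation:

```typescript
// A validator returns an error code on failure, or undefined on success.
type Validator = (value: any) => string | undefined;

// Runs validators in order and stops at the first failure,
// mirroring the 'only one failure per type' behavior described above.
function runChain(value: any, validators: Validator[]): string | undefined {
    for (const validator of validators) {
        const error = validator(value);
        if (error !== undefined) return error; // skip all subsequent validators
    }
    return undefined;
}

// Built-in type check first, custom constraint second.
const isString: Validator = v => typeof v === 'string' ? undefined : 'type';
const minLength5: Validator = v => v.trim().length >= 5 ? undefined : 'tooShort';

console.log(runChain(123, [isString, minLength5]));     // 'type' (minLength5 never runs)
console.log(runChain(' Pe  ', [isString, minLength5])); // 'tooShort'
console.log(runChain('Hello', [isString, minLength5])); // undefined
```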

Generic Validator

In the validator function, the type object is available, which can be used to obtain more information about the type being validated. There is also the possibility to define an arbitrary validator option that must be passed to the Validate type, making the validator configurable. With this information and its parent references, powerful generic validators can be created.

import { ValidatorError, Validate, Type, is, validate }
  from '@deepkit/type';

function startsWith(value: any, type: Type, chars: string) {
    const valid = 'string' === typeof value && value.startsWith(chars);
    if (!valid) {
        return new ValidatorError('startsWith', 'Does not start with ' + chars)
    }
}

type MyType = string & Validate<typeof startsWith, 'a'>;

is<MyType>('aah'); //true
is<MyType>('nope'); //false

const errors = validate<MyType>('nope');
//[{ path: '', code: 'startsWith', message: 'Does not start with a' }]

4. Serialization

Serialization is the process of converting data types into a format suitable, for example, for transport or storage. Deserialization is the process of undoing this. It happens losslessly, meaning that data can be converted to and from a serialization target without losing type information or the data itself.

In JavaScript, serialization is usually between JavaScript objects and JSON. JSON supports only String, Number, Boolean, Objects, and Arrays. JavaScript, on the other hand, supports many other types such as BigInt, ArrayBuffer, typed arrays, Date, custom class instances, and many more. Now, to transmit JavaScript data to a server using JSON, you need a serialization process (on the client) and a deserialization process (on the server), or vice versa if the server sends data to the client as JSON. Using JSON.parse and JSON.stringify is often not sufficient for this, as it is not lossless.

This serialization process is absolutely necessary for non-trivial data, since JSON loses information even for basic types like a date. A new Date is ultimately serialized as a string in JSON:

const json = JSON.stringify(new Date);
//'"2022-05-13T20:48:51.025Z"'

As you can see, the result of JSON.stringify is a JSON string. If you deserialize it again with JSON.parse, you will not get a date object, but a string.

const value = JSON.parse('"2022-05-13T20:48:51.025Z"');
//"2022-05-13T20:48:51.025Z"

Although there are various workarounds to teach JSON.parse to deserialize Date objects, they are error-prone and poorly performing. To enable type-safe serialization and deserialization for this case and many other types, a serialization process is necessary.
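One such workaround is a JSON.parse reviver that sniffs for ISO date strings — shown here only to illustrate why the approach is fragile: any string that merely looks like a date gets converted too.

```typescript
// Naive heuristic: anything matching an ISO 8601 timestamp becomes a Date.
const isoDate = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?Z$/;

// Reviver passed to JSON.parse; runs for every key/value pair.
function dateReviver(key: string, value: unknown): unknown {
    if (typeof value === 'string' && isoDate.test(value)) {
        return new Date(value);
    }
    return value;
}

const result = JSON.parse('{"created": "2022-05-13T20:48:51.025Z"}', dateReviver);
console.log(result.created instanceof Date); // true

// The fragility: a field that was meant to stay a string
// is silently converted as well, because the reviver has no type information.
const oops = JSON.parse('{"comment": "2022-05-13T20:48:51.025Z"}', dateReviver);
console.log(oops.comment instanceof Date); // true, although a string was meant
```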

There are four main functions available: serialize, cast, deserialize, and validatedDeserialize. Under the hood of these functions, the globally available JSON serializer from @deepkit/type is used by default, but a custom serialization target can also be used.

Deepkit Type supports user-defined serialization targets, but already comes with a powerful JSON serialization target that serializes data into JSON objects, which can then be correctly and safely converted to JSON using JSON.stringify. With @deepkit/bson, BSON can also be used as a serialization target. How to create a custom serialization target (for example for a database driver) can be learned in the Custom Serializer section.

Note that although serializers also validate data for compatibility, these validations are different from the validation in Validation. Only the cast function also calls the full validation process from the Validation chapter after successful deserialization, and throws an error if the data is not valid.

Alternatively, validatedDeserialize can be used to validate after deserialization. Another alternative is to manually call the validate or validates functions on deserialized data from the deserialize function, see Validation. All functions from serialization and validation throw a ValidationError from @deepkit/type on errors.

4.1. Cast

Todo

4.2. Serialization

import { serialize } from '@deepkit/type';

class MyModel {
    id: number = 0;
    created: Date = new Date;

    constructor(public name: string) {
    }
}

const model = new MyModel('Peter');

const jsonObject = serialize<MyModel>(model);
//{
//  id: 0,
//  created: '2021-06-10T15:07:24.292Z',
//  name: 'Peter'
//}
const json = JSON.stringify(jsonObject);

The function serialize converts the passed data by default with the JSON serializer into a JSON object, that is: String, Number, Boolean, Object, or Array. The result can then be safely converted to a JSON string with JSON.stringify.

4.3. Deserialization

The function deserialize converts the passed data by default with the JSON serializer into the corresponding specified types. The JSON serializer expects a JSON object, i.e.: string, number, boolean, object, or array. This is usually obtained from a JSON.parse call.

import { deserialize } from '@deepkit/type';

class MyModel {
    id: number = 0;
    created: Date = new Date;

    constructor(public name: string) {
    }
}

const myModel = deserialize<MyModel>({
    id: 5,
    created: 'Sat Oct 13 2018 14:17:35 GMT+0200',
    name: 'Peter',
});

//from JSON
const json = '{"id": 5, "created": "Sat Oct 13 2018 14:17:35 GMT+0200", "name": "Peter"}';
const myModel = deserialize<MyModel>(JSON.parse(json));

If the correct data type is already passed (for example, a Date object in the case of created), then this is taken as it is.

Not only a class, but any TypeScript type can be specified as the first type argument. So even primitives or very complex types can be passed:

deserialize<Date>('Sat Oct 13 2018 14:17:35 GMT+0200');
deserialize<string | number>(23);

4.3.1. Soft Type Conversion

In the deserialization process, a soft type conversion is implemented. This means that a String can be accepted and converted automatically for Number types, or a Number for a String type. This is useful, for example, when data is accepted via a URL and passed to the deserializer. Since URL values are always strings, Deepkit Type still tries to resolve the types for Number and Boolean.

deserialize<boolean>('false'); //false
deserialize<boolean>('0'); //false
deserialize<boolean>('1'); //true

deserialize<number>('1'); //1

deserialize<string>(1); //'1'

The following soft type conversions are built into the JSON serializer:

  • number|bigint: Number or BigInt accept String, Number, and BigInt. parseFloat(x) or BigInt(x) is used when a conversion is necessary.

  • boolean: Boolean accepts Number and String. 0, '0', and 'false' are interpreted as false; 1, '1', and 'true' are interpreted as true.

  • string: String accepts Number, String, Boolean, and many more. All non-string values are automatically converted with String(x).
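These rules can be sketched conceptually as follows. This is a simplified illustration of soft conversion, not Deepkit's actual implementation:

```typescript
// Simplified sketch of soft (loose) type conversion in a JSON serializer.
function looseNumber(value: unknown): number {
    if (typeof value === 'number') return value;
    if (typeof value === 'string') {
        const n = parseFloat(value);
        if (!isNaN(n)) return n;
    }
    throw new Error(`Cannot convert ${String(value)} to number`);
}

function looseBoolean(value: unknown): boolean {
    if (typeof value === 'boolean') return value;
    if (value === 1 || value === '1' || value === 'true') return true;
    if (value === 0 || value === '0' || value === 'false') return false;
    throw new Error(`Cannot convert ${String(value)} to boolean`);
}

console.log(looseBoolean('false')); //false
console.log(looseNumber('1')); //1
```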

The soft conversion can also be deactivated:

const result = deserialize(data, {loosely: false});

In the case of invalid data, no conversion is attempted and an error is thrown instead.

4.4. Type Decorators

4.4.1. Integer

4.4.2. Group

4.4.3. Excluded

4.4.4. Mapped

4.4.5. Embedded

4.5. Naming Strategy

4.6. Custom Serializer

By default, @deepkit/type comes with a JSON serializer and type validation for TypeScript types. You can extend this and add or remove the serialization functionality or change the way validation is done, since validation is also linked to the serializer.

4.6.1. New Serializer

A serializer is simply an instance of the Serializer class with registered serializer templates. Serializer templates are small functions that create JavaScript code for the JIT serializer process. For each type (String, Number, Boolean, etc.) there is a separate Serializer template that is responsible for returning code for data conversion or validation. This code must be compatible with the JavaScript engine that the user is using.

Only during the execution of the compiler template function do you (or should you) have full access to the full type. The idea is that you should embed all the information needed to convert a type directly into the JavaScript code, resulting in highly optimized code (also called JIT-optimized code).
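The code-generation idea can be illustrated with a generic sketch. The following is not Deepkit's actual template API; it only demonstrates how type information can be baked into a JavaScript function string that is compiled once with new Function:

```typescript
// Illustration of the JIT idea: embed the type information directly into
// generated JavaScript code, then compile that code once.
type Prop = { name: string; kind: 'string' | 'number' | 'date' };

function buildDeserializer(props: Prop[]): (data: any) => any {
    const lines: string[] = ['const result = {};'];
    for (const p of props) {
        const accessor = `data.${p.name}`;
        if (p.kind === 'number') lines.push(`result.${p.name} = parseFloat(${accessor});`);
        else if (p.kind === 'date') lines.push(`result.${p.name} = new Date(${accessor});`);
        else lines.push(`result.${p.name} = String(${accessor});`);
    }
    lines.push('return result;');
    //compiled once; subsequent calls run plain optimized JavaScript
    return new Function('data', lines.join('\n')) as (data: any) => any;
}

const deserializeUser = buildDeserializer([
    { name: 'id', kind: 'number' },
    { name: 'created', kind: 'date' },
]);
console.log(deserializeUser({ id: '5', created: '2022-05-13T20:48:51.025Z' }));
```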

The following example creates an empty serializer.

import { deserialize, EmptySerializer } from '@deepkit/type';

class User {
    name: string = '';
    created: Date = new Date;
}

const mySerializer = new EmptySerializer('mySerializer');

const user = deserialize<User>({ name: 'Peter', created: 0 }, undefined, mySerializer);
console.log(user);
$ ts-node app.ts
User { name: 'Peter', created: 0 }

As you can see, nothing has been converted (created is still a number, but we have defined it as date). To change this, we add a serializer template for deserialization of type Date.

mySerializer.deserializeRegistry.registerClass(Date, (type, state) => {
    state.addSetter(`new Date(${state.accessor})`);
});

const user = deserialize<User>({ name: 'Peter', created: 0 }, undefined, mySerializer);
console.log(user);
$ ts-node app.ts
User { name: 'Peter', created: 2021-06-10T19:34:27.301Z }

Now our serializer converts the value into a Date object.

To do the same for serialization, we register another serialization template.

mySerializer.serializeRegistry.registerClass(Date, (type, state) => {
    state.addSetter(`${state.accessor}.toJSON()`);
});

const user1 = new User();
user1.name = 'Peter';
user1.created = new Date('2021-06-10T19:34:27.301Z');
console.log(serialize(user1, undefined, mySerializer));
{ name: 'Peter', created: '2021-06-10T19:34:27.301Z' }

Our new serializer now correctly converts the date from the Date object to a string in the serialization process.

4.6.2. Examples

To see many more examples, you can take a look at the code of the JSON serializer included in Deepkit Type.

4.6.3. Expanding A Serializer

If you want to extend an existing serializer, you can do so using class inheritance. This works because serializers should be written to register their templates in the constructor.

class MySerializer extends Serializer {
    constructor(name: string = 'mySerializer') {
        super(name);
        this.registerTemplates();
    }

    protected registerTemplates() {
        this.deserializeRegistry.register(ReflectionKind.string, (type, state) => {
            state.addSetter(`String(${state.accessor})`);
        });

        this.deserializeRegistry.registerClass(Date, (type, state) => {
            state.addSetter(`new Date(${state.accessor})`);
        });

        this.serializeRegistry.registerClass(Date, (type, state) => {
            state.addSetter(`${state.accessor}.toJSON()`);
        });
    }
}
const mySerializer = new MySerializer();

5. Dependency Injection

Dependency Injection (DI) is a design pattern in which classes and functions receive their dependencies. It follows the principle of Inversion of Control (IoC) and helps to better separate complex code in order to significantly improve testability, modularity and clarity. Although there are other design patterns, such as the service locator pattern, for applying the principle of IoC, DI has established itself as the dominant pattern, especially in enterprise software.

To illustrate the principle of IoC, here is an example:

import { HttpClient } from 'http-library';

class UserRepository {
    async getUsers(): Promise<Users> {
        const client = new HttpClient();
        return await client.get('/users');
    }
}

The UserRepository class has an HttpClient as a dependency. This dependency in itself is nothing remarkable, but it is problematic that UserRepository creates the HttpClient itself. This is obvious at first glance, but it has its drawbacks: What if we want to replace the HttpClient? What if we want to test UserRepository in a unit test without allowing real HTTP requests to go out? How do we know that the class even uses an HttpClient?

5.1. Inversion Of Control

Following the idea of Inversion of Control (IoC), here is an alternative variant that declares the HttpClient as an explicit dependency in the constructor (also known as constructor injection).

class UserRepository {
    constructor(
        private http: HttpClient
    ) {}

    async getUsers(): Promise<Users> {
        return await this.http.get('/users');
    }
}

Now UserRepository is no longer responsible for creating the HttpClient, but the user of UserRepository. This is Inversion of Control (IoC). The control has been reversed or inverted. Specifically, this code applies dependency injection, because dependencies are received (injected) and no longer created or requested. Dependency Injection is only one variant of IoC.
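Because the dependency is now received from the outside, a unit test can pass in a fake client so that no real HTTP request goes out. A self-contained sketch (HttpClient is modeled here as a minimal interface and Users as a simple array type, since both come from the hypothetical http-library):

```typescript
// Minimal stand-ins for the types from the example above.
interface HttpClient {
    get(path: string): Promise<any>;
}
type Users = { name: string }[];

class UserRepository {
    constructor(private http: HttpClient) {}

    async getUsers(): Promise<Users> {
        return await this.http.get('/users');
    }
}

// A fake client for the unit test: no real HTTP request is performed.
const fakeClient: HttpClient = {
    async get(path: string) {
        return [{ name: 'Peter' }];
    },
};

const repository = new UserRepository(fakeClient);
repository.getUsers().then(users => console.log(users[0].name)); //'Peter'
```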

5.2. Service Locator

Besides DI, Service Locator (SL) is also a way to apply the IoC principle. This is commonly considered the counterpart to Dependency Injection, as it requests dependencies rather than receiving them. If HttpClient were requested in the above code as follows, it would be called a Service Locator pattern.

class UserRepository {
    async getUsers(): Promise<Users> {
        const client = locator.getHttpClient();
        return await client.get('/users');
    }
}

The function locator.getHttpClient can have any name. Alternatives would be function calls like useContext(HttpClient), getHttpClient(), await import("client"), or a container call like container.get(HttpClient). An import of a global is a slightly different variant of a service locator, using the module system itself as the locator:

import { httpClient } from 'clients'

class UserRepository {
    async getUsers(): Promise<Users> {
        return await httpClient.get('/users');
    }
}

All these variants have in common that they explicitly request the HttpClient dependency. This request can occur not only as a property default value, but also anywhere in the middle of the code. Since "in the middle of the code" means it is not part of a type interface, the use of the HttpClient is hidden. Depending on how the HttpClient is requested, it can sometimes be very difficult or completely impossible to replace it with another implementation. Especially for unit tests and for the sake of clarity, difficulties can arise here, so the service locator is now classified as an anti-pattern in certain situations.

5.3. Dependency Injection

With Dependency Injection, nothing is requested, but it is explicitly provided by the user or received by the code. As can be seen in the example of Inversion of Control, the dependency injection pattern has already been applied there. Specifically, constructor injection can be seen there, since the dependency is declared in the constructor. So UserRepository must now be used as follows.

const users = new UserRepository(new HttpClient());

The code that wants to use UserRepository must also provide (inject) all its dependencies. Whether HttpClient should be created each time or the same one should be used each time is now decided by the user of the class and no longer by the class itself. It is no longer requested (from the class’s point of view) as in the case of the service locator, or created entirely by itself in the initial example. This inversion of the flow has various advantages:

  • The code is easier to understand because all dependencies are explicitly visible.

  • The code is easier to test because all dependencies are unique and can be easily modified if needed.

  • The code is more modular, as dependencies can be easily exchanged.

  • It promotes the Separation of Concerns principle, as UserRepository is no longer responsible for creating potentially very complex dependencies itself.

But an obvious disadvantage can also be recognized directly: Do I really need to create or manage all dependencies like the HttpClient myself? Yes and No. Yes, there are many cases where it is perfectly legitimate to manage the dependencies yourself. The hallmark of a good API is that dependencies don’t get out of hand, and that even then they are pleasant to use. For many applications or complex libraries, this may well be the case. To provide a very complex low-level API with many dependencies in a simplified way to the user, facades are wonderfully suitable.
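As an illustration of that last point, a facade can wire up complex internals itself while still accepting them from the outside. A hypothetical sketch (all class names are invented for illustration):

```typescript
// Hypothetical low-level pieces that would otherwise all have to be
// created and wired by the user.
class ConnectionPool {}

class QueryBuilder {
    constructor(private pool: ConnectionPool) {}
}

class Migrator {
    constructor(private pool: ConnectionPool) {}
}

// The facade wires the pieces internally and exposes a simple entry point,
// while still allowing a custom pool to be injected (e.g. for tests).
class Database {
    public readonly query: QueryBuilder;
    public readonly migrator: Migrator;

    constructor(pool: ConnectionPool = new ConnectionPool()) {
        this.query = new QueryBuilder(pool);
        this.migrator = new Migrator(pool);
    }
}

const db = new Database(); //the common case stays a one-liner
```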

5.4. Dependency Injection Container

For more complex applications, however, it is not necessary to manage all dependencies yourself, because that is exactly what a so-called dependency injection container is for. This not only creates all objects automatically, but also "injects" the dependencies automatically, so that a manual "new" call is no longer necessary. There are various types of injection, such as constructor injection, method injection, or property injection. This makes it easy to manage even complicated constructions with many dependencies.

Deepkit provides a dependency injection container (also called DI container or IoC container) in @deepkit/injector, and it is also already integrated via app modules in the Deepkit framework. The above code would look like this using the low-level API from the @deepkit/injector package.

import { InjectorContext } from '@deepkit/injector';

const injector = InjectorContext.forProviders(
    [UserRepository, HttpClient]
);

const userRepo = injector.get(UserRepository);

const users = await userRepo.getUsers();

The injector object in this case is the dependency injection container. Instead of using "new UserRepository", the container returns an instance of UserRepository using get(UserRepository). To initialize the container statically a list of providers is passed to the function InjectorContext.forProviders (in this case simply the classes). Since DI is all about providing dependencies, the dependencies are provided to the container, hence the technical term "provider". There are various types of providers: ClassProvider, ValueProvider, ExistingProvider, FactoryProvider. All together, they allow very flexible architectures to be mapped with a DI container.

All dependencies between providers are automatically resolved and as soon as an injector.get() call occurs, the objects and dependencies are created, cached, and correctly passed either as a constructor argument (constructor injection), set as a property (property injection), or passed to a method call (method injection).

Now to exchange the HttpClient with another one, another provider (here the ValueProvider) can be defined for HttpClient:

const injector = InjectorContext.forProviders([
    UserRepository,
    {provide: HttpClient, useValue: new AnotherHttpClient()},
]);

As soon as UserRepository is requested via injector.get(UserRepository), it receives the AnotherHttpClient object. Alternatively, a ClassProvider can be used here very well, so that all dependencies of AnotherHttpClient are also managed by the DI container.

const injector = InjectorContext.forProviders([
    UserRepository,
    {provide: HttpClient, useClass: AnotherHttpClient},
]);

All types of providers are listed and explained in the Dependency Injection Providers section.

It should be mentioned here that Deepkit’s DI container only works with Deepkit’s runtime types. This means that any code that contains classes, types, interfaces, and functions must be compiled by the Deepkit Type Compiler in order to have the type information available at runtime. See the chapter Runtime Types.

5.5. Dependency Inversion

The example of UserRepository under Inversion of Control shows that UserRepository depends on a lower level HTTP library. In addition, a concrete implementation (class) is declared as a dependency instead of an abstraction (interface). At first glance, this may seem to be in line with the object-oriented paradigms, but it can lead to problems, especially in complex and large architectures.

An alternative variant would be to convert the HttpClient dependency into an abstraction (interface) and thus not import code from an HTTP library into UserRepository.

interface HttpClientInterface {
   get(path: string): Promise<any>;
}

class UserRepository {
    constructor(
        private http: HttpClientInterface
    ) {}

    async getUsers(): Promise<Users> {
        return await this.http.get('/users');
    }
}

This is called the dependency inversion principle. UserRepository no longer has a dependency directly on an HTTP library and is instead based on an abstraction (interface). It thus solves two fundamental goals in this principle:

  • High-level modules should not import anything from low-level modules.

  • Implementations should be based on abstractions (interfaces).

Merging the two implementations (UserRepository with an HTTP library) can now be done via the DI container.

import { HttpClient } from 'http-library';
import { UserRepository } from './user-repository';

const injector = InjectorContext.forProviders([
    UserRepository,
    HttpClient,
]);

Since Deepkit’s DI container is capable of resolving abstract dependencies (interfaces) like this one of HttpClientInterface, UserRepository automatically receives the implementation of HttpClient, since HttpClient implements the interface HttpClientInterface. This happens either because HttpClient explicitly implements HttpClientInterface (class HttpClient implements HttpClientInterface), or because HttpClient’s API is simply structurally compatible with HttpClientInterface. As soon as HttpClient changes its API (for example, removes the get method) and is thus no longer compatible with HttpClientInterface, the DI container throws an error ("the HttpClientInterface dependency was not provided"). In that case, the user who wants to bring both implementations together is obliged to find a solution. As an example, an adapter class could be registered that implements HttpClientInterface and correctly forwards the method calls to HttpClient.
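Such an adapter could be sketched as follows (the request method on the third-party client is a hypothetical example of an incompatible API):

```typescript
// The abstraction from the example above.
interface HttpClientInterface {
    get(path: string): Promise<any>;
}

// Hypothetical third-party client whose API is not compatible with the
// interface (it only exposes a generic request method).
class HttpClient {
    async request(method: string, path: string): Promise<any> {
        return { method, path }; //stand-in for a real HTTP call
    }
}

// The adapter implements the abstraction and forwards calls to the client.
class HttpClientAdapter implements HttpClientInterface {
    constructor(private client: HttpClient) {}

    get(path: string): Promise<any> {
        return this.client.request('GET', path);
    }
}

// It could then be registered in place of HttpClient, e.g.:
// provide<HttpClientInterface>({useValue: new HttpClientAdapter(new HttpClient())})
```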

It should be noted here that although in theory the dependency inversion principle has its advantages, in practice it also has significant disadvantages. It not only leads to more code (since more interfaces have to be written), but also to more complexity (since each implementation now has an interface for each dependency). This price to pay is only worth it when the application reaches a certain size and this flexibility is needed. Like any design pattern and principle, this one has its cost-use factor, which should be thought through before it is applied. Design patterns should not be used blindly and across the board for even the simplest code. However, if the prerequisites such as a complex architecture, large applications, or a scaling team are given, dependency inversion and other design patterns only unfold their true strength.

5.6. Installation

Since Dependency Injection in Deepkit is based on Runtime Types, it is necessary to have @deepkit/type already installed correctly. See Runtime Type Installation.

If this is done successfully, @deepkit/injector can be installed on its own, or via the Deepkit framework, which already uses the library under the hood.

npm install @deepkit/injector

Once the library is installed, the API of it can be used directly.

5.7. Use

There are three ways to use Dependency Injection.

  • Injector API (Low Level)

  • Modules API

  • App API (Deepkit Framework)

If @deepkit/injector is to be used without the Deepkit framework, the first two variants are recommended.

5.7.1. Injector API

The Injector API has already been introduced in the introduction to Dependency Injection. It is characterized by very simple usage via a single class, InjectorContext, which creates a single DI container, and it is particularly suitable for simpler applications without modules.

import { InjectorContext } from '@deepkit/injector';

const injector = InjectorContext.forProviders([
    UserRepository,
    HttpClient,
]);

const repository = injector.get(UserRepository);

The injector object in this case is the dependency injection container. The function InjectorContext.forProviders takes an array of providers. See the Dependency Injection Providers section to learn what values can be passed.

5.7.2. Modules API

A more complex API is the InjectorModule class, which allows storing providers in different modules, creating multiple encapsulated DI containers per module. It also allows using configuration classes per module, which makes it easier to provide automatically validated configuration values to the providers. Modules can import each other and export providers in order to build a hierarchy and a nicely separated architecture.

This API should be used if the application is more complex and the Deepkit framework is not used.

import { InjectorModule, InjectorContext } from '@deepkit/injector';

const lowLevelModule = new InjectorModule([HttpClient])
     .addExport(HttpClient);

const rootModule = new InjectorModule([UserRepository])
     .addImport(lowLevelModule);

const injector = new InjectorContext(rootModule);

The injector object in this case is the dependency injection container. Providers can be split into different modules and then imported again in different places using module imports. This creates a natural hierarchy that maps the hierarchy of the application or architecture. The InjectorContext should always be given the top module in the hierarchy, also called root module or app module. The InjectorContext then only has an intermediary task: calls to injector.get() are simply forwarded to the root module. However, it is also possible to get providers from non-root modules by passing the module as a second argument.

const repository = injector.get(UserRepository);

const httpClient = injector.get(HttpClient, lowLevelModule);

All non-root modules are encapsulated by default, so that all providers in this module are only available to itself. If a provider is to be available to other modules, this provider must be exported. By exporting, the provider moves to the parent module of the hierarchy and can be used that way.

To export all providers by default to the top level, the root module, the option forRoot can be used. This allows all providers to be used by all other modules.

const lowLevelModule = new InjectorModule([HttpClient])
     .forRoot(); //export all Providers to the root

5.7.3. App API

Once the Deepkit framework is used, modules are defined with the @deepkit/app API. This is based on the Module API, so the capabilities from there are also available. In addition, it is possible to work with powerful hooks and to define configuration loaders in order to map even more dynamic architectures.

The Framework Modules chapter describes this in more detail.

5.8. Providers

There are several ways to provide dependencies in the Dependency Injection container. The simplest variant is simply the specification of a class. This is also known as short ClassProvider.

InjectorContext.forProviders([
    UserRepository
]);

This represents a special provider, since only the class is specified. All other providers must be specified as object literals.

By default, all providers are marked as singletons, so only one instance exists at any given time. To create a new instance each time a provider is requested, the transient option can be used. Classes are then recreated on every request and factories are executed every time.

InjectorContext.forProviders([
    {provide: UserRepository, transient: true}
]);

5.8.1. ClassProvider

Besides the short ClassProvider there is also the regular ClassProvider, which is an object literal instead of a class.

InjectorContext.forProviders([
    {provide: UserRepository, useClass: UserRepository}
]);

This is equivalent to these two:

InjectorContext.forProviders([
    {provide: UserRepository}
]);

InjectorContext.forProviders([
    UserRepository
]);

It can be used to exchange a provider with another class.

InjectorContext.forProviders([
    {provide: UserRepository, useClass: OtherUserRepository}
]);

In this example, the OtherUserRepository class is now also managed in the DI container and all its dependencies are resolved automatically.

5.8.2. ValueProvider

Static values can be provided with this provider.

InjectorContext.forProviders([
    {provide: OtherUserRepository, useValue: new OtherUserRepository()},
]);

Since not only class instances can be provided as dependencies, any value can be specified as useValue. A symbol or a primitive (string, number, boolean) could also be used as a provider token.

InjectorContext.forProviders([
    {provide: 'domain', useValue: 'localhost'},
]);

Primitive provider tokens must be declared with the Inject type as a dependency.

import { Inject } from '@deepkit/injector';

class EmailService {
    constructor(public domain: Inject<string, 'domain'>) {}
}

The combination of an inject alias and primitive provider tokens can also be used to provide dependencies from packages that do not contain runtime type information.

import { Inject } from '@deepkit/injector';
import { Stripe } from 'stripe';

export type StripeService = Inject<Stripe, '_stripe'>;

InjectorContext.forProviders([
    {provide: '_stripe', useValue: new Stripe},
]);

And then declared on the user side as follows:

class PaymentService {
    constructor(public stripe: StripeService) {}
}

5.8.3. ExistingProvider

A forwarding to an already defined provider can be defined.

InjectorContext.forProviders([
    {provide: OtherUserRepository, useValue: new OtherUserRepository()},
    {provide: UserRepository, useExisting: OtherUserRepository}
]);

5.8.4. FactoryProvider

A function can be used to provide a value for the provider. This function can also contain parameters, which in turn are provided by the DI container. Thus, other dependencies or configuration options are accessible.

InjectorContext.forProviders([
    {provide: OtherUserRepository, useFactory: () => {
        return new OtherUserRepository()
    }},
]);

InjectorContext.forProviders([
    {
        provide: OtherUserRepository,
        useFactory: (domain: RootConfiguration['domain']) => {
            return new OtherUserRepository(domain);
        }
    },
]);

InjectorContext.forProviders([
    Database,
    {
        provide: OtherUserRepository,
        useFactory: (database: Database) => {
            return new OtherUserRepository(database);
        }
    },
]);

5.8.5. InterfaceProvider

In addition to classes and primitives, abstractions (interfaces) can also be provided. This is done via the function provide and is particularly useful if the value to be provided does not contain any type information.

import { provide } from '@deepkit/injector';

interface Connection {
    write(data: Uint16Array): void;
}

class Server {
   constructor (public connection: Connection) {}
}

class MyConnection {
    write(data: Uint16Array): void {}
}

InjectorContext.forProviders([
    Server,
    provide<Connection>(MyConnection)
]);

If multiple providers implement the Connection interface, the last registered provider is used.

As argument for provide(), all other provider types are possible.

const myConnection = {write: (data: any) => undefined};

InjectorContext.forProviders([
    provide<Connection>({useValue: myConnection})
]);

InjectorContext.forProviders([
    provide<Connection>({useFactory: () => myConnection})
]);

5.8.6. Asynchronous Providers

Asynchronous providers are not possible by design, since an asynchronous Dependency Injection container would mean that requesting providers would also be asynchronous, forcing the entire application to be asynchronous at the highest level.

To initialize something asynchronously, this initialization should be moved to the application server bootstrap, because there the events can be asynchronous. Alternatively, an initialization can be triggered manually.

TODO: Explain it better, maybe example

5.9. Constructor/Property Injection

In most cases, constructor injection is used. All dependencies are specified as constructor arguments and are automatically injected by the DI container.

class MyService {
    constructor(protected database: Database) {
    }
}

Optional dependencies should be marked as such, otherwise an error could be triggered if no provider can be found.

class MyService {
    constructor(protected database?: Database) {
    }
}

An alternative to constructor injection is property injection. This is usually used when the dependency is optional or the constructor is otherwise too full. The properties are automatically assigned once the instance is created (and thus the constructor is executed).

import { Inject } from '@deepkit/injector';

class MyService {
    //required
    protected database!: Inject<Database>;

    //or optional
    protected database?: Inject<Database>;
}

5.10. Configuration

The dependency injection container also allows configuration options to be injected. This configuration injection can be received via constructor injection or property injection.

The Module API supports the definition of a configuration definition, which is a regular class. By providing such a class with properties, each property acts as a configuration option. Because of the way classes can be defined in TypeScript, this allows defining a type and default values per property.

class RootConfiguration {
    domain: string = 'localhost';
    debug: boolean = false;
}

const rootModule = new InjectorModule([UserRepository])
     .setConfigDefinition(RootConfiguration)
     .addImport(lowLevelModule);

The configuration options domain and debug can now be used quite conveniently type-safe in providers.

class UserRepository {
    constructor(private debug: RootConfiguration['debug']) {}

    getUsers() {
        if (this.debug) console.debug('fetching users ...');
    }
}

The values of the options themselves can be set via configure().

rootModule.configure({debug: true});

Options that do not have a default value but are still necessary can be provided with a !. This forces the user of the module to provide the value, otherwise an error will occur.

class RootConfiguration {
    domain!: string;
}

5.10.1. Validation

Also, all serialization and validation types from the previous chapters Validation and Serialization can be used to specify in great detail what type and content restrictions an option must have.

class RootConfiguration {
    domain!: string & MinLength<4>;
}

5.10.2. Injection

Configuration options, like other dependencies, can be safely and easily injected through the DI container as shown earlier. The simplest method is to reference a single option using the index access operator:

class WebsiteController {
    constructor(private debug: RootConfiguration['debug']) {}

    home() {
        if (this.debug) console.debug('visit home page');
    }
}

Configuration options can be referenced not only individually, but also as a group. The TypeScript utility type Pick is used for this purpose:

class WebsiteController {
    constructor(private options: Pick<RootConfiguration, 'debug' | 'domain'>) {}

    home() {
        if (this.options.debug) console.debug('visit home page');
    }
}

To get all configuration options, the configuration class can also be referenced directly:

class WebsiteController {
    constructor(private options: RootConfiguration) {}

    home() {
        if (this.options.debug) console.debug('visit home page');
    }
}

However, it is recommended to reference only the configuration options that are actually used. This not only simplifies unit tests, but also makes it easier to see what is actually needed from the code.

5.11. Scopes

By default, all providers of the DI container are singletons and are therefore instantiated only once. This means that in the example of UserRepository there is always only one instance of UserRepository during the entire runtime. At no time is a second instance created, unless the user does this manually with the "new" keyword.

However, there are various use cases where a provider should only be instantiated for a short time or only during a specific event. Such an event could be, for example, an HTTP request or an RPC call. This would mean that a new instance is created for each event and after this instance is no longer used it is automatically removed (by the garbage collector).

An HTTP request is a classic example of a scope. For example, providers such as a session, a user object, or other request-related providers can be registered to this scope. To create a scope, simply choose an arbitrary scope name and then specify it with the providers.

import { InjectorContext } from '@deepkit/injector';

class UserSession {}

const injector = InjectorContext.forProviders([
    {provide: UserSession, scope: 'http'}
]);

Once a scope is specified, this provider cannot be obtained directly from the DI container, so the following call will fail:

const session = injector.get(UserSession); //throws

Instead, a scoped DI container must be created. This would happen every time an HTTP request comes in:

const httpScope = injector.createChildScope('http');

Providers that are also registered in this scope can now be requested on this scoped DI container, as well as all providers that have not defined a scope.

const session = httpScope.get(UserSession); //works

Since all providers are singleton by default, each call to get(UserSession) will always return the same instance per scoped container. If you create multiple scoped containers, multiple UserSessions will be created.

Scoped DI containers have the ability to set values dynamically from the outside. For example, in an HTTP scope, it is easy to set the HttpRequest and HttpResponse objects.

const injector = InjectorContext.forProviders([
    {provide: HttpResponse, scope: 'http'},
    {provide: HttpRequest, scope: 'http'},
]);

httpServer.on('request', (req, res) => {
    const httpScope = injector.createChildScope('http');
    httpScope.set(HttpRequest, req);
    httpScope.set(HttpResponse, res);
});

Applications using the Deepkit framework have by default an http, an rpc, and a cli scope. See respectively the chapter CLI, HTTP, or RPC.

5.12. Setup Calls

Setup calls allow to manipulate the result of a provider. This is useful for example to use another dependency injection variant, the method injection.

Setup calls can only be used with the module API or the app API and are registered above the module.

class UserRepository  {
    private db?: Database;
    setDatabase(db: Database) {
       this.db = db;
    }
}

const rootModule = new InjectorModule([UserRepository])
     .addImport(lowLevelModule);

rootModule.setupProvider(UserRepository).setDatabase(db);

The setupProvider method returns a proxy object of UserRepository on which its methods can be called. Note that these method calls are merely placed in a queue and are not executed at this point. Accordingly, no return value is available.

In addition to method calls, properties can also be set.

class UserRepository  {
    db?: Database;
}

const rootModule = new InjectorModule([UserRepository])
     .addImport(lowLevelModule);

rootModule.setupProvider(UserRepository).db = db;

This assignment is also simply placed in a queue.

The calls or assignments in the queue are executed on the actual result of the provider as soon as it is created: with a ClassProvider they are applied to the class instance as soon as the instance is created, with a FactoryProvider to the result of the factory, and with a ValueProvider to the provided value.

To reference not only static values, but also other providers, the function injectorReference can be used. This function returns a reference to a provider, which is also requested by the DI container when the setup calls are executed.

class Database {}

class UserRepository  {
    db?: Database;
}

const rootModule = new InjectorModule([UserRepository, Database])
rootModule.setupProvider(UserRepository).db = injectorReference(Database);

Abstractions/Interfaces

Setup calls can also be assigned to an interface.

rootModule.setupProvider<DatabaseInterface>().logging = logger;

6. Event System

An event system allows application components in the same process to communicate with each other by sending and listening for events. It helps modularize the code by sending messages between functions that do not know about each other directly.

An application or library offers the possibility to execute additional functions at certain points during execution. These additional functions register themselves as so-called event listeners.

An event can be multifaceted:

  • The application goes up or down.

  • A new user has been created or deleted.

  • An error was thrown.

  • A new HTTP request has come in.

Deepkit Framework and its libraries already offer various events to which the user can listen and react. However, any number of custom events can be created to make the application modularly extensible.

Below is an example of the low-level API of @deepkit/event. When using Deepkit framework, event listener registration is not done via EventDispatcher directly but via modules.

import { EventDispatcher, EventToken } from '@deepkit/event';

const dispatcher = new EventDispatcher();
const MyEvent = new EventToken('my-event');

dispatcher.listen(MyEvent, (event) => {
    console.log('MyEvent triggered!');
});
dispatcher.dispatch(MyEvent);

6.1. Installation

Since Deepkit’s event system is based on Runtime Types, it is necessary to have @deepkit/type already installed correctly. See Runtime Type Installation.

If this is done successfully, @deepkit/event can be installed, or the Deepkit framework, which already uses the library under the hood.

npm install @deepkit/event

Note that @deepkit/event for the controller API is based on TypeScript decorators and this feature must be enabled accordingly with experimentalDecorators once the controller API is used.

file: tsconfig.json

{
  "compilerOptions": {
    "module": "CommonJS",
    "target": "es6",
    "moduleResolution": "node",
    "experimentalDecorators": true
  },
  "reflection": true
}

Once the library is installed, its API can be used directly.

6.2. Event Token

At the heart of the event system are the event tokens. They are objects that define the unique event ID and the event type. An event can be triggered and an event can be listened to via an event token. Conceptually, the person who triggers the event of an event token is also the owner of this event token. The event token decides accordingly which data is available at the event and whether asynchronous event listeners are allowed.

const MyEvent = new EventToken('my-event');

TODO asynchronous

6.3. Event Types

TODO

6.4. Propagation

TODO. event.stop()

6.5. Dependency Injection

TODO

7. CLI

Command-line interface (CLI) programs are programs that interact via the terminal in the form of text input and text output. The advantage of this way of interacting with the application is that only a terminal needs to exist, either locally or via an SSH connection.

A CLI application in Deepkit has full access to the DI container and can thus access all providers and configuration options.

The arguments and options of the CLI application are controlled by method parameters via TypeScript types and are automatically serialized and validated.

CLI is one of three entry points to a Deepkit Framework application. In the Deepkit framework, the application is always launched via a CLI program that is written in TypeScript by the user. Therefore, there is no Deepkit-specific global CLI tool to launch a Deepkit application: launching the HTTP/RPC server, performing migrations, and running your own commands are all done through the same entry point, the same file. Once the Deepkit framework is used by importing FrameworkModule from @deepkit/framework, the application gets additional commands for the application server, migrations, and more.

The CLI framework allows you to easily register your own commands and is based on simple classes. In fact, it is based on @deepkit/app, a small package intended only for this purpose, which can also be used standalone without the deepkit framework. In this package you can find decorators that are needed to decorate the CLI controller class.

Controllers are managed or instantiated by the Dependency Injection container and can therefore use other providers. See the Dependency Injection chapter for more details.

7.1. Installation

Since CLI programs in Deepkit are based on Runtime Types, it is necessary to have @deepkit/type already installed correctly. See Runtime Type Installation.

If this is done successfully, @deepkit/app can be installed, or the Deepkit framework, which already uses the library under the hood.

npm install @deepkit/app

Note that @deepkit/app is based on TypeScript decorators and this feature must be enabled accordingly with experimentalDecorators.

file: tsconfig.json

{
  "compilerOptions": {
    "module": "CommonJS",
    "target": "es6",
    "moduleResolution": "node",
    "experimentalDecorators": true
  },
  "reflection": true
}

Once the library is installed, its API can be used directly.

7.2. Use

To create a command for your application, you need to create a CLI controller. This is a simple class that has an execute method and is equipped with information about the command.

File: app.ts

#!/usr/bin/env ts-node-script
import { App, cli } from '@deepkit/app';

@cli.controller('test', {
    description: 'My first command'
})
class TestCommand {
    async execute() {
        console.log('Hello World')
    }
}

new App({
    controllers: [TestCommand]
}).run();

In the decorator @cli.controller the unique name of the CLI application is defined as the first argument. Further options like a description can be optionally added in the object at the second position.

This code is already a complete CLI application and can be started this way:

$ ts-node ./app.ts
VERSION
  Node

USAGE
  $ ts-node app.ts [COMMAND]

COMMANDS
  test

You can see that a "test" command is available. To execute this, the name must be passed as an argument:

$ ts-node ./app.ts test
Hello World

It is also possible to make the file executable using chmod +x app.ts, so that the command ./app.ts is sufficient to start it. Note that a so-called shebang is then necessary, i.e. the character combination #! at the beginning of the script. In the example above this is already present: #!/usr/bin/env ts-node-script uses the script mode of ts-node.

$ ./app.ts test
Hello World

In this way, any number of commands can be created and registered. The unique name specified in @cli.controller should be well chosen and allows grouping of commands with the : character (e.g. user:create, user:remove, etc).

7.3. Arguments

To add arguments, new parameters are added to the execute method and decorated with the @arg decorator.

import { cli, arg } from '@deepkit/app';

@cli.controller('test')
class TestCommand {
    async execute(
        @arg name: string
    ) {
        console.log('Hello', name);
    }
}

If you execute this command now without specifying a name, an error will be issued:

$ ./app.ts test
RequiredArgsError: Missing 1 required arg:
name

By using --help you will get more information about the required arguments:

$ ./app.ts test --help
USAGE
  $ ts-node-script app.ts test NAME

Once the name is passed as an argument, the execute method in TestCommand is executed and the name is passed correctly.

$ ./app.ts test "beautiful world"
Hello beautiful world

7.4. Flags

Flags are another way to pass values to your command. Mostly these are optional, but they don't have to be. Parameters decorated with @flag can be passed via --name value or --name=value.

import { flag } from '@deepkit/app';

class TestCommand {
    async execute(
        @flag id: number
    ) {
        console.log('id', id);
    }
}
$ ./app.ts test --help
USAGE
  $ ts-node app.ts test

OPTIONS
  --id=id  (required)

In the help view you can see in the "OPTIONS" that a --id flag is necessary. If you enter this flag correctly, the command will receive this value.

$ ./app.ts test --id 23
id 23

$ ./app.ts test --id=23
id 23

7.4.1. Boolean Flags

Flags have the advantage that they can also be used as valueless switches, for example to activate a certain behavior. As soon as a parameter is marked as an optional boolean, this behavior is activated.

import { flag } from '@deepkit/app';

class TestCommand {
    async execute(
        @flag remove: boolean = false
    ) {
        console.log('delete?', remove);
    }
}
$ ./app.ts test
delete? false

$ ./app.ts test --remove
delete? true

7.4.2. Multiple Flags

To pass multiple values to the same flag, a flag can be marked as an array.

import { flag } from '@deepkit/app';

class TestCommand {
    async execute(
        @flag id: number[] = []
    ) {
        console.log('ids', id);
    }
}
$ ./app.ts test
ids: []

$ ./app.ts test --id 12
ids: [12]

$ ./app.ts test --id 12 --id 23
ids: [12, 23]

7.4.3. Single Character Flags

To allow a flag to be passed as a single character as well, @flag.char('x') can be used.

import { flag } from '@deepkit/app';

class TestCommand {
    async execute(
        @flag.char('o') output: string
    ) {
        console.log('output: ', output);
    }
}
$ ./app.ts test --help
USAGE
  $ ts-node app.ts test

OPTIONS
  -o, --output=output  (required)


$ ./app.ts test --output test.txt
output: test.txt

$ ./app.ts test -o test.txt
output: test.txt

7.5. Optional / Default

The signature of the method execute defines which arguments or flags are optional. If the parameter is optional in the type system, the user does not have to provide it.

class TestCommand {
    async execute(
        @arg name?: string
    ) {
        console.log('Hello', name || 'nobody');
    }
}
$ ./app.ts test
Hello nobody

The same for parameters with a default value:

class TestCommand {
    async execute(
        @arg name: string = 'nobody'
    ) {
        console.log('Hello', name);
    }
}
$ ./app.ts test
Hello nobody

This also applies to flags in the same way.

7.6. Serialization / Validation

All arguments and flags are automatically deserialized based on their types, validated, and can be provided with additional constraints.

Thus, arguments defined as numbers are always guaranteed to be real numbers in the controller, even though the command-line interface is based on text and thus strings. The conversion happens automatically via the soft type conversion feature described in the Serialization chapter.

class TestCommand {
    async execute(
        @arg id: number
    ) {
        console.log('id', id, typeof id);
    }
}
$ ./app.ts test 123
id 123 number

Additional constraints can be defined with the type decorators from @deepkit/type.

import { Positive } from '@deepkit/type';

class TestCommand {
    async execute(
        @arg id: number & Positive
    ) {
        console.log('id', id, typeof id);
    }
}

The type Positive in id indicates that only positive numbers are allowed. If the user passes a negative number, the code in execute will not be executed at all and an error message will be presented.

$ ./app.ts test -123
Validation error in id: Number needs to be positive [positive]

If the number is positive, this works again as before. This additional validation, which is very easy to do, makes the command much more robust against wrong entries. See the chapter Validation for more information.

7.7. Description

To describe a flag or argument, @flag.description or @arg.description can be used respectively.

import { Positive } from '@deepkit/type';

class TestCommand {
    async execute(
        @arg.description('The users identifier') id: number & Positive,
        @flag.description('Delete the user?') remove: boolean = false,
    ) {
        console.log('id', id, typeof id);
    }
}

In the help view, this description appears after the flag or argument:

$ ./app.ts test --help
USAGE
  $ ts-node app.ts test ID

ARGUMENTS
  ID  The users identifier

OPTIONS
  --remove  Delete the user?

7.8. Exit Code

The exit code is 0 by default, which means the command was executed successfully. To change the exit code, a number other than 0 should be returned from the execute method.

@cli.controller('test')
export class TestCommand {
    async execute() {
        console.error('Error :(');
        return 12;
    }
}
$ ./app.ts test
Error :(
$ echo $?
12

7.9. Dependency Injection

The class of the command is managed by the DI Container, so dependencies can be defined that are resolved via the DI Container.

#!/usr/bin/env ts-node-script
import { App, cli } from '@deepkit/app';
import { Logger, ConsoleTransport } from '@deepkit/logger';

@cli.controller('test', {
    description: 'My super first command'
})
class TestCommand {
    constructor(protected logger: Logger) {
    }

    async execute() {
        this.logger.log('Hello World!');
    }
}

new App({
    providers: [{provide: Logger, useValue: new Logger([new ConsoleTransport()])}],
    controllers: [TestCommand]
}).run();

8. HTTP

Processing HTTP requests is one of the most well-known tasks for a server. It converts an input (HTTP request) into an output (HTTP response) and performs a specific task. A client can send data to the server via an HTTP request in a variety of ways, which must be read and handled correctly. In addition to the HTTP body, HTTP query or HTTP header values are also possible. How data is actually processed depends on the server. It is the server that defines where and how the values are to be sent by the client.

The top priority here is not only to correctly execute what the user expects, but to correctly convert (deserialize) and validate any input from the HTTP request.

The pipeline through which an HTTP request passes on the server can be varied and complex. Many simple HTTP libraries pass only the HTTP request and the HTTP response for a given route, and expect the developer to process the HTTP response directly. A middleware API allows the pipeline to be extended as needed.

Express Example

const http = express();
http.get('/user/:id', (request, response) => {
    response.send({id: request.params.id, username: 'Peter'});
});

This is very well tailored for simple use cases, but quickly becomes confusing as the application grows, since all inputs and outputs must be manually serialized or deserialized and validated. Also, consideration must be given to how objects and services such as a database abstraction can be obtained from the application itself. It forces the developer to put an architecture on top of it that maps these mandatory functionalities.

Deepkit’s HTTP library leverages the power of TypeScript and Dependency Injection. Serialization/deserialization and validation of any values happen automatically based on the defined types. It also allows defining routes either via a functional API as in the example above or via controller classes to cover the different needs of an architecture.

It can be used either with an existing HTTP server like Node’s http module or with the Deepkit framework. Both API variants have access to the dependency injection container and can thus conveniently retrieve objects such as a database abstraction and configurations from the application.

Deepkit Example

import { Positive } from '@deepkit/type';
import { http } from '@deepkit/http';

//Functional API
router.get('/user/:id', (id: number & Positive, database: Database) => {
    //id is guaranteed to be a number and positive.
    //database is injected by the DI Container.
    return database.query(User).filter({id}).findOne();
});

//Controller API
class UserController {
    constructor(private database: Database) {}

    @http.GET('/user/:id')
    user(id: number & Positive) {
        return this.database.query(User).filter({id}).findOne();
    }
}

8.1. Installation

Since Deepkit’s HTTP library is based on Runtime Types, it is necessary to have @deepkit/type already installed correctly. See Runtime Type Installation.

If this is done successfully, @deepkit/http can be installed, or the Deepkit framework, which already uses the library under the hood.

npm install @deepkit/http

Note that @deepkit/http for the controller API is based on TypeScript decorators and this feature must be enabled accordingly with experimentalDecorators once the controller API is used.

file: tsconfig.json

{
  "compilerOptions": {
    "module": "CommonJS",
    "target": "es6",
    "moduleResolution": "node",
    "experimentalDecorators": true
  },
  "reflection": true
}

Once the library is installed, its API can be used directly.

8.2. Functional API

The functional API is based on functions and can be registered via the router registry, which can be obtained via the DI container of the app.

import { App } from '@deepkit/app';
import { FrameworkModule } from '@deepkit/framework';
import { HttpRouterRegistry } from '@deepkit/http';

const app = new App({
    imports: [new FrameworkModule]
});

const router = app.get(HttpRouterRegistry);

router.get('/', () => {
    return "Hello World!";
});

app.run();

The router registry can also be obtained in an event listener or in the bootstrap, so that routes can be registered based on modules, configurations, and other providers.

import { App } from '@deepkit/app';
import { FrameworkModule } from '@deepkit/framework';
import { HttpRouterRegistry } from '@deepkit/http';

const app = new App({
    bootstrap: (router: HttpRouterRegistry) => {
        router.get('/', () => {
            return "Hello World!";
        });
    },
    imports: [new FrameworkModule]
});

Once modules are used, functional routes can also be provided dynamically by modules.

import { App, createModule } from '@deepkit/app';
import { FrameworkModule } from '@deepkit/framework';
import { HttpRouterRegistry } from '@deepkit/http';

class MyModule extends createModule({}) {
    override process() {
        const router = this.setupGlobalProvider(HttpRouterRegistry);

        router.get('/', () => {
            return "Hello World!";
        });
    }
}

const app = new App({
    imports: [new FrameworkModule, new MyModule]
});

See Framework Modules to learn more about App Modules.

8.3. Controller API

The controller API is based on classes and can be registered via the App-API under the option controllers.

import { App } from '@deepkit/app';
import { FrameworkModule } from '@deepkit/framework';
import { http } from '@deepkit/http';

class MyPage {
    @http.GET('/')
    helloWorld() {
        return "Hello World!";
    }
}

new App({
    controllers: [MyPage],
    imports: [new FrameworkModule]
}).run();

Once modules are used, controllers can also be provided by modules.

import { App, createModule } from '@deepkit/app';
import { FrameworkModule } from '@deepkit/framework';
import { http } from '@deepkit/http';

class MyPage {
    @http.GET('/')
    helloWorld() {
        return "Hello World!";
    }
}

class MyModule extends createModule({
    controllers: [MyPage]
}) {
}

const app = new App({
    imports: [new FrameworkModule, new MyModule]
});

To provide controllers dynamically (depending on the configuration option, for example), the process hook can be used.

class MyModuleConfiguration {
    debug: boolean = false;
}

class MyModule extends createModule({
    config: MyModuleConfiguration
}) {
    override process() {
        if (this.config.debug) {
            class DebugController {
                @http.GET('/debug/')
                root() {
                    return 'Hello Debugger';
                }
            }
            this.addController(DebugController);
        }
    }
}

See Framework Modules to learn more about App Modules.

8.4. HTTP Server

If Deepkit Framework is used, an HTTP server is already built in. However, the HTTP library can also be used with its own HTTP server without using the Deepkit framework.

import { Server } from 'http';
import { App } from '@deepkit/app';
import { HttpKernel, HttpModule, HttpRequest, HttpResponse } from '@deepkit/http';

const app = new App({
    controllers: [MyPage],
    imports: [new HttpModule]
});

const httpKernel = app.get(HttpKernel);

new Server(
    { IncomingMessage: HttpRequest, ServerResponse: HttpResponse, },
    ((req, res) => {
        httpKernel.handleRequest(req as HttpRequest, res as HttpResponse);
    })
).listen(8080, () => {
    console.log('listen at 8080');
});

8.5. HTTP Client

todo: fetch API, validation, and cast.

8.6. Route Names

Routes can be given a unique name that can be referenced when forwarding. Depending on the API, the way a name is defined differs.

//functional API
router.get({
    path: '/user/:id',
    name: 'userDetail'
}, (id: number) => {
    return {userId: id};
});

//controller API
class UserController {
    @http.GET('/user/:id').name('userDetail')
    userDetail(id: number) {
        return {userId: id};
    }
}

From all routes with a name the URL can be requested by Router.resolveUrl().

import { HttpRouter } from '@deepkit/http';
const router = app.get(HttpRouter);
router.resolveUrl('userDetail', {id: 2}); //=> '/user/2'

8.7. Dependency Injection

The router functions as well as the controller classes and controller methods can define arbitrary dependencies, which are resolved by the dependency injection container. For example, it is possible to conveniently get to a database abstraction or logger.

For example, if a database has been provided as a provider, it can be injected:

class Database {
    //...
}

const app = new App({
    providers: [
        Database,
    ],
});

Functional API:

router.get('/user/:id', async (id: number, database: Database) => {
    return await database.query(User).filter({id}).findOne();
});

Controller API:

class UserController {
    constructor(private database: Database) {}

    @http.GET('/user/:id')
    async userDetail(id: number) {
        return await this.database.query(User).filter({id}).findOne();
    }
}

//alternatively directly in the method
class UserController {
    @http.GET('/user/:id')
    async userDetail(id: number, database: Database) {
        return await database.query(User).filter({id}).findOne();
    }
}

See Dependency Injection for more information.

8.8. Input

All of the following input variations function in the same way for both the functional and the controller API. They allow to read data from an HTTP request in a type-safe and decoupled way. This leads not only to significantly increased security, but also easier unit testing, since, strictly speaking, not even an HTTP request object needs to exist to test the route.

All parameters are automatically converted (deserialized) to the defined TypeScript type and validated. This is done via the @deepkit/type package and its Serialization and Validation features.

For simplicity, all examples with the functional API are shown below.

8.8.1. Path Parameters

Path parameters are values extracted from the URL of the route. The type of the value depends on the type of the associated parameter of the function or method. The conversion is done automatically via the soft type conversion feature described in the Serialization chapter.

router.get('/:text', (text: string) => {
    return 'Hello ' + text;
});
$ curl http://localhost:8080/galaxy
Hello galaxy

If a Path parameter is defined as a type other than string, it will be converted correctly.

router.get('/user/:id', (id: number) => {
    return `${id} ${typeof id}`;
});
$ curl http://localhost:8080/user/23
23 number

Additional validation constraints can also be applied to the types.

import { Positive } from '@deepkit/type';

router.get('/user/:id', (id: number & Positive) => {
    return `${id} ${typeof id}`;
});

All validation types from @deepkit/type can be applied. For this see more in HTTP Validation.

The Path parameters have [^/]+ set as a regular expression by default in the URL matching. The RegExp for this can be customized as follows:

import { HttpRegExp } from '@deepkit/http';
import { Positive } from '@deepkit/type';

router.get('/user/:id', (id: HttpRegExp<number & Positive, '[0-9]+'>) => {
    return `${id} ${typeof id}`;
});

This is only necessary in exceptional cases, because often the types in combination with validation types themselves already correctly restrict possible values.

8.8.2. Query Parameters

Query parameters are values from the URL after the ? character and can be read with the HttpQuery<T> type. The name of the parameter corresponds to the name of the query parameter.

import { HttpQuery } from '@deepkit/http';

router.get('/', (text: HttpQuery<string>) => {
    return `Hello ${text}`;
});
$ curl http://localhost:8080/\?text\=galaxy
Hello galaxy

Query parameters are also automatically deserialized and validated.

import { HttpQuery } from '@deepkit/http';
import { MinLength } from '@deepkit/type';

router.get('/', (text: HttpQuery<string> & MinLength<3>) => {
    return 'Hello ' + text;
});
$ curl http://localhost:8080/\?text\=galaxy
Hello galaxy
$ curl http://localhost:8080/\?text\=ga
error

All validation types from @deepkit/type can be applied. More about this can be found in HTTP Validation.

Warning: Parameter values are not escaped/sanitized. Returning them directly in a string in a route as HTML opens a security hole (XSS). Make sure never to trust external input and filter/sanitize/convert data where necessary.
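To illustrate the point, a minimal escaping helper in plain TypeScript (a sketch only; in practice prefer a vetted library or a template engine that escapes by default):

```typescript
// Minimal HTML escaping for untrusted input before embedding it in HTML.
// The replacement order matters: '&' must be escaped first, otherwise the
// entities produced by the later replacements would be double-escaped.
function escapeHtml(input: string): string {
    return input
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<script>alert(1)</script>'));
// &lt;script&gt;alert(1)&lt;/script&gt;
```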

8.8.3. Query Model

With a large number of query parameters, it can quickly become confusing. To restore order, a model (class or interface) can be used that bundles all possible query parameters.

import { HttpQueries } from '@deepkit/http';

class HelloWorldQuery {
    text!: string;
    page: number = 0;
}

router.get('/', (query: HttpQueries<HelloWorldQuery>) => {
    return 'Hello ' + query.text + ' at page ' + query.page;
});
$ curl "http://localhost:8080/?text=galaxy&page=1"
Hello galaxy at page 1

The properties in the given model can contain all TypeScript types and validation types that @deepkit/type supports. See the chapter Serialization and Validation.

8.8.4. Body

For HTTP methods that allow an HTTP body, a body model can also be specified. The body content type from the HTTP request must be either application/x-www-form-urlencoded, multipart/form-data or application/json so Deepkit can automatically convert this to JavaScript objects.

import { HttpBody } from '@deepkit/http';

class HelloWorldBody {
    text!: string;
}

router.post('/', (body: HttpBody<HelloWorldBody>) => {
    return 'Hello ' + body.text;
});

8.8.5. Header

8.8.6. Stream

Manual Validation Handling

To manually take over the validation of the body model, a special type HttpBodyValidation<T> can be used. It allows to receive also invalid body data and to react very specifically to error messages.

import { HttpBodyValidation } from '@deepkit/http';

class HelloWorldBody {
    text!: string;
}

router.post('/', (body: HttpBodyValidation<HelloWorldBody>) => {
    if (!body.valid()) {
        // Houston, we got some errors.
        const textError = body.getErrorMessageForPath('text');
        return 'Text is invalid, please fix it. ' + textError;
    }

    return 'Hello ' + body.text;
});

As soon as valid() returns false, the values in the specified model may be in a faulty state. This means that the validation has failed. If HttpBodyValidation is not used and an incorrect HTTP request is received, the request would be directly aborted and the code in the function would never be executed. Use HttpBodyValidation only if, for example, error messages regarding the body should be manually processed in the same route.

The properties in the given model can contain all TypeScript types and validation types that @deepkit/type supports. See the chapter Serialization and Validation.

File Upload

A special property type on the body model can be used to allow the client to upload files. Any number of UploadedFile properties can be used.

import { UploadedFile, HttpBody } from '@deepkit/http';
import { readFileSync } from 'fs';

class HelloWorldBody {
    file!: UploadedFile;
}

router.post('/', (body: HttpBody<HelloWorldBody>) => {
    const content = readFileSync(body.file.path);

    return {
        uploadedFile: body.file
    };
})
$ curl http://localhost:8080/ -X POST -H "Content-Type: multipart/form-data" -F "file=@23931.png"
{
    "uploadedFile": {
        "size":6430,
        "path":"/var/folders/pn/40jxd3dj0fg957gqv_nhz5dw0000gn/T/upload_dd0c7241133326bf6afddc233e34affa",
        "name":"23931.png",
        "type":"image/png",
        "lastModifiedDate":"2021-06-11T19:19:14.775Z"
    }
}

By default, the router saves all uploaded files in a temp folder and removes them as soon as the code in the route has been executed. It is therefore necessary to read the file from the path specified in path and save it to a permanent location (local disk, cloud storage, database).
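A minimal Node.js sketch of this pattern (the temp file is simulated here rather than coming from an actual upload, and all paths are illustrative):

```typescript
import { mkdtempSync, writeFileSync, copyFileSync, readFileSync } from 'fs';
import { tmpdir } from 'os';
import { join } from 'path';

// Simulate the uploaded temp file that the router would create
// (in a real route this path would come from body.file.path).
const tempPath = join(mkdtempSync(join(tmpdir(), 'upload-')), 'upload_tmp');
writeFileSync(tempPath, 'file content');

// Copy it to a permanent location before the route returns,
// because the router removes the temp file afterwards.
const permanentPath = join(mkdtempSync(join(tmpdir(), 'storage-')), 'stored.png');
copyFileSync(tempPath, permanentPath);

console.log(readFileSync(permanentPath, 'utf8')); // 'file content'
```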

8.9. Validation

Validation in an HTTP server is mandatory functionality, because it almost always works with data that is not trustworthy. The more places data is validated, the more stable the server is. Validation in HTTP routes can be conveniently expressed via types and validation constraints and is checked by a highly optimized validator from @deepkit/type, so there are no performance problems in this regard. It is therefore highly recommended to use these validation capabilities. Better once too often than once too little.

All inputs such as path parameters, query parameters, and body parameters are automatically validated for the specified TypeScript type. If additional constraints are specified via @deepkit/type types, these are also checked.

import { HttpQuery, HttpQueries, HttpBody } from '@deepkit/http';
import { MinLength } from '@deepkit/type';

router.get('/:text', (text: string & MinLength<3>) => {
    return 'Hello ' + text;
});

router.get('/', (text: HttpQuery<string> & MinLength<3>) => {
    return 'Hello ' + text;
});

interface MyQuery {
     text: string & MinLength<3>;
}

router.get('/', (query: HttpQueries<MyQuery>) => {
    return 'Hello ' + query.text;
});

router.post('/', (body: HttpBody<MyQuery>) => {
    return 'Hello ' + body.text;
});

See Validation for more information on this.

8.10. Output

A route can return various data structures. Some of them are handled in a special way, such as redirects and templates, and others, such as simple objects, are simply sent as JSON.

8.10.1. JSON

By default, normal JavaScript values are returned to the client as JSON with the header application/json; charset=utf-8.

router.get('/', () => {
    // will be sent as application/json
    return {hello: 'world'}
});

If an explicit return type is specified for the function or method, the data is serialized to JSON with the Deepkit JSON Serializer according to this type.

interface ResultType {
    hello: string;
}

router.get('/', (): ResultType => {
    // will be sent as application/json and additionalProperty is dropped
    return {hello: 'world', additionalProperty: 'value'};
});

8.10.2. HTML

To send HTML there are two possibilities: either the HtmlResponse object or the template engine with TSX.

import { HtmlResponse } from '@deepkit/http';

router.get('/', () => {
    // will be sent as Content-Type: text/html
    return new HtmlResponse('<b>Hello World</b>');
});
router.get('/', () => {
    // will be sent as Content-Type: text/html
    return <b>Hello World</b>;
});

The template engine variant with TSX has the advantage that used variables are automatically HTML-escaped. See also Template.

8.10.3. Custom Content

Besides HTML and JSON it is also possible to send text or binary data with a specific content type. This is done via the object Response.

import { Response } from '@deepkit/http';

router.get('/', () => {
    return new Response('<title>Hello World</title>', 'text/xml');
});

8.10.4. HTTP Errors

By throwing various HTTP errors, it is possible to immediately interrupt the processing of an HTTP request and output the corresponding HTTP status of the error.

import { HttpNotFoundError } from '@deepkit/http';

router.get('/user/:id', async (id: number, database: Database) => {
    const user = await database.query(User).filter({id}).findOneOrUndefined();
    if (!user) throw new HttpNotFoundError('User not found');
    return user;
});

By default, all errors are returned to the client as JSON. This behavior can be customized in the event system under the event httpWorkflow.onControllerError. See the section HTTP Events.

Error class                  Status

HttpBadRequestError          400
HttpUnauthorizedError        401
HttpAccessDeniedError        403
HttpNotFoundError            404
HttpMethodNotAllowedError    405
HttpNotAcceptableError       406
HttpTimeoutError             408
HttpConflictError            409
HttpGoneError                410
HttpTooManyRequestsError     429
HttpInternalServerError      500
HttpNotImplementedError      501

The error HttpAccessDeniedError is a special case. As soon as it is thrown, the HTTP workflow (see HTTP Events) does not jump to controllerError but to accessDenied.

Custom HTTP errors can be created and thrown with createHttpError.

export class HttpMyError extends createHttpError(412, 'My Error Message') {
}

8.10.5. Additional Headers

To modify the headers of an HTTP response, additional methods can be called on the Response, JsonResponse, and HtmlResponse objects.

import { Response } from '@deepkit/http';

router.get('/', () => {
    return new Response('Access Denied', 'text/plain')
        .header('X-Reason', 'unknown')
        .status(403);
});

8.10.6. Redirect

To return a 301 or 302 redirect as a response, Redirect.toRoute or Redirect.toUrl can be used.

import { Redirect } from '@deepkit/http';

router.get({path: '/', name: 'homepage'}, () => {
    return <b>Hello World</b>;
});

router.get({path: '/registration/complete'}, () => {
    return Redirect.toRoute('homepage');
});

The Redirect.toRoute method uses the name of the route. How to set a route name can be seen in the section HTTP Route Name. If this referenced route (query or path) contains parameters, they can be specified via the second argument:

router.get({path: '/user/:id', name: 'user_detail'}, (id: number) => {

});

router.post('/user', (user: HttpBody<User>) => {
    //... store user and redirect to its detail page
    return Redirect.toRoute('user_detail', {id: 23});
});

Alternatively, you can redirect to a URL with Redirect.toUrl.

router.post('/user', (user: HttpBody<User>) => {
    //... store user and redirect to its detail page
    return Redirect.toUrl('/user/' + 23);
});

By default, both use a 302 redirect. This can be customized via the statusCode argument.

8.11. Scope

All HTTP controllers and functional routes are managed within the http dependency injection scope. HTTP controllers are accordingly instantiated once per HTTP request. This also means that both can access providers registered in the http scope, including HttpRequest and HttpResponse from @deepkit/http as dependencies. If the Deepkit framework is used, SessionHandler from @deepkit/framework is also available.

import { HttpResponse } from '@deepkit/http';

router.get('/user/:id', (id: number, request: HttpRequest) => {
});

router.get('/', (response: HttpResponse) => {
    response.end('Hello');
});

It can be useful to place providers in the http scope, for example to instantiate services for each HTTP request. Once the HTTP request has been processed, the http-scoped DI container is deleted, allowing the garbage collector (GC) to clean up all its provider instances.

See Dependency Injection Scopes to learn how to place providers in the http scope.

8.12. Events

The HTTP module is based on a workflow engine that provides various event tokens that can be used to hook into the entire process of processing an HTTP request.

The workflow engine is a finite state machine that creates a new state machine instance for each HTTP request and then jumps from position to position. The first position is the start and the last one the response. Additional code can be executed in each position.

(Figure: HTTP workflow)
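This state-machine idea can be illustrated with a small self-contained sketch (a toy model, not Deepkit's actual workflow engine):

```typescript
// Toy illustration of the workflow idea (not Deepkit's actual engine):
// each HTTP request gets its own state machine instance that jumps from
// position to position; additional code can run at every position.
type Position = 'start' | 'route' | 'controller' | 'response';

const transitions: Record<Position, Position | undefined> = {
    start: 'route',
    route: 'controller',
    controller: 'response',
    response: undefined, // last position: the machine stops here
};

function runWorkflow(onPosition: (position: Position) => void): Position[] {
    const visited: Position[] = [];
    let current: Position | undefined = 'start';
    while (current) {
        visited.push(current);
        onPosition(current); // this is where event listeners would hook in
        current = transitions[current];
    }
    return visited;
}

console.log(runWorkflow(() => {})); // ['start', 'route', 'controller', 'response']
```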

Each event token has its own event type with additional information.

Event token                          Description

httpWorkflow.onRequest               When a new request comes in
httpWorkflow.onRoute                 When the route should be resolved from the request
httpWorkflow.onRouteNotFound         When the route is not found
httpWorkflow.onAuth                  When authentication happens
httpWorkflow.onResolveParameters     When route parameters are resolved
httpWorkflow.onAccessDenied          When access is denied
httpWorkflow.onController            When the controller action is called
httpWorkflow.onControllerError       When the controller action threw an error
httpWorkflow.onParametersFailed      When route parameter resolving failed
httpWorkflow.onResponse              When the controller action has been called. This is the place where the result is converted to a response.

Since all HTTP events are based on the workflow engine, its behavior can be modified by using the specified event and jumping there with the event.next() method.

The HTTP module uses its own event listeners on these event tokens to implement HTTP request processing. All these event listeners have a priority of 100, which means that when you listen for an event, your listener is executed first by default (since the default priority is 0). Add a priority above 100 to run after the HTTP module’s event listeners.

For example, suppose you want to hook into the event when a controller is invoked. If a particular controller is to be invoked, we check whether the user has access to it. If so, we continue. If not, we jump to the next workflow item, accessDenied, where the access-denied procedure is then automatically processed further.

import { App } from '@deepkit/app';
import { FrameworkModule } from '@deepkit/framework';
import { HtmlResponse, http, httpWorkflow } from '@deepkit/http';
import { eventDispatcher } from '@deepkit/event';

class MyWebsite {
    @http.GET('/')
    open() {
        return 'Welcome';
    }

    @http.GET('/admin').group('secret')
    secret() {
        return 'Welcome to the dark side';
    }
}

class SecretRouteListeners {
    @eventDispatcher.listen(httpWorkflow.onController)
    onController(event: typeof httpWorkflow.onController.event) {
        if (event.route.groups.includes('secret')) {
            //check here for authentication information like cookie session, JWT, etc.

            //this jumps to the 'accessDenied' workflow state,
            // essentially executing all onAccessDenied listeners.

            //since our listener is called before the HTTP kernel one,
            // the standard controller action will never be called.
            //this calls event.next('accessDenied', ...) under the hood
            event.accessDenied();
        }
    }

    /**
     * We change the default accessDenied implementation.
     */
    @eventDispatcher.listen(httpWorkflow.onAccessDenied)
    onAccessDenied(event: typeof httpWorkflow.onAccessDenied.event): void {
        if (event.sent) return;
        if (event.hasNext()) return;

        event.send(new HtmlResponse('No access to this area.', 403));
    }
}

new App({
    controllers: [MyWebsite],
    listeners: [SecretRouteListeners],
    imports: [new FrameworkModule]
}).run();
$ curl http://localhost:8080/
Welcome
$ curl http://localhost:8080/admin
No access to this area

8.13. Security

8.14. Sessions

8.15. Middleware

HTTP middlewares allow you to hook into the request/response cycle as an alternative to HTTP events. Its API allows you to use all middlewares from the Express/Connect framework.

A middleware can either be a class (which is instantiated by the dependency injection container) or a simple function.

import { HttpMiddleware, httpMiddleware, HttpRequest, HttpResponse } from '@deepkit/http';

class MyMiddleware implements HttpMiddleware {
    async execute(request: HttpRequest, response: HttpResponse, next: (err?: any) => void) {
        response.setHeader('middleware', '1');
        next();
    }
}


function myMiddlewareFunction(request: HttpRequest, response: HttpResponse, next: (err?: any) => void) {
    response.setHeader('middleware', '1');
    next();
}

new App({
    providers: [MyMiddleware],
    middlewares: [
        httpMiddleware.for(MyMiddleware),
        httpMiddleware.for(myMiddlewareFunction),
    ],
    imports: [new FrameworkModule]
}).run();

8.15.1. Global

By using httpMiddleware.for(MyMiddleware) a middleware is registered for all routes, globally.

import { httpMiddleware } from '@deepkit/http';

new App({
    providers: [MyMiddleware],
    middlewares: [
        httpMiddleware.for(MyMiddleware)
    ],
    imports: [new FrameworkModule]
}).run();

8.15.2. Per Controller

You can limit middlewares to one or multiple controllers in two ways: either by using the @http.middleware() decorator or httpMiddleware.for(T).forControllers(). excludeControllers allows you to exclude controllers.

@http.middleware(MyMiddleware)
class MyFirstController {

}
new App({
    providers: [MyMiddleware],
    controllers: [MainController, UsersCommand],
    middlewares: [
        httpMiddleware.for(MyMiddleware).forControllers(MyFirstController, MySecondController)
    ],
    imports: [new FrameworkModule]
}).run();

8.15.3. Per Route Name

forRouteNames, along with its counterpart excludeRouteNames, allows you to filter the execution of a middleware by route names.

class MyFirstController {
    @http.GET('/hello').name('firstRoute')
    myAction() {
    }

    @http.GET('/second').name('secondRoute')
    myAction2() {
    }
}
new App({
    controllers: [MainController, UsersCommand],
    providers: [MyMiddleware],
    middlewares: [
        httpMiddleware.for(MyMiddleware).forRouteNames('firstRoute', 'secondRoute')
    ],
    imports: [new FrameworkModule]
}).run();

8.15.4. Per Action/Route

To execute a middleware only for a certain route, you can either use @http.GET().middleware() or httpMiddleware.for(T).forRoute() where forRoute has multiple options to filter routes.

class MyFirstController {
    @http.GET('/hello').middleware(MyMiddleware)
    myAction() {
    }
}
new App({
    controllers: [MainController, UsersCommand],
    providers: [MyMiddleware],
    middlewares: [
        httpMiddleware.for(MyMiddleware).forRoutes({
            path: 'api/*'
        })
    ],
    imports: [new FrameworkModule]
}).run();

forRoutes() accepts as its first argument several ways to filter routes.

{
    path?: string;
    pathRegExp?: RegExp;
    httpMethod?: 'GET' | 'HEAD' | 'POST' | 'PATCH' | 'PUT' | 'DELETE' | 'OPTIONS' | 'TRACE';
    category?: string;
    excludeCategory?: string;
    group?: string;
    excludeGroup?: string;
}

8.15.5. Path Pattern

path supports wildcard *.

httpMiddleware.for(MyMiddleware).forRoutes({
    path: 'api/*'
})

8.15.6. RegExp

httpMiddleware.for(MyMiddleware).forRoutes({
    pathRegExp: /^api\/.*/
})

8.15.7. HTTP Method

Filter all routes by a HTTP method.

httpMiddleware.for(MyMiddleware).forRoutes({
    httpMethod: 'GET'
})

8.15.8. Category

category, along with its counterpart excludeCategory, allows you to filter by route category.

@http.category('myCategory')
class MyFirstController {

}

class MySecondController {
    @http.GET().category('myCategory')
    myAction() {
    }
}
httpMiddleware.for(MyMiddleware).forRoutes({
    category: 'myCategory'
})

8.15.9. Group

group, along with its counterpart excludeGroup, allows you to filter by route group.

@http.group('myGroup')
class MyFirstController {

}

class MySecondController {
    @http.GET().group('myGroup')
    myAction() {
    }
}
httpMiddleware.for(MyMiddleware).forRoutes({
    group: 'myGroup'
})

8.15.10. Per Modules

You can limit the execution of a middleware to a whole module.

httpMiddleware.for(MyMiddleware).forModule(ApiModule)

8.15.11. Per Self Modules

To execute a middleware for all controllers/routes of a module where the middleware was registered use forSelfModules().

const ApiModule = new AppModule({
    controllers: [MainController, UsersCommand],
    providers: [MyMiddleware],
    middlewares: [
        //for all controllers registered of the same module
        httpMiddleware.for(MyMiddleware).forSelfModules(),
    ],
});

8.15.12. Timeout

All middlewares need to execute next() sooner or later. If a middleware does not execute next() within a timeout, a warning is logged and the next middleware is executed. To change the default of 4 seconds to something else, use timeout(milliseconds).

const ApiModule = new AppModule({
    controllers: [MainController, UsersCommand],
    providers: [MyMiddleware],
    middlewares: [
        //for all controllers registered of the same module
        httpMiddleware.for(MyMiddleware).timeout(15_000),
    ],
});

8.15.13. Multiple Rules

To combine multiple filters, you can chain method calls.

const ApiModule = new AppModule({
    controllers: [MyController],
    providers: [MyMiddleware],
    middlewares: [
        httpMiddleware.for(MyMiddleware).forControllers(MyController).excludeRouteNames('secondRoute')
    ],
});

8.15.14. Express Middleware

Almost all Express middlewares are supported. Those that access Express-specific request methods are not yet supported.

import * as compression from 'compression';

const ApiModule = new AppModule({
    middlewares: [
        httpMiddleware.for(compression()).forControllers(MyController)
    ],
});

8.16. Resolver

Router supports a way to resolve complex parameter types. For example, given a route such as /user/:id, this id can be resolved to a user object outside the route using a resolver. This further decouples HTTP abstraction and route code, further simplifying testing and modularity.

import { App } from '@deepkit/app';
import { FrameworkModule } from '@deepkit/framework';
import { http, RouteParameterResolverContext, RouteParameterResolver } from '@deepkit/http';

class UserResolver implements RouteParameterResolver {
    constructor(protected database: Database) {}

    async resolve(context: RouteParameterResolverContext) {
        if (!context.parameters.id) throw new Error('No :id given');
        return await this.database.getUser(parseInt(context.parameters.id, 10));
    }
}

@http.resolveParameter(User, UserResolver)
class MyWebsite {
    @http.GET('/user/:id')
    getUser(user: User) {
        return 'Hello ' + user.username;
    }
}

new App({
    controllers: [MyWebsite],
    providers: [Database, UserResolver],
    imports: [new FrameworkModule]
})
    .run();

The @http.resolveParameter decorator specifies which class is resolved with which resolver, here the class User with the UserResolver. As soon as User is specified as a parameter type in the function or method, the resolver is used to provide it.

If @http.resolveParameter is specified at the class, all methods of this class get this resolver. The decorator can also be applied per method:

class MyWebsite {
    @http.GET('/user/:id').resolveParameter(User, UserResolver)
    getUser(user: User) {
        return 'Hello ' + user.username;
    }
}

Also, the functional API can be used:

router.add(
    http.GET('/user/:id').resolveParameter(User, UserResolver),
    (user: User) => {
        return 'Hello ' + user.username;
    }
);

The User object does not necessarily have to depend on a parameter. It could just as well depend on a session or an HTTP header, and only be provided when the user is logged in. In RouteParameterResolverContext a lot of information about the HTTP request is available, so that many use cases can be mapped.

In principle, it is also possible to have complex parameter types provided via the Dependency Injection container from the http scope, since these are also available in the route function or method. However, this has the disadvantage that no asynchronous function calls can be used, since the DI container is synchronous throughout.

9. RPC

RPC stands for Remote Procedure Call and allows to call functions (procedures) on a remote server as if it were a local function. In contrast to HTTP client-server communication, the assignment is not done via the HTTP method and a URL, but the function name. The data to be sent is passed as normal function arguments and the result of the function call on the server is returned to the client.

The advantage of RPC is that the client-server abstraction is more lightweight, since no headers, URLs, query strings or the like are used. The disadvantage is that functions on a server via RPC cannot be called easily by a browser and it often requires a special client.

A key feature of RPC is that the data between the client and server is automatically serialized and deserialized. For this reason, type-safe RPC clients are usually possible. Some RPC frameworks therefore force users to provide the types (parameter types and return types) in a specific format. This can be in the form of a custom DSL as in gRPC (Protocol Buffers) and GraphQL with a code generator, or in the form of a JavaScript schema builder. Additional validation of the data can also be provided by the RPC framework, but is not supported by all.

In Deepkit RPC, the types from the functions are extracted from the TypeScript code itself (see Runtime Types), so there is no need to use a code generator or define them manually. Deepkit supports automatic serialization and deserialization of parameters and results. Once additional constraints are defined from Validation, they are also automatically validated. This makes communication via RPC extremely type-safe and effective. The support for streaming via rxjs in Deepkit RPC also makes this RPC framework a suitable tool for real-time communication.

To illustrate the concept behind RPC, consider the following code:

//server.ts
class Controller {
    hello(title: string): string {
        return 'Hello ' + title
    }
}

A method like hello is implemented normally within a class on the server and can then be called from a remote client.

//client.ts
const client = new RpcClient('localhost');
const controller = client.controller<Controller>();

const result = await controller.hello('World'); // => 'Hello World';

Since RPC is fundamentally based on asynchronous communication (mostly over HTTP, but also over TCP or WebSockets), all function calls in TypeScript itself are converted to a Promise. With a corresponding await the result can be received asynchronously.
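How such a client can turn method calls into promises can be sketched with a Proxy. This is a simplified illustration with a simulated in-process transport, not Deepkit's actual implementation:

```typescript
// Simplified sketch of the client mechanics (not Deepkit's implementation):
// every method call on the proxy is sent over a transport as name + arguments
// and returns a Promise that resolves with the server's result.
class Controller {
    hello(title: string): string {
        return 'Hello ' + title;
    }
}

// what the client sees: every action returns a Promise
interface RemoteController {
    hello(title: string): Promise<string>;
}

// simulated transport; in reality this would go over WebSocket, TCP or HTTP
async function send(action: string, args: unknown[]): Promise<unknown> {
    const controller = new Controller() as any;
    return controller[action](...args);
}

function createRpcProxy<T>(): T {
    return new Proxy({}, {
        get(_target, action) {
            return (...args: unknown[]) => send(String(action), args);
        },
    }) as unknown as T;
}

const remote = createRpcProxy<RemoteController>();
remote.hello('World').then(result => console.log(result)); // 'Hello World'
```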

9.1. Isomorphic TypeScript

As soon as a project uses TypeScript in the client (mostly frontend) and server (backend), it is called Isomorphic TypeScript. A type-safe RPC framework based on TypeScript’s types is then particularly profitable for such a project, since types can be shared between client and server.

To take advantage of this, types that are used on both sides should be moved into a separate file or package. Importing them on the respective side then merges them again.

//shared.ts
export class User {
    id!: number;
    username!: string;
}

interface UserControllerApi {
    getUser(id: number): Promise<User>;
}

//server.ts
import { User } from './shared';
class UserController implements UserControllerApi {
    async getUser(id: number): Promise<User> {
        return await database.query(User).filter({id}).findOne();
    }
}

//client.ts
import { UserControllerApi } from './shared';
const controller = client.controller<UserControllerApi>();
const user = await controller.getUser(2); // => User

The interface UserControllerApi acts here as a contract between client and server. The server must implement this correctly and the client can consume it.

Backward compatibility can be implemented in the same way as for a normal local API: either new parameters are marked as optional or a new method is added.

While it is also possible to directly import UserController via import type { UserController } from './server.ts', this has other disadvantages, such as no support for nominal types (which means that class instances cannot be checked with instanceof).

9.2. Installation

Since Deepkit RPC is based on Runtime Types, it is necessary to have @deepkit/type already installed correctly. See Runtime Type Installation.

If this is done successfully, @deepkit/rpc can be installed or the Deepkit framework which already uses the library under the hood.

npm install @deepkit/rpc

Note that controller classes in @deepkit/rpc are based on TypeScript decorators and this feature must be enabled accordingly with experimentalDecorators.

The @deepkit/rpc package must be installed on the server and client if both have their own package.json.

To communicate with the server via TCP, the @deepkit/rpc-tcp package must be installed in the client and server.

For WebSocket communication, the package is also needed on the server. The client in the browser, on the other hand, uses WebSocket from the official standard.

npm install @deepkit/rpc-tcp

As soon as the client is to be used via WebSocket in an environment where WebSocket is not available (for example NodeJS), the package ws is required in the client.

npm install ws

9.3. Use

Below is a fully functional example based on WebSockets and the low-level API of @deepkit/rpc. Once the Deepkit framework is used, controllers are provided via app modules and no RpcKernel is instantiated manually.

file: server.ts

import { rpc, RpcKernel } from '@deepkit/rpc';
import { RpcWebSocketServer } from '@deepkit/rpc-tcp';

@rpc.controller('myController')
export class Controller {
    @rpc.action()
    hello(title: string): string {
        return 'Hello ' + title;
    }
}

const kernel = new RpcKernel();
kernel.registerController(Controller);
const server = new RpcWebSocketServer(kernel, 'localhost:8081');
server.start();

file: client.ts

import { RpcWebSocketClient } from '@deepkit/rpc';
import type { Controller } from './server';

async function main() {
    const client = new RpcWebSocketClient('localhost:8081');
    const controller = client.controller<Controller>('myController');

    const result = await controller.hello('World');
    console.log('result', result);

    client.disconnect();
}

main().catch(console.error);

9.4. Server Controller

The "Procedure" in Remote Procedure Call is also called Action. Such an action is defined as a method in a class and marked with the @rpc.action decorator. The class itself is marked as controller by the @rpc.controller decorator and assigned a unique name. This name is then referenced in the client to address the correct controller. Any number of controllers can be defined and registered.

import { rpc } from '@deepkit/rpc';

@rpc.controller('myController')
class Controller {
    @rpc.action()
    hello(title: string): string {
        return 'Hello ' + title;
    }

    @rpc.action()
    test(): boolean {
        return true;
    }
}

Only methods that are also marked as @rpc.action() can be accessed by a client.

Types must be explicitly specified and cannot be inferred. This is important because the serializer needs to know exactly what the types are in order to convert them to binary data (BSON) or JSON.

9.5. Client Controller

The normal flow in RPC is that the client can perform functions on the server. However, it is also possible in Deepkit RPC for the server to perform functions on the client. To allow this, the client can also register a controller.

TODO

9.6. Dependency Injection

The controller classes are managed by the dependency injection container of @deepkit/injector. When the Deepkit framework is used, these controllers automatically have access to the providers of the module that provides the controller.

Controllers are instantiated in the Deepkit framework in the dependency injection scope rpc so that all controllers automatically have access to various providers from this scope. These additional providers are HttpRequest (optional), RpcInjectorContext, SessionState, RpcKernelConnection, and ConnectionWriter.

import { RpcKernel, rpc } from '@deepkit/rpc';
import { App } from '@deepkit/app';
import { Database, User } from './database';

@rpc.controller('my')
class Controller {
    constructor(private database: Database) {}

    @rpc.action()
    async getUser(id: number): Promise<User> {
        return await this.database.query(User).filter({id}).findOne();
    }
}

new App({
    providers: [{provide: Database, useValue: new Database}],
    controllers: [Controller],
}).run();

However, as soon as a RpcKernel is instantiated manually, a DI container can also be passed there. The RPC controller is then instantiated via this DI container.

import { RpcKernel, rpc } from '@deepkit/rpc';
import { InjectorContext } from '@deepkit/injector';
import { Database, User } from './database';

@rpc.controller('my')
class Controller {
    constructor(private database: Database) {}

    @rpc.action()
    async getUser(id: number): Promise<User> {
        return await this.database.query(User).filter({id}).findOne();
    }
}

const injector = InjectorContext.forProviders([
    Controller,
    {provide: Database, useValue: new Database},
]);
const kernel = new RpcKernel(injector);
kernel.registerController(Controller);

See Dependency Injection to learn more.

9.7. Nominal Types

When data is received on the client from the function call, it was previously serialized on the server and then deserialized on the client. If classes are now used in the return type of the function, they are reconstructed in the client, but lose their nominal identity and all methods. To counteract this behavior, classes can be registered as nominal types via a unique ID. This should be done for all classes used in an RPC API.
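This identity loss can be reproduced with a plain JSON round-trip, which is essentially what naive deserialization amounts to:

```typescript
// A naive JSON round-trip loses the prototype, and with it the nominal
// identity and all methods/getters of the class.
class User {
    constructor(public firstName: string, public lastName: string) {}
    get fullName(): string {
        return this.firstName + ' ' + this.lastName;
    }
}

const original = new User('Jane', 'Doe');
const received = JSON.parse(JSON.stringify(original));

console.log(original instanceof User); // true
console.log(received instanceof User); // false: just a plain object
console.log(received.fullName);        // undefined: the getter is gone
```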

To register a class it is necessary to use the decorator @entity.name('id').

import { entity } from '@deepkit/type';

@entity.name('user')
class User {
    id!: number;
    firstName!: string;
    lastName!: string;
    get fullName() {
        return this.firstName + ' ' + this.lastName;
    }
}

As soon as this class is now used as the result of a function, its identity is preserved.

const controller = client.controller<Controller>('controller');

const user = await controller.getUser(2);
user instanceof User; //true when @entity.name is used, and false if not

9.8. Error Forwarding

RPC functions can throw errors. These errors are forwarded to the client by default and thrown again there. If custom error classes are used, they should be registered as nominal types. See RPC Nominal Types.

@entity.name('@error:myError')
class MyError extends Error {}

//server
class Controller {
    @rpc.action()
    saveUser(user: User): void {
        throw new MyError('Can not save user');
    }
}

//client
//[MyError] makes sure the class MyError is known in runtime
const controller = client.controller<Controller>('controller', [MyError]);

try {
    await controller.getUser(2);
} catch (e) {
    if (e instanceof MyError) {
        //ops, could not save user
    } else {
        //all other errors
    }
}

9.9. Security

By default, all RPC functions can be called from any client, and the peer-to-peer communication feature is activated. To control exactly which client is allowed to do what, the class RpcKernelSecurity can be overridden.

import { RpcKernelSecurity, Session, RpcControllerAccess } from '@deepkit/rpc';

//contains default implementations
class MyKernelSecurity extends RpcKernelSecurity {
    async hasControllerAccess(session: Session, controllerAccess: RpcControllerAccess): Promise<boolean> {
        return true;
    }

    async isAllowedToRegisterAsPeer(session: Session, peerId: string): Promise<boolean> {
        return true;
    }

    async isAllowedToSendToPeer(session: Session, peerId: string): Promise<boolean> {
        return true;
    }

    async authenticate(token: any): Promise<Session> {
        throw new Error('Authentication not implemented');
    }

    transformError(err: Error) {
        return err;
    }
}

To use this, either an instance of it is passed to the RpcKernel:

const kernel = new RpcKernel(undefined, new MyKernelSecurity());

Or, in the case of a Deepkit Framework application, the class RpcKernelSecurity is overridden with a provider.

import { App } from '@deepkit/app';
import { RpcKernelSecurity } from '@deepkit/rpc';
import { FrameworkModule } from '@deepkit/framework';

new App({
    controllers: [MyRpcController],
    providers: [
        {provide: RpcKernelSecurity, useClass: MyKernelSecurity}
    ],
    imports: [new FrameworkModule]
}).run();

9.9.1. Authentication / Session

The session object is by default an anonymous session, which means that the client has not authenticated itself. As soon as it wants to authenticate, the authenticate method is called. The token that the authenticate method receives comes from the client and can have any value.

Once the client sets a token, authentication is performed as soon as the first RPC function is called or client.connect() is invoked manually.

const client = new RpcWebSocketClient('localhost:8081');
client.token.set('123456789');

const controller = client.controller<Controller>('myController');

Here RpcKernelSecurity.authenticate receives the token 123456789 and can return a corresponding session. This returned session is then passed to all other methods such as hasControllerAccess.

import { Session, RpcKernelSecurity, RpcControllerAccess } from '@deepkit/rpc';

class UserSession extends Session {
}

class MyKernelSecurity extends RpcKernelSecurity {
    async hasControllerAccess(session: Session, controllerAccess: RpcControllerAccess): Promise<boolean> {
        if (controllerAccess.controllerClassType === MySecureController) {
            //MySecureController requires UserSession
            return session instanceof UserSession;
        }
        return true;
    }

    async authenticate(token: any): Promise<Session> {
        if (token === '123456789') {
            return new UserSession('username', token);
        }
        throw new Error('Authentication failed');
    }
}

9.9.2. Controller Access

The hasControllerAccess method can be used to determine whether a client is allowed to execute a specific RPC function. This method is executed on every RPC function call. If it returns false, access is denied and an error is thrown on the client.

RpcControllerAccess contains various useful information about the RPC function:

interface RpcControllerAccess {
    controllerName: string;
    controllerClassType: ClassType;
    actionName: string;
    actionGroups: string[];
    actionData: { [name: string]: any };
}

Groups and additional data can be set via the decorator @rpc.action():

class Controller {
    @rpc.action().group('secret').data('role', 'admin')
    saveUser(user: User): void {
    }
}


class MyKernelSecurity extends RpcKernelSecurity {
    async hasControllerAccess(session: Session, controllerAccess: RpcControllerAccess): Promise<boolean> {
        if (controllerAccess.actionGroups.includes('secret')) {
            //todo: check
            return false;
        }
        return true;
    }
}

9.9.3. Transform Error

Since thrown errors are automatically forwarded to the client with all their information, such as the error message and the stack trace, sensitive information could be published unintentionally. To change this, the thrown error can be modified in the transformError method.

class MyKernelSecurity extends RpcKernelSecurity {
    transformError(error: Error) {
        //wrap in new error
        return new Error('Something went wrong: ' + error.message);
    }
}

Note that once the error is converted to a generic error, the complete stack trace and the identity of the error are lost. Accordingly, no instanceof checks can be used on the error in the client.
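
This loss of identity can be demonstrated in plain TypeScript, without any Deepkit APIs:

```typescript
// Wrapping an error in a new generic Error discards its prototype chain,
// so instanceof no longer matches on the client side.
class MyError extends Error {}

const original = new MyError('connection to db failed');
const transformed = new Error('Something went wrong: ' + original.message);

console.log(original instanceof MyError);    // true
console.log(transformed instanceof MyError); // false: identity is lost
```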

If Deepkit RPC is used between two microservices, and thus the client and server are under complete control of the developer, then transforming the error is rarely necessary. If, on the other hand, the client runs in the browser of an unknown user, care should be taken in transformError as to what information is revealed. If in doubt, each error should be transformed into a generic Error to ensure that no internal details are leaked. Logging the original error at this point would then be a good idea.

9.9.4. Dependency Injection

If the Deepkit RPC library is used directly, the RpcKernelSecurity class is instantiated by you. If this class needs a database or a logger, these must be passed in manually.

When the Deepkit framework is used, the class is instantiated by the Dependency Injection container and thus automatically has access to all other providers in the application.

9.10. Streaming RxJS

TODO

9.11. Transport Protocol

Deepkit RPC supports several transport protocols. WebSockets is the protocol that has the best compatibility (since browsers support it) while supporting all features like streaming. TCP is usually faster and is great for communication between servers (microservices) or non-browser clients.

Deepkit’s RPC HTTP protocol is a variant that is particularly easy to debug in the browser, as each function call is an HTTP request, but has its limitations such as no support for RxJS streaming.

9.11.1. HTTP

TODO: Not implemented yet.

9.11.2. WebSockets

Uses RpcWebSocketServer from @deepkit/rpc-tcp on the server and the browser WebSocket API or the Node ws package on the client.

9.11.3. TCP

Uses RpcNetTcpServer and RpcNetTcpClientAdapter from @deepkit/rpc-tcp.

9.12. Peer To Peer

TODO

10. Database

Deepkit provides an ORM that allows to access databases in a modern way. Entities are simply defined using TypeScript types:

import { entity, PrimaryKey, AutoIncrement, Unique, MinLength, MaxLength } from '@deepkit/type';

@entity.name('user')
class User {
    id: number & PrimaryKey & AutoIncrement = 0;
    created: Date = new Date;
    firstName?: string;
    lastName?: string;

    constructor(
        public username: string & Unique & MinLength<2> & MaxLength<16>,
        public email: string & Unique,
    ) {}
}

In doing so, any TypeScript types and validation decorators from Deepkit can be used to fully define the entity. The entity type system is designed so that these types or classes can also be used in other areas such as HTTP routes, RPC actions, or the frontend. This prevents, for example, a user from being defined several times across the entire application.

10.1. Installation

Since Deepkit ORM is based on Runtime Types, it is necessary to have @deepkit/type already installed correctly. See Runtime Type Installation.

If this is done successfully, @deepkit/orm itself and a database adapter can be installed.

If classes are to be used as entities, experimentalDecorators must be enabled in tsconfig.json:

{
  "compilerOptions": {
    "experimentalDecorators": true
  }
}

Once the library is installed, a database adapter can be installed and the API of it can be used directly.

10.1.1. SQLite

npm install @deepkit/orm @deepkit/sqlite
import { Database } from '@deepkit/orm';
import { SQLiteDatabaseAdapter } from '@deepkit/sqlite';

//either file-based
const database = new Database(new SQLiteDatabaseAdapter('./example.sqlite'), [User]);
//or in-memory
const database = new Database(new SQLiteDatabaseAdapter(':memory:'), [User]);

10.1.2. MySQL

npm install @deepkit/orm @deepkit/mysql
import { Database } from '@deepkit/orm';
import { MySQLDatabaseAdapter } from '@deepkit/mysql';

const database = new Database(new MySQLDatabaseAdapter({
    host: 'localhost',
    port: 3306
}), [User]);

10.1.3. Postgres

npm install @deepkit/orm @deepkit/postgres
import { Database } from '@deepkit/orm';
import { PostgresDatabaseAdapter } from '@deepkit/postgres';

const database = new Database(new PostgresDatabaseAdapter({
    host: 'localhost',
    port: 5432
}), [User]);

10.1.4. MongoDB

npm install @deepkit/orm @deepkit/bson @deepkit/mongo
import { Database } from '@deepkit/orm';
import { MongoDatabaseAdapter } from '@deepkit/mongo';

const database = new Database(new MongoDatabaseAdapter('mongodb://localhost/mydatabase'), [User]);

10.2. Use

Primarily, the Database object is used. Once instantiated, it can be used throughout the application to query or manipulate data. The connection to the database is initialized lazily.

The Database object is passed an adapter, which comes from the database adapter libraries.

import { SQLiteDatabaseAdapter } from '@deepkit/sqlite';
import { entity, PrimaryKey, AutoIncrement } from '@deepkit/type';
import { Database } from '@deepkit/orm';

async function main() {
    @entity.name('user')
    class User {
        public id: number & PrimaryKey & AutoIncrement = 0;
        created: Date = new Date;

        constructor(public name: string) {
        }
    }

    const database = new Database(new SQLiteDatabaseAdapter('./example.sqlite'), [User]);
    await database.migrate(); //create tables

    await database.persist(new User('Peter'));

    const allUsers = await database.query(User).find();
    console.log('all users', allUsers);
}

main();

10.2.1. Database

10.2.2. Connection

Read Replica

10.3. Entity

An entity is either a class or object literal (interface) and always has a primary key. The entity is decorated with all necessary information using type decorators from @deepkit/type. For example, a primary key is defined as well as various fields and their validation constraints. These fields reflect the database structure, usually a table or a collection.

Special type decorators such as Mapped<'name'> can also be used to map a field name to another name in the database.
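
For illustration, a field mapped to a different column name might look like the following sketch. The PrimaryKey and Mapped types are stubbed here so the snippet is self-contained; in real code they are imported from @deepkit/type and carry metadata at runtime.

```typescript
// Local stubs so this sketch runs standalone; in real code:
// import { PrimaryKey, Mapped } from '@deepkit/type';
type PrimaryKey = unknown;
type Mapped<Name extends string> = unknown;

interface User {
    id: number & PrimaryKey;
    // at runtime the property is `firstName`; in the database the column is `first_name`
    firstName: string & Mapped<'first_name'>;
}

const user: User = { id: 1, firstName: 'Peter' };
console.log(user.firstName); // Peter
```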

10.3.1. Class

import { entity, PrimaryKey, AutoIncrement, Unique, MinLength, MaxLength } from '@deepkit/type';

@entity.name('user')
class User {
    id: number & PrimaryKey & AutoIncrement = 0;
    created: Date = new Date;
    firstName?: string;
    lastName?: string;

    constructor(
        public username: string & Unique & MinLength<2> & MaxLength<16>,
        public email: string & Unique,
    ) {}
}

const database = new Database(new SQLiteDatabaseAdapter(':memory:'), [User]);
await database.migrate();

await database.persist(new User('Peter'));

const allUsers = await database.query(User).find();
console.log('all users', allUsers);

10.3.2. Interface

import { PrimaryKey, AutoIncrement, Unique, MinLength, MaxLength } from '@deepkit/type';

interface User {
    id: number & PrimaryKey & AutoIncrement;
    created: Date;
    firstName?: string;
    lastName?: string;
    username: string & Unique & MinLength<2> & MaxLength<16>;
}

const database = new Database(new SQLiteDatabaseAdapter(':memory:'));
database.register<User>({name: 'user'});

await database.migrate();

const user: User = {id: 0, created: new Date, username: 'Peter'};
await database.persist(user);

const allUsers = await database.query<User>().find();
console.log('all users', allUsers);

10.3.3. Primitive

Primitive data types such as string, number, bigint, and boolean are mapped to common database types. Only the TypeScript type is used.

interface User {
    logins: number;
    username: string;
    pro: boolean;
}

10.3.4. Primary Key

Each entity needs exactly one primary key. Multiple primary keys are not supported.

The base type of a primary key can be arbitrary. Often a number or UUID is used. For MongoDB the MongoId or ObjectID is often used.

For numbers, AutoIncrement is a good choice.

import { PrimaryKey } from '@deepkit/type';

interface User {
    id: number & PrimaryKey;
}

10.3.5. Auto Increment

Fields that should be automatically incremented during insertion are annotated with the AutoIncrement decorator. All adapters support auto-increment values. The MongoDB adapter uses an additional collection to keep track of the counter.

An Auto-Increment field is an automatic counter and can only be applied to a Primary Key. The database automatically ensures that an ID is used only once.

import { PrimaryKey, AutoIncrement } from '@deepkit/type';

interface User {
    id: number & PrimaryKey & AutoIncrement;
}

10.3.6. UUID

Fields that should be of type UUID (v4) are annotated with the decorator UUID. The runtime type is string and mostly binary in the database itself. Use the uuid() function to create a new UUID v4.

import { uuid, UUID, PrimaryKey } from '@deepkit/type';

class User {
    id: UUID & PrimaryKey = uuid();
}

10.3.7. MongoDB ObjectID

Fields that should be of type ObjectID in MongoDB are annotated with the decorator MongoId. The runtime type is string and in the database itself ObjectId (binary).

MongoId fields automatically get a new value when inserted. It is not mandatory to use the field name _id; the field can have any name.

import { PrimaryKey, MongoId } from '@deepkit/type';

class User {
    id: MongoId & PrimaryKey = '';
}

10.3.8. Optional / Nullable

Optional fields are declared as TypeScript types with title?: string or title: string | null. You should use only one of these variants, usually the optional ? syntax, which works with undefined. Both variants result in the database type being NULLABLE for all SQL adapters. The only difference between these variants is that different values are represented at runtime.

In the following example, the changed field is optional and can therefore be undefined at runtime, although it is always represented as NULL in the database.

import { PrimaryKey } from '@deepkit/type';

class User {
    id: number & PrimaryKey = 0;
    modified?: Date;
}

This example shows how the nullable type works. NULL is used both in the database and in the JavaScript runtime. This is more verbose than modified?: Date and is not commonly used.

import { PrimaryKey } from '@deepkit/type';

class User {
    id: number & PrimaryKey = 0;
    modified: Date | null = null;
}

10.3.9. Database Type Mapping

Runtime type               SQLite       MySQL        Postgres           Mongo
string                     text         longtext     text               string
number                     float        double       double precision   int/number
boolean                    integer(1)   boolean      boolean            boolean
date                       text         datetime     timestamp          datetime
array                      text         json         jsonb              array
map                        text         json         jsonb              object
union                      text         json         jsonb              T
uuid                       blob         binary(16)   uuid               binary
ArrayBuffer/Uint8Array/…​   blob         longblob     bytea              binary

With DatabaseField it is possible to map a field to any database type. The type must be a valid SQL statement that is passed unchanged to the migration system.

import { DatabaseField } from '@deepkit/type';

interface User {
    title: string & DatabaseField<{type: 'VARCHAR(244)'}>;
}

To map a field for a specific database, either SQLite, MySQL, or Postgres can be used.

SQLite:

import { SQLite } from '@deepkit/type';

interface User {
    title: string & SQLite<{type: 'text'}>;
}

MySQL:

import { MySQL } from '@deepkit/type';

interface User {
    title: string & MySQL<{type: 'text'}>;
}

Postgres:

import { Postgres } from '@deepkit/type';

interface User {
    title: string & Postgres<{type: 'text'}>;
}

10.3.10. Embedded Types

10.3.11. Default Values

Default values are

10.3.12. Default Expressions

10.3.13. Complex Types

10.3.14. Exclude

10.3.15. Database Specific Column Types

10.4. Session / Unit Of Work

A session is something like a unit of work. It keeps track of everything you do and automatically records the changes whenever commit() is called. It is the preferred way to execute changes in the database because it bundles statements in a way that makes it very fast. A session is very lightweight and can easily be created in a request-response lifecycle, for example.

import { SQLiteDatabaseAdapter } from '@deepkit/sqlite';
import { entity, PrimaryKey, AutoIncrement } from '@deepkit/type';
import { Database } from '@deepkit/orm';

async function main() {

    @entity.name('user')
    class User {
        id: number & PrimaryKey & AutoIncrement = 0;
        created: Date = new Date;

        constructor(public name: string) {
        }
    }

    const database = new Database(new SQLiteDatabaseAdapter(':memory:'), [User]);
    await database.migrate();

    const session = database.createSession();
    session.add(new User('User1'), new User('User2'), new User('User3'));

    await session.commit();

    const users = await session.query(User).find();
    console.log(users);
}

main();

Add new instances to the session with session.add(T) or remove existing instances with session.remove(T). Once you are done with the session object, simply dereference it everywhere so that the garbage collector can remove it.

Changes are automatically detected for entity instances fetched via the Session object.

const users = await session.query(User).find();
for (const user of users) {
    user.name += ' changed';
}

await session.commit();//saves all users

10.4.1. Identity Map

Sessions provide an identity map that ensures there is only ever one JavaScript object per database entry. For example, if you run session.query(User).find() twice within the same session, you get two different arrays, but with the same entity instances in them.

If you add a new entity with session.add(entity1) and retrieve it again, you will get exactly the same entity instance entity1.

Important: Once you start using sessions, you should use their Session.query method instead of Database.query. Only session queries have the identity mapping feature enabled.
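
Conceptually, an identity map is a per-session cache keyed by primary key. The following plain-TypeScript sketch illustrates the idea only; it is not Deepkit ORM's actual implementation:

```typescript
// Minimal identity-map sketch: one JavaScript object per primary key.
class IdentityMap {
    private instances = new Map<number, object>();

    // returns the already-known instance for this primary key, or registers the new one
    resolve<T extends { id: number }>(row: T): T {
        const existing = this.instances.get(row.id);
        if (existing) return existing as T;
        this.instances.set(row.id, row);
        return row;
    }
}

const identityMap = new IdentityMap();
const first = identityMap.resolve({ id: 1, name: 'Peter' });
const second = identityMap.resolve({ id: 1, name: 'Peter' }); // same record fetched again
console.log(first === second); // true: both queries share one instance
```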

10.4.2. Change Detection

10.4.3. Request/Response

10.5. Query

A query is an object that describes how to retrieve or modify data from the database. It has several methods to describe the query and termination methods that execute it. The database adapter can extend the query API in many ways to support database-specific features.

You can create a query using Database.query(T) or Session.query(T). We recommend sessions, as they improve performance.

@entity.name('user')
class User {
    id: number & PrimaryKey & AutoIncrement = 0;
    created: Date = new Date;
    birthdate?: Date;
    visits: number = 0;

    constructor(public username: string) {
    }
}

const database = new Database(...);

//[ { username: 'User1' }, { username: 'User2' }, { username: 'User2' } ]
const users = await database.query(User).select('username').find();

10.5.1. Filter

A filter can be applied to limit the result set.

//simple filters
const users = await database.query(User).filter({username: 'User1'}).find();

//multiple filters, all AND
const users = await database.query(User).filter({username: 'User1', id: 2}).find();

//range filter: $gt, $lt, $gte, $lte (greater than, lower than, ...)
//equivalent to WHERE created < NOW()
const users = await database.query(User).filter({created: {$lt: new Date}}).find();
//equivalent to WHERE id > 500
const users = await database.query(User).filter({id: {$gt: 500}}).find();
//equivalent to WHERE id >= 500
const users = await database.query(User).filter({id: {$gte: 500}}).find();

//set filter: $in, $nin (in, not in)
//equivalent to WHERE id IN (1, 2, 3)
const users = await database.query(User).filter({id: {$in: [1, 2, 3]}}).find();

//regex filter
const users = await database.query(User).filter({username: {$regex: /User[0-9]+/}}).find();

//grouping: $and, $nor, $or
//equivalent to WHERE (username = 'User1') OR (username = 'User2')
const users = await database.query(User).filter({
    $or: [{username: 'User1'}, {username: 'User2'}]
}).find();


//nested grouping
//equivalent to WHERE username = 'User1' OR (username = 'User2' and id > 0)
const users = await database.query(User).filter({
    $or: [{username: 'User1'}, {username: 'User2', id: {$gt: 0}}]
}).find();


//nested grouping
//equivalent to WHERE username = 'User1' AND (created < NOW() OR id > 0)
const users = await database.query(User).filter({
    $and: [{username: 'User1'}, {$or: [{created: {$lt: new Date}}, {id: {$gt: 0}}]}]
}).find();

Equal

Greater / Smaller

RegExp

Grouping AND/OR

In

10.5.2. Select

To narrow down the fields to be received from the database, select('field1') can be used.

const user = await database.query(User).select('username').findOne();
const user = await database.query(User).select('id', 'username').findOne();

It is important to note that as soon as the fields are narrowed down using select, the results are no longer instances of the entity, but only object literals.

const user = await database.query(User).select('username').findOne();
user instanceof User; //false

10.5.3. Order

With orderBy(field, order) the order of the entries can be changed. orderBy can be called multiple times to refine the order further.

const users = await session.query(User).orderBy('created', 'desc').find();
const users = await session.query(User).orderBy('created', 'asc').find();

10.5.4. Pagination

The itemsPerPage() and page() methods can be used to paginate the results. Page starts at 1.

const users = await session.query(User).itemsPerPage(50).page(1).find();

With the alternative methods limit and skip you can paginate manually.

const users = await session.query(User).limit(5).skip(10).find();
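
The two styles express the same thing: itemsPerPage(n).page(p) corresponds to limit(n).skip(n * (p - 1)), with page starting at 1. The following helper is a conceptual sketch of that mapping, not part of the Deepkit API:

```typescript
// Conceptual mapping between the two pagination styles (not a Deepkit API).
function pageToLimitSkip(itemsPerPage: number, page: number): { limit: number; skip: number } {
    // page starts at 1, so page 1 skips nothing
    return { limit: itemsPerPage, skip: itemsPerPage * (page - 1) };
}

console.log(pageToLimitSkip(50, 1)); // { limit: 50, skip: 0 }
console.log(pageToLimitSkip(50, 3)); // { limit: 50, skip: 100 }
```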

10.5.5. Join

By default, references from the entity are neither included in queries nor loaded. To include a join in the query without loading the reference, use join() (left join) or innerJoin(). To include a join in the query and load the reference, use joinWith() or innerJoinWith().

All of the following examples assume these model schemes:

@entity.name('group')
class Group {
    id: number & PrimaryKey & AutoIncrement = 0;
    created: Date = new Date;

    constructor(public name: string) {
    }
}

@entity.name('user')
class User {
    id: number & PrimaryKey & AutoIncrement = 0;
    created: Date = new Date;

    group?: Group & Reference;

    constructor(public username: string) {
    }
}
//select only users with a group assigned (INNER JOIN)
const users = await session.query(User).innerJoin('group').find();
for (const user of users) {
    user.group; //error, since reference was not loaded
}
//select only users with a group assigned (INNER JOIN) and load the relation
const users = await session.query(User).innerJoinWith('group').find();
for (const user of users) {
    user.group.name; //works
}

To modify join queries, use the same methods, but with the use prefix: useJoin, useInnerJoin, useJoinWith or useInnerJoinWith. To end the join query modification, use end() to get back the parent query.

//select only users with a group with name 'admins' assigned (INNER JOIN)
const users = await session.query(User)
    .useInnerJoinWith('group')
        .filter({name: 'admins'})
        .end()  // returns to the parent query
    .find();

for (const user of users) {
    user.group.name; //always 'admins'
}

10.5.6. Aggregation

Aggregation methods allow you to count records and aggregate fields.

The following examples are based on this model scheme:

@entity.name('file')
class File {
    id: number & PrimaryKey & AutoIncrement = 0;
    created: Date = new Date;

    downloads: number = 0;

    category: string = 'none';

    constructor(public path: string & Index) {
    }
}

groupBy allows grouping the result by the specified field.

await database.persist(
    cast<File>({path: 'file1', category: 'images'}),
    cast<File>({path: 'file2', category: 'images'}),
    cast<File>({path: 'file3', category: 'pdfs'})
);

//[ { category: 'images' }, { category: 'pdfs' } ]
await session.query(File).groupBy('category').find();

There are several aggregation methods: withSum, withAverage, withCount, withMin, withMax, withGroupConcat. Each requires a field name as the first argument and an optional second argument to change the alias.

// first let's update some of the records:
await database.query(File).filter({path: 'file1'}).patchOne({$inc: {downloads: 15}});
await database.query(File).filter({path: 'file2'}).patchOne({$inc: {downloads: 5}});

//[{ category: 'images', downloads: 20 },{ category: 'pdfs', downloads: 0 }]
await session.query(File).groupBy('category').withSum('downloads').find();

//[{ category: 'images', downloads: 10 },{ category: 'pdfs', downloads: 0 }]
await session.query(File).groupBy('category').withAverage('downloads').find();

//[ { category: 'images', amount: 2 }, { category: 'pdfs', amount: 1 } ]
await session.query(File).groupBy('category').withCount('id', 'amount').find();

10.5.7. Returning

With returning, additional fields can be requested in the case of changes via patch and delete.

Caution: Not all database adapters return fields atomically. Use transactions to ensure data consistency.

await database.query(User).patchMany({visits: 0});

//{ modified: 1, returning: { visits: [ 5 ] }, primaryKeys: [ 1 ] }
const result = await database.query(User)
    .filter({username: 'User1'})
    .returning('username', 'visits')
    .patchOne({$inc: {visits: 5}});

10.5.8. Find

Returns an array of entries matching the specified filter.

const users: User[] = await database.query(User).filter({username: 'Peter'}).find();

10.5.9. FindOne

Returns an entry that matches the specified filter. If no item is found, an ItemNotFound error is thrown.

const user: User = await database.query(User).filter({username: 'Peter'}).findOne();

10.5.10. FindOneOrUndefined

Returns an entry that matches the specified filter. If no entry is found, undefined is returned.

const query = database.query(User).filter({username: 'Peter'});
const user: User|undefined = await query.findOneOrUndefined();

10.5.11. FindField

Returns a list of values of one field that match the specified filter.

const usernames: string[] = await database.query(User).findField('username');

10.5.12. FindOneField

Returns the value of one field of the first entry that matches the specified filter. If no entry is found, an ItemNotFound error is thrown.

const username: string = await database.query(User).filter({id: 3}).findOneField('username');

10.5.13. Patch

Patch is a change query that patches the records described in the query. The methods patchOne and patchMany finish the query and execute the patch.

patchMany changes all entries in the database that match the specified filter. If no filter is set, the whole table will be changed. Use patchOne to change only one entry at a time.

await database.query(User).filter({username: 'Peter'}).patch({username: 'Peter2'});

await database.query(User).filter({username: 'User1'}).patchOne({birthdate: new Date});
await database.query(User).filter({username: 'User1'}).patchOne({$inc: {visits: 1}});

await database.query(User).patchMany({visits: 0});

10.5.14. Delete

deleteMany deletes all entries in the database that match the specified filter. If no filter is set, the entire table will be deleted. Use deleteOne to delete only one entry at a time.

const result = await database.query(User)
    .filter({visits: 0})
    .deleteMany();

const result = await database.query(User).filter({id: 4}).deleteOne();

10.5.15. Has

Returns whether at least one entry exists in the database.

const userExists: boolean = await database.query(User).filter({username: 'Peter'}).has();

10.5.16. Count

Returns the number of entries.

const userCount: number = await database.query(User).count();

10.5.17. Lift

Lifting a query means adding new functionality to it. This is usually used either by plugins or complex architectures to split larger query classes into several convenient, reusable classes.

import { FilterQuery, Query } from '@deepkit/orm';

class UserQuery<T extends {birthdate?: Date}> extends Query<T>  {
    hasBirthday() {
        const start = new Date();
        start.setHours(0,0,0,0);
        const end = new Date();
        end.setHours(23,59,59,999);

        return this.filter({$and: [{birthdate: {$gte: start}}, {birthdate: {$lte: end}}]} as FilterQuery<T>);
    }
}

await session.query(User).lift(UserQuery).hasBirthday().find();

10.6. Repository

10.7. Relations

Relationships allow you to connect two entities in a certain way. This is usually done in databases using the concept of foreign keys. Deepkit ORM supports relations for all official database adapters.

A relation is annotated with the Reference decorator. Normally a relation also has a reverse relation, which is annotated with the BackReference type, but is only needed if the reverse relation is to be used in a database query. Back references are only virtual.

10.7.1. One To Many

The entity that stores the reference is usually referred to as the owning side, or the one that owns the reference. The following code shows two entities with a one-to-many relationship between User and Post: a User can have multiple Posts. The Post entity owns the post→user relationship. In the database itself there is now a field Post."author" that contains the primary key of the User.

import { SQLiteDatabaseAdapter } from '@deepkit/sqlite';
import { entity, PrimaryKey, AutoIncrement, Reference } from '@deepkit/type';
import { Database } from '@deepkit/orm';

async function main() {
    @entity.name('user').collectionName('users')
    class User {
        id: number & PrimaryKey & AutoIncrement = 0;
        created: Date = new Date;

        constructor(public username: string) {
        }
    }

    @entity.name('post')
    class Post {
        id: number & PrimaryKey & AutoIncrement = 0;
        created: Date = new Date;

        constructor(
            public author: User & Reference,
            public title: string
        ) {
        }
    }

    const database = new Database(new SQLiteDatabaseAdapter(':memory:'), [User, Post]);
    await database.migrate();

    const user1 = new User('User1');
    const post1 = new Post(user1, 'My first blog post');
    const post2 = new Post(user1, 'My second blog post');

    await database.persist(user1, post1, post2);
}

main();

References are not selected in queries by default. See Section 10.5.5, “Join” for details.

10.7.2. Many To One

A reference usually has a reverse reference, called many-to-one. It is only a virtual reference, since it is not reflected in the database itself. A back reference is annotated with BackReference and is mainly used for reflection and query joins. If you add a BackReference from User to Post, you can join Posts directly from User queries.

@entity.name('user').collectionName('users')
class User {
    id: number & PrimaryKey & AutoIncrement = 0;
    created: Date = new Date;

    posts?: Post[] & BackReference;

    constructor(public username: string) {
    }
}
//[ { username: 'User1', posts: [ [Post], [Post] ] } ]
const users = await database.query(User)
    .select('username', 'posts')
    .joinWith('posts')
    .find();

10.7.3. Many To Many

A many-to-many relationship allows you to associate many records with many others. For example, it can be used for users in groups. A user can be in none, one or many groups. Consequently, a group can contain 0, one or many users.

Many-to-many relationships are usually implemented via a pivot entity. The pivot entity contains the actual owning references to the two other entities, and these two entities have back references to the pivot entity.

@entity.name('user')
class User {
    id: number & PrimaryKey & AutoIncrement = 0;
    created: Date = new Date;

    groups?: Group[] & BackReference<{via: typeof UserGroup}>;

    constructor(public username: string) {
    }
}

@entity.name('group')
class Group {
    id: number & PrimaryKey & AutoIncrement = 0;

    users?: User[] & BackReference<{via: typeof UserGroup}>;

    constructor(public name: string) {
    }
}

//the pivot entity
@entity.name('userGroup')
class UserGroup {
    id: number & PrimaryKey & AutoIncrement = 0;

    constructor(
        public user: User & Reference,
        public group: Group & Reference,
    ) {
    }
}

With these entities, you can now create users and groups and connect them to the pivot entity. By using a back reference in User, we can retrieve the groups directly with a User query.

const database = new Database(new SQLiteDatabaseAdapter(':memory:'), [User, Group, UserGroup]);
await database.migrate();

const user1 = new User('User1');
const user2 = new User('User2');
const group1 = new Group('Group1');

await database.persist(user1, user2, group1, new UserGroup(user1, group1), new UserGroup(user2, group1));

//[
//   { id: 1, username: 'User1', groups: [ [Group] ] },
//   { id: 2, username: 'User2', groups: [ [Group] ] }
// ]
const users = await database.query(User)
    .select('username', 'groups')
    .joinWith('groups')
    .find();

To unlink a user from a group, the UserGroup record is deleted:

await database.query(UserGroup)
    .filter({user: user1, group: group1})
    .deleteOne();

10.7.4. One To One

10.7.5. Constraints

On Delete/Update: RESTRICT | CASCADE | SET NULL | NO ACTION | SET DEFAULT
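
The actions above can be sketched conceptually in plain TypeScript (this is an illustration of the semantics only, not Deepkit's API; deleteUser and the row shapes are hypothetical, and only three of the five actions are shown):

```typescript
// Conceptual sketch of ON DELETE semantics when a referenced parent row
// is deleted: RESTRICT refuses, CASCADE removes children, SET NULL clears
// the foreign key. (Hypothetical helper, not Deepkit API.)
type UserRow = { id: number };
type PostRow = { id: number; authorId: number | null };

function deleteUser(
    users: UserRow[], posts: PostRow[], userId: number,
    onDelete: 'RESTRICT' | 'CASCADE' | 'SET NULL'
): { users: UserRow[]; posts: PostRow[] } {
    const referencing = posts.filter(p => p.authorId === userId);
    if (onDelete === 'RESTRICT' && referencing.length > 0) {
        // RESTRICT: refuse the delete while references still exist
        throw new Error('delete restricted: posts still reference this user');
    }
    const remainingUsers = users.filter(u => u.id !== userId);
    if (onDelete === 'CASCADE') {
        // CASCADE: referencing rows are deleted along with the parent
        return { users: remainingUsers, posts: posts.filter(p => p.authorId !== userId) };
    }
    // SET NULL: referencing rows are kept, their foreign key is cleared
    return {
        users: remainingUsers,
        posts: posts.map(p => p.authorId === userId ? { ...p, authorId: null } : p),
    };
}
```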

10.8. Inheritance

10.8.1. Table Per Class

10.8.2. Single Table Inheritance

10.9. Index

10.10. Case Sensitivity

10.11. Character Sets

10.12. Collations

10.13. Batching

10.14. Caching

10.15. Multitenancy

10.16. Events

Events are a way to hook into Deepkit ORM and allow you to write powerful plugins. There are two categories of events: Query events and Unit-of-Work events. Plugin authors typically use both to support both ways of manipulating data.

Events are registered via Database.listen with an event token. It is also possible to register short-lived event listeners on sessions.

import { Query, Database } from '@deepkit/orm';

const database = new Database(...);
database.listen(Query.onFetch, async (event) => {
});

const session = database.createSession();

//will only be executed for this particular session
session.eventDispatcher.listen(Query.onFetch, async (event) => {
});

10.16.1. Query Events

Query events are triggered when a query is executed via Database.query() or Session.query().

Each event has its own additional properties such as the type of entity, the query itself and the database session. You can override the query by setting a new query to Event.query.

import { Query, Database } from '@deepkit/orm';

const database = new Database(...);

const unsubscribe = database.listen(Query.onFetch, async event => {
    //overwrite the query of the user, so something else is executed.
    event.query = event.query.filterField('fieldName', 123);
});

//to remove the hook, call unsubscribe
unsubscribe();

Query has several event tokens:

Event-Token         Description

Query.onFetch       Triggered when records are fetched, e.g. via find(), findOne(), count(), has()
Query.onDeletePre   Triggered before records are deleted via deleteMany()/deleteOne()
Query.onDeletePost  Triggered after records have been deleted via deleteMany()/deleteOne()
Query.onPatchPre    Triggered before records are patched via patchMany()/patchOne()
Query.onPatchPost   Triggered after records have been patched via patchMany()/patchOne()

10.16.2. Unit Of Work Events

Unit-of-work events are triggered when a new session submits changes.

Event-Token                   Description

DatabaseSession.onUpdatePre   Triggered before the session updates records
DatabaseSession.onUpdatePost  Triggered after the session has updated records
DatabaseSession.onInsertPre   Triggered before the session inserts records
DatabaseSession.onInsertPost  Triggered after the session has inserted records
DatabaseSession.onDeletePre   Triggered before the session deletes records
DatabaseSession.onDeletePost  Triggered after the session has deleted records
DatabaseSession.onCommitPre   Triggered before the session commits its changes

10.17. Transactions

A transaction is a sequential group of statements, queries, or operations such as select, insert, update, or delete that are executed as a single unit of work that can be committed or rolled back.

Deepkit supports transactions for all officially supported databases. By default, no transactions are used for any query and database session. To enable transactions, there are two main methods: sessions and callback.

10.17.1. Session Transactions

You can start and assign a new transaction for each session you create. This is the preferred way of interacting with the database, as you can easily pass on the Session object and all queries instantiated by this session will be automatically assigned to its transaction.

A typical pattern is to wrap all operations in a try-catch block and execute commit() on the very last line (which is only executed if all previous commands succeeded) and rollback() in the catch block to roll back all changes as soon as an error occurs.

Although there is an alternative API (see below), all transactions work only with database session objects. To commit outstanding changes from the unit-of-work to the database in a database session, commit() is normally called. In a transactional session, commit() not only commits all pending changes to the database, but also completes ("commits") the transaction, thereby closing the transaction. Alternatively, you can call session.flush() to commit all pending changes without commit and thus without closing the transaction. To commit a transaction without flushing the unit-of-work, use session.commitTransaction().

const session = database.createSession();

//this assigns a new transaction, and starts it with the very next database operation.
session.useTransaction();

try {
    //this query is executed in the transaction
    const users = await session.query(User).find();

    await moreDatabaseOperations(session);

    await session.commit();
} catch (error) {
    await session.rollback();
}

Once commit() or rollback() is executed in a session, the transaction is released. You must then call useTransaction() again if you want to continue in a new transaction.

Please note that as soon as the first database operation is executed in a transactional session, the assigned database connection is permanently and exclusively assigned to the current session object (sticky). Thus, all subsequent operations will be performed on the same connection (and thus, in most databases, on the same database server). Only when either the transactional session is terminated (commit or rollback), the database connection is released again. It is therefore recommended to keep a transaction only as short as necessary.

If a session is already associated with a transaction, a call to session.useTransaction() always returns the same object. Use session.isTransaction() to check if a transaction is associated with the session.

Nested transactions are not supported.

10.17.2. Transaction Callback

An alternative to transactional sessions is database.transaction(callback).

await database.transaction(async (session) => {
    //this query is executed in the transaction
    const users = await session.query(User).find();

    await moreDatabaseOperations(session);
});

The database.transaction(callback) method performs an asynchronous callback within a new transactional session. If the callback succeeds (that is, no error is thrown), the session is automatically committed (and thus its transaction committed and all changes flushed). If the callback fails, the session automatically executes rollback() and the error is propagated.
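
The commit-on-success, rollback-on-error contract described above can be sketched independently of the ORM (the transaction helper and TxSession interface here are hypothetical illustrations, not Deepkit's implementation):

```typescript
// Minimal sketch of the transaction(callback) contract: commit when the
// callback resolves, roll back (and rethrow) when it throws.
// TxSession and transaction are hypothetical, for illustration only.
interface TxSession {
    commit(): Promise<void>;
    rollback(): Promise<void>;
}

async function transaction<T>(
    createSession: () => TxSession,
    callback: (session: TxSession) => Promise<T>
): Promise<T> {
    const session = createSession();
    try {
        const result = await callback(session);
        await session.commit();   // only reached if the callback succeeded
        return result;
    } catch (error) {
        await session.rollback(); // undo all changes, then propagate the error
        throw error;
    }
}
```

Because the helper owns the try/catch, the caller can never forget the rollback path; this is the main advantage over managing a session transaction by hand.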

10.17.3. Isolations

Many databases support different types of transactions. To change the transaction behavior, you can call different methods for the returned transaction object from useTransaction(). The interface of this transaction object depends on the database adapter used. For example, the transaction object returned from a MySQL database has different options than the one returned from a MongoDB database. Use code completion or view the database adapter’s interface to get a list of possible options.

const database = new Database(new MySQLDatabaseAdapter());

const session = database.createSession();
session.useTransaction().readUncommitted();

try {
    //...operations
    await session.commit();
} catch (error) {
    await session.rollback();
}

//or
await database.transaction(async (session) => {
    //this works as long as no database operation has been executed.
    session.useTransaction().readUncommitted();

    //...operations
});

While transactions for MySQL, PostgreSQL, and SQLite work by default, you must first set up MongoDB as a "replica set".

To convert a standard MongoDB instance to a replica set, please refer to the official documentation Convert a Standalone to a Replica Set.

10.18. Naming Strategy

10.19. Locking

10.19.1. Optimistic Locking

10.19.2. Pessimistic Locking

10.20. Custom Types

10.21. Logging

10.22. Migration

10.23. Seeding

10.24. Raw Database Access

10.24.1. SQL

10.24.2. MongoDB

10.25. App Configuration

10.26. Composite Primary Key

A composite primary key means that an entity has several primary key fields, which are automatically combined into one "composite primary key". This way of modeling the database has advantages and disadvantages. We believe that composite primary keys have huge practical disadvantages that do not justify their advantages, so they should be considered bad practice and avoided. Deepkit ORM does not support composite primary keys. In this chapter we explain why and show (better) alternatives.

10.26.1. Disadvantages

Joins are not trivial. Although they are highly optimized in an RDBMS, they represent a constant complexity in applications that can easily get out of hand and lead to performance problems: not only in terms of query execution time, but also in terms of development time.

10.26.2. Joins

Each individual join becomes more complicated as more fields are involved. While many databases have implemented optimizations so that joins over multiple fields are not slower per se, the developer constantly has to think these joins through in detail. Forgetting keys, for example, can lead to subtle errors (since the join will work even without specifying all keys), so the developer needs to know the full composite primary key structure at all times.

10.26.3. Indexes

Indexes with multiple fields (which are composite primary keys) suffer from the problem of field ordering in queries. While database systems can optimize certain queries, complex structures make it difficult to write efficient operations that correctly use all defined indexes. For an index with multiple fields (such as a composite primary key), it is usually necessary to define the fields in the correct order for the database to actually use the index. If the order is not specified correctly (for example, in a WHERE clause), this can easily result in the database not using the index at all and instead performing a full table scan. Knowing which database query optimizes in which way is advanced knowledge that new developers don’t usually have, but is necessary once you start working with composite primary keys so that you get the most out of your database and don’t waste resources.

10.26.4. Migrations

Once you decide that a particular entity needs an additional field to uniquely identify it (which thus becomes part of the composite primary key), all entities in your database that have relationships to that entity must be adjusted as well.

For example, suppose you have an entity user with composite primary key and decide to use a foreign key to this user in different tables, e.g. in a pivot table audit_log, groups and posts. Once you change the primary key of user, all these tables need to be adjusted in a migration as well.

Not only does this make migration files much more complex, but it can also lead to greater downtime when running migration files, since schema changes usually require either a full database lock or at least a table lock. The more tables affected by a large change like an index change, the longer the migration will take. And the larger a table is, the longer the migration takes. Think about the audit_log table. Such tables usually have many records (millions or so), and you have to touch them during a schema change only because you decided to use a composite primary key and add an additional field to the primary key of User. Depending on the size of all these tables, this either makes migration changes unnecessarily more expensive or, in some cases, so expensive that changing the primary key of User is no longer financially justifiable. This usually leads to workarounds (e.g. adding a unique index to the user table) that result in technical debt and sooner or later end up on the legacy list.

For large projects, this can result in enormous downtime (from minutes to hours) and sometimes even the introduction of an entirely new migration abstraction system that essentially copies tables, inserts records into ghost tables, and moves tables back and forth after migration. This added complexity is in turn imposed on any entity that has a relationship to another entity with a composite primary key, and becomes greater the larger your database structure becomes. The problem gets worse with no way to solve it (except by removing the composite primary key entirely).

10.26.5. Findability

If you are a database administrator or data engineer/scientist, you usually work directly on the database and explore the data as you need it. With composite primary keys, any user writing SQL directly must know the correct primary key of all tables involved (and the column order to get correct index optimizations). This added overhead not only complicates data exploration, report generation, etc., but can also lead to errors in older SQL if a composite primary key is suddenly changed. The old SQL is probably still valid and running fine, but suddenly returns incorrect results because the new field in the composite primary key is missing from the join. It is much easier here to have only one primary key. This makes it easier to find data and ensures that old SQL queries will still work correctly if you decide to change the way a user object is uniquely identified, for example.

10.26.6. Revision

Once a composite primary key is used in an entity, refactoring the key can result in significant additional refactoring. Because an entity with a composite primary key typically does not have a single unique field, all filters and links must contain all values of the composite key. This usually means that the code relies on knowing the composite primary key, so all fields must be retrieved (e.g., for URLs such as /user/:key1/:key2). Once this key is changed, all places where this knowledge is explicitly used, such as URLs, custom SQL queries, and other places, must be rewritten.

While ORMs typically create joins automatically without manually specifying the values, they cannot automatically cover refactoring for all other use cases such as URL structures or custom SQL queries, and especially not for places where the ORM is not used at all, such as in reporting systems and all external systems.

10.26.7. ORM Complexity

With the support of composite primary keys, the complexity of the code of a powerful ORM like Deepkit ORM increases tremendously. Not only will the code and maintenance become more complex and therefore more expensive, but there will be more edge cases from users that need to be fixed and maintained. The complexity of the query layer, change detection, migration system, internal relationship tracking, etc. increases significantly. The overall cost associated with building and supporting an ORM with composite primary keys is too high, all things considered, and cannot be justified, which is why Deepkit does not support it.

10.26.8. Advantages

That being said, composite primary keys also have advantages, albeit very superficial ones. Using as few indexes as possible for each table makes writing (inserting/updating) data more efficient, since fewer indexes need to be maintained. It also makes the structure of the model a bit cleaner (since it usually has one less column). However, the difference between a sequentially ordered, automatically incrementing primary key and a non-incrementing primary key is completely negligible these days, since disk space is cheap and the operation is usually an "append-only" operation, which is very fast.

There may certainly be a few edge cases (and for a few very specific database systems) where it is initially better to work with composite primary keys. But even in these systems, it might make more sense overall (considering all the costs) not to use them and to switch to another strategy.

10.26.9. Alternative

An alternative to composite primary keys is to use a single auto-incrementing numeric primary key, usually called "id", and to move the composite primary key into a unique index over multiple fields. Depending on the expected number of rows, the "id" uses either 4 or 8 bytes per record.

By using this strategy, you are no longer forced to think about the problems described above and find a solution, which greatly reduces the cost of ever-growing projects.

The strategy specifically means that each entity has an "id" field, usually at the very beginning, and this field is then used to identify unique rows by default and in joins.

class User {
    id: number & PrimaryKey & AutoIncrement = 0;

    constructor(public username: string) {}
}

As an alternative to a composite primary key, you would use a unique multi-field index instead.

@entity.index(['tenancyId', 'username'], {unique: true})
class User {
    id: number & PrimaryKey & AutoIncrement = 0;

    constructor(
        public tenancyId: number,
        public username: string,
    ) {}
}

Deepkit ORM supports auto-incrementing primary keys out of the box, including for MongoDB. This is the preferred method for identifying records in your database. For MongoDB, however, you can also use the ObjectId (_id: MongoId & PrimaryKey = '') as a simple primary key. An alternative to the numeric, auto-incrementing primary key is a UUID, which works just as well (but has slightly different performance characteristics, since indexing is more expensive).

10.26.10. Summary

Composite primary keys essentially mean that once they are in place, any future changes and practical use will come at a much higher cost. While it looks like a clean architecture at the beginning (because you have one less column), it leads to significant practical costs once the project is actually developed, and the costs continue to increase as the project gets larger.

Looking at the asymmetries between advantages and disadvantages, it is clear that composite primary keys cannot be justified in most cases. The costs are much greater than the benefits. Not only for you as a user, but also for us as the author and maintainer of the ORM code. For this reason, Deepkit ORM does not support composite primary keys.

10.27. Plugins

10.27.1. Soft-Delete

The Soft-Delete plugin allows you to keep database records hidden without actually deleting them. When a record is deleted, it is only marked as deleted instead of being removed. All queries automatically filter on this deleted property, so to the user it feels as if the record is actually deleted.
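
The underlying idea can be sketched in a few lines of plain TypeScript (InMemoryStore is a hypothetical illustration, not the plugin's implementation):

```typescript
// Conceptual sketch of soft delete: deleting only stamps deletedAt,
// and the default query filters those records out.
interface SoftDeletable { name: string; deletedAt?: Date }

class InMemoryStore {
    private records: SoftDeletable[] = [];
    add(record: SoftDeletable) { this.records.push(record); }
    delete(record: SoftDeletable) { record.deletedAt = new Date(); } // mark, don't remove
    find(): SoftDeletable[] { return this.records.filter(r => r.deletedAt === undefined); }
    findWithDeleted(): SoftDeletable[] { return this.records; } // bypass the filter
}
```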

To use the plugin, you must instantiate the SoftDelete class and activate it for each entity.

import { entity, PrimaryKey, AutoIncrement } from '@deepkit/type';
import { SoftDelete } from '@deepkit/orm';

@entity.name('user')
class User {
    id: number & PrimaryKey & AutoIncrement = 0;
    created: Date = new Date;

    // this field is used as indicator whether the record is deleted.
    deletedAt?: Date;

    // this field is optional and can be used to track who/what deleted the record.
    deletedBy?: string;

    constructor(
        public name: string
    ) {
    }
}

const softDelete = new SoftDelete(database);
softDelete.enable(User);

//or disable again
softDelete.disable(User);

Delete

To soft-delete records, use the usual methods: deleteOne or deleteMany in a query, or use the session to delete them. The soft-delete plugin will do the rest automatically in the background.

Restore

Deleted records can be restored using a lifted query via SoftDeleteQuery, which provides restoreOne and restoreMany.

import { SoftDeleteQuery } from '@deepkit/orm';

await database.query(User).lift(SoftDeleteQuery).filter({ id: 1 }).restoreOne();
await database.query(User).lift(SoftDeleteQuery).filter({ id: 1 }).restoreMany();

The session also supports element recovery.

import { SoftDeleteSession } from '@deepkit/orm';

const session = database.createSession();
const user1 = await session.query(User).findOne();

session.from(SoftDeleteSession).restore(user1);
await session.commit();

Hard Delete

To hard delete records, use a lifted query via SoftDeleteQuery. This essentially restores the old behavior (without the plugin) for that single query.

import { SoftDeleteQuery } from '@deepkit/orm';

await database.query(User).lift(SoftDeleteQuery).hardDeleteOne();
await database.query(User).lift(SoftDeleteQuery).hardDeleteMany();

//those are equal
await database.query(User).lift(SoftDeleteQuery).withSoftDeleted().deleteOne();
await database.query(User).lift(SoftDeleteQuery).withSoftDeleted().deleteMany();

Query Deleted

With a "lifted" query via SoftDeleteQuery you can also include deleted records.

import { SoftDeleteQuery } from '@deepkit/orm';

// find all, soft deleted and not deleted
await database.query(User).lift(SoftDeleteQuery).withSoftDeleted().find();

// find only soft deleted
await database.query(User).lift(SoftDeleteQuery).isSoftDeleted().count();

Deleted By

deletedBy can be set via query and sessions.

import { SoftDeleteSession } from '@deepkit/orm';

const session = database.createSession();
const user1 = await session.query(User).findOne();

session.from(SoftDeleteSession).setDeletedBy('Peter');
session.remove(user1);

await session.commit();

The same works via a lifted query:

import { SoftDeleteQuery } from '@deepkit/orm';

await database.query(User).lift(SoftDeleteQuery)
    .deletedBy('Peter')
    .deleteMany();

11. Template

The template engine allows you to write type-safe, fast, and secure HTML templates. It is based on TSX and is ready to use as soon as you use the .tsx file extension and adjust the tsconfig.json accordingly.

The important thing is: it is not compatible with React. If you want to use React, @deepkit/template is not an option. Deepkit’s template engine is only meant for SSR (server-side rendering).

11.1. Installation

In your tsconfig you have to adjust the following settings: jsx and jsxImportSource.

{
  "compilerOptions": {
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true,
    "target": "ES2020",
    "moduleResolution": "node",

    "jsx": "react-jsx",
    "jsxImportSource": "@deepkit/template"
  }
}

Now you can use TSX directly in your controller.

#!/usr/bin/env ts-node-script
import { App } from '@deepkit/app';
import { FrameworkModule } from '@deepkit/framework';
import { http } from '@deepkit/http';

@http.controller('my-base-url/')
class MyPage {
    @http.GET('hello-world')
    helloWorld() {
        return <div style="color: red">Hello World</div>;
    }
}

new App({
    controllers: [MyPage],
    imports: [
        new FrameworkModule({
            debug: true,
        })
    ]
}).run();

If you return such a TSX in your route method, the HTTP content type is automatically set to text/html; charset=utf-8.

11.2. Components

You can structure your templates the way you are used to in React: modularize your layout into multiple function or class components.

11.2.1. Function Components

The easiest way is to use a function that returns TSX.

async function Website(props: {title: string, children?: any}) {
    return <html>
        <head>
            <title>{props.title}</title>
        </head>
        <body>
            {props.children}
        </body>
    </html>;
}

class MyPage {
    @http.GET('hello-world')
    helloWorld() {
        return <Website title="Hello world">
            <h1>Great page</h1>
        </Website>;
    }
}

$ curl http://localhost:8080/hello-world
<html><head><title>Hello world</title></head><body><h1>Great page</h1></body></html>

Function components can be asynchronous. This is an important difference from other template engines you may be familiar with, such as React, where components cannot be asynchronous.
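
The rendering model can be sketched with plain async functions that return strings (Title and Page are hypothetical stand-ins for illustration; the real engine renders TSX components, not template strings, and also sanitizes interpolated values):

```typescript
// Sketch: because components may be async, the renderer awaits each one
// before concatenating the final HTML string.
async function Title(props: { text: string }): Promise<string> {
    return `<title>${props.text}</title>`;
}

async function Page(props: { title: string; body: string }): Promise<string> {
    // an async component can await other async components (or a database)
    const head = await Title({ text: props.title });
    return `<html><head>${head}</head><body>${props.body}</body></html>`;
}
```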

All functions have access to the dependency injection container and can reference any dependencies starting with the third parameter.

class Database {
    users: any[] = [{ username: 'Peter' }];
}

function UserList(props: {}, children: any, database: Database) {
    return <div>{database.users.length}</div>;
}

class MyPage {
    @http.GET('list')
    list() {
        return <UserList/>
    }
}

new App({
    controllers: [MyPage],
    providers: [Database],
    imports: [new FrameworkModule()]
}).run();

11.2.2. Class Components

An alternative way to write a component is a class component. They are handled and instantiated in the Dependency Injection container and thus have access to all services registered in the container. This makes it possible to directly access a data source such as a database in your components, for example.

class UserList {
    constructor(
        protected props: {},
        protected children: any,
        protected database: SQLiteDatabase) {
    }

    async render() {
        const users = await this.database.query(User).find();

        return <div class="users">
            {users.map((user) => <UserDetail user={user}/>)}
        </div>;
    }
}

class MyPage {
    @http.GET('')
    listUsers() {
        return <UserList/>;
    }
}

For class components, the first two constructor arguments are reserved: props can be defined arbitrarily, children is always "any", and after them follow optional dependencies, which you can choose arbitrarily. Since class components are instantiated in the Dependency Injection container, you have access to all your services.

11.3. Dynamic HTML

The template engine automatically sanitizes all variables used, so you can safely use user input directly in the template. To render raw, dynamic HTML, you can use the html function.

import { html } from '@deepkit/template';
helloWorld() {
    const yes = "<b>yes!</b>";
    return <div style="color: red">Hello World. {html(yes)}</div>;
}

11.4. Optimization

The template engine tries to optimize the generated JSX code so that it is much easier for NodeJS/V8 to generate the HTML string. For this to work correctly, you should move all your components from the main app.tsx file to separate files. A structure might look like this:

.
├── app.ts
└── views
    ├── user-detail.tsx
    ├── user-list.tsx
    └── website.tsx

12. Framework

12.1. Installation

Deepkit Framework is based on runtime types in Deepkit Type. Make sure that @deepkit/type is installed correctly. See Runtime Type Installation.

npm install ts-node @deepkit/framework

Make sure that all peer dependencies are installed. By default, NPM 7+ installs them automatically.

To compile your application, we need the TypeScript compiler and recommend ts-node to easily run the app.

An alternative to using ts-node is to compile the source code with the TypeScript compiler and execute the JavaScript source code directly. This has the advantage of dramatically increasing execution speed for short commands. However, it also creates additional workflow overhead by either manually running the compiler or setting up a watcher. For this reason, ts-node is used in all examples in this documentation.

12.2. First Application

Since the Deepkit framework does not use configuration files or a special folder structure, you can structure your project however you want. The only two files you need to get started are the TypeScript app.ts file and the TypeScript configuration tsconfig.json.

Our goal is to have the following files in our project folder:

.
├── app.ts
├── node_modules
├── package-lock.json
└── tsconfig.json

file: tsconfig.json

{
  "compilerOptions": {
    "outDir": "./dist",
    "experimentalDecorators": true,
    "strict": true,
    "esModuleInterop": true,
    "target": "ES2020",
    "module": "CommonJS",
    "moduleResolution": "node"
  },
  "reflection": true,
  "files": [
    "app.ts"
  ]
}

File: app.ts

#!/usr/bin/env ts-node-script
import { App } from '@deepkit/app';
import { Logger } from '@deepkit/logger';
import { cli, Command } from '@deepkit/app';
import { FrameworkModule } from '@deepkit/framework';

@cli.controller('test')
export class TestCommand implements Command {
    constructor(protected logger: Logger) {
    }

    async execute() {
        this.logger.log('Hello World!');
    }
}

new App({
    controllers: [TestCommand],
    imports: [new FrameworkModule]
}).run();

In this code, you can see that we have defined a test command using the TestCommand class and created a new application that we run directly using run(). By running this script, we start the app.

With the shebang in the first line (#!...) we can make our script executable with the following command.

chmod +x app.ts

And then execute:

$ ./app.ts
VERSION
  Node

USAGE
  $ ts-node-script app.ts [COMMAND]

TOPICS
  debug
  migration  Executes pending migration files. Use migration:pending to see which are pending.
  server     Starts the HTTP server

COMMANDS
  test

Now, to execute our test command, we run the following command.

$ ./app.ts test
Hello World!

In Deepkit Framework, everything is now done via this app.ts file. You can rename the file as you like or create more. Custom CLI commands, HTTP/RPC servers, migration commands, etc. are all started from this entry point.

To start the HTTP/RPC server, do the following:

./app.ts server:start

To serve requests please read chapter HTTP or RPC. In chapter CLI you can learn more about CLI commands.

12.3. App

The application is started via the App object.

The run() method evaluates the command-line arguments and executes the corresponding CLI controller. Since the FrameworkModule provides its own CLI controllers, which are responsible for starting the HTTP server, for example, these can be invoked this way as well.

The App object can also be used to access the Dependency Injection container without running a CLI controller.

const app = new App({
    controllers: [TestCommand],
    imports: [new FrameworkModule]
});

//get access to all registered services
const eventDispatcher = app.get(EventDispatcher);

//then run the app, or do something else
app.run();

12.4. Modules

Deepkit framework is highly modular and allows you to split your application into several handy modules. Each module has its own dependency injection sub-container, configuration, commands and much more. In the chapter "First application" you have already created one module - the root module. new App takes almost the same arguments as a module, because it creates the root module for you automatically in the background.

You can skip this chapter if you do not plan to split your application into submodules, or if you do not plan to make a module available as a package to others.

A module is a simple class:

import { createModule } from '@deepkit/app';

export class MyModule extends createModule({}) {
}

It basically has no functionality at this point because its module definition is an empty object and it has no methods, but this demonstrates the relationship between modules and your application (your root module). This MyModule module can then be imported into your application or into other modules.

import { MyModule } from './module';

new App({
    imports: [
        new MyModule(),
    ]
}).run();

You can now add features to this module as you would with App. The arguments are the same, except that imports are not available in a module definition. Add HTTP/RPC/CLI controllers, services, a configuration, event listeners, and various module hooks to make modules more dynamic.

12.4.1. Controllers

Modules can define controllers that are processed by other modules. For example, if you add a controller with decorators from the @deepkit/http package, the HttpModule will pick it up and register the found routes in its router. A single controller may contain several such decorators. It is up to the module author who provides these decorators how the controllers are processed.

In Deepkit there are three packages that handle such controllers: HTTP, RPC, and CLI. See their respective chapters to learn more. Below is an example of an HTTP controller:

import { createModule } from '@deepkit/app';
import { http } from '@deepkit/http';

class MyHttpController {
    @http.GET('/hello')
    hello() {
        return 'Hello world!';
    }
}

export class MyModule extends createModule({
    controllers: [MyHttpController]
}) {}

//same is possible for App
new App({
    controllers: [MyHttpController]
}).run();

12.4.2. Provider

When you define a provider in the providers section of your application, it is accessible throughout your application. For modules, however, these providers are automatically encapsulated in that module’s dependency injection subcontainer. You must manually export each provider to make it available to another module or your application.

To learn more about how providers work, please refer to the Dependency Injection chapter.

import { App, createModule } from '@deepkit/app';
import { http } from '@deepkit/http';

export class HelloWorldService {
    helloWorld() {
        return 'Hello there!';
    }
}

class MyHttpController {
    constructor(private helloService: HelloWorldService) {}

    @http.GET('/hello')
    hello() {
        return this.helloService.helloWorld();
    }
}

export class MyModule extends createModule({
    controllers: [MyHttpController],
    providers: [HelloWorldService],
}) {}

// the same is possible for App
new App({
    controllers: [MyHttpController],
    providers: [HelloWorldService],
}).run();

When a user imports this module, they have no access to HelloWorldService because it is encapsulated in the sub dependency injection container of MyModule.

12.4.3. Exports

To make providers available to the importing module, include the provider's token in exports. This essentially moves the provider up one level, into the dependency injection container of the parent module (the importer).

import { createModule } from '@deepkit/app';

export class MyModule extends createModule({
    controllers: [MyHttpController],
    providers: [HelloWorldService],
    exports: [HelloWorldService],
}) {}

If you use other provider types such as FactoryProvider or UseClassProvider, you should still list only the class type in exports.

import { createModule } from '@deepkit/app';

export class MyModule extends createModule({
    controllers: [MyHttpController],
    providers: [
        {provide: HelloWorldService, useValue: new HelloWorldService()}
    ],
    exports: [HelloWorldService],
}) {}

We can now import that module and use its exported service in our application code.

#!/usr/bin/env ts-node-script
import { App, cli, Command } from '@deepkit/app';
import { HelloWorldService, MyModule } from './my-module';

@cli.controller('test')
export class TestCommand implements Command {
    constructor(protected helloWorld: HelloWorldService) {
    }

    async execute() {
        this.helloWorld.helloWorld();
    }
}

new App({
    controllers: [TestCommand],
    imports: [
        new MyModule(),
    ]
}).run();

Read the Dependency Injection chapter to learn more.

12.5. Configuration

In Deepkit framework, modules and your application can have configuration options. For example, a configuration can consist of database URLs, passwords, IPs, and so on. Services, HTTP/RPC/CLI controllers, and template functions can read these configuration options via dependency injection.

A configuration is defined as a class with properties. This is a type-safe way to define configuration for your entire application, and its values are automatically serialized and validated.

12.5.1. Example

import { MinLength } from '@deepkit/type';
import { App } from '@deepkit/app';
import { FrameworkModule } from '@deepkit/framework';
import { http } from '@deepkit/http';

class Config {
    pageTitle: string & MinLength<2> = 'Cool site';
    domain: string = 'example.com';
    debug: boolean = false;
}

class MyWebsite {
    constructor(protected allSettings: Config) {
    }

    @http.GET()
    helloWorld() {
        return 'Hello from ' + this.allSettings.pageTitle + ' via ' + this.allSettings.domain;
    }
}

new App({
    config: Config,
    controllers: [MyWebsite],
    imports: [new FrameworkModule]
}).run();

$ curl http://localhost:8080/
Hello from Cool site via example.com

12.5.2. Configuration Class

import { MinLength } from '@deepkit/type';

export class Config {
    title!: string & MinLength<2>; // required; a value must be provided
    host?: string;

    debug: boolean = false; //default values are supported as well
}

import { createModule } from '@deepkit/app';
import { Config } from './module.config.ts';

export class MyModule extends createModule({
   config: Config
}) {}

The values for the configuration options can be provided either in the constructor of the module, with the .configure() method, or via configuration loaders (e.g. environment variable loaders).

import { MyModule } from './module.ts';

new App({
   imports: [new MyModule({title: 'Hello World'})],
}).run();
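The configuration-loader path mentioned above deserves a short sketch. Recent Deepkit versions provide a loadConfigFromEnv() method on App for mapping environment variables to configuration options; the exact option names used here (prefix, envFilePath) are assumptions, so verify them against the Configuration chapter of your Deepkit version.

```typescript
import { App } from '@deepkit/app';
import { MyModule } from './module.ts';

// Sketch: reads environment variables with the (assumed default) APP_ prefix,
// e.g. APP_TITLE=Hello populates the title option. An .env file can
// typically be supplied as well via an envFilePath option.
new App({
    imports: [new MyModule()],
})
    .loadConfigFromEnv({prefix: 'APP_'})
    .run();
```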

To dynamically change the configuration options of an imported module, you can use the process hook. This is a good place to either redirect configuration options or set up an imported module depending on the current module configuration or other module instance information.

import { MyModule } from './module.ts';

export class MainModule extends createModule({
}) {
    process() {
        this.getImportedModuleByClass(MyModule).configure({title: 'Changed'});
    }
}

At the application level, it works a little differently:

new App({
    imports: [new MyModule({title: 'Hello World'})],
})
    .setup((module, config) => {
        module.getImportedModuleByClass(MyModule).configure({title: 'Changed'});
    })
    .run();

When the root application module is created from a regular module, it works similarly to regular modules.

class AppModule extends createModule({
}) {
    process() {
        this.getImportedModuleByClass(MyModule).configure({title: 'Changed'});
    }
}

App.fromModule(new AppModule()).run();

12.5.3. Reading Configuration Options

To use a configuration option in a service, you can use normal dependency injection. It is possible to inject either the entire configuration object, a single value, or a portion of the configuration.

Partial

To inject only a subset of the configuration values, use the Pick type.

import { Config } from './module.config';

export class MyService {
     constructor(private config: Pick<Config, 'title' | 'host'>) {
     }

     getTitle() {
         return this.config.title;
     }
}


// In unit tests, it can be instantiated via
new MyService({title: 'Hello', host: '0.0.0.0'});

//or you can use type aliases
type MyServiceConfig = Pick<Config, 'title' | 'host'>;
export class MyService {
     constructor(private config: MyServiceConfig) {
     }
}

Single Value

To inject only a single value, use the indexed access operator.

import { Config } from './module.config';

export class MyService {
     constructor(private title: Config['title']) {
     }

     getTitle() {
         return this.title;
     }
}

All

To inject all config values, use the class as a dependency.

import { Config } from './module.config';

export class MyService {
     constructor(private config: Config) {
     }

     getTitle() {
         return this.config.title;
     }
}
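As with the Partial variant above, a service that takes the whole configuration class can be constructed directly in unit tests. This standalone sketch restates Config (with illustrative defaults) and MyService so the behavior can be checked without the framework or a DI container:

```typescript
// Standalone sketch of the classes above, usable without a DI container.
class Config {
    title: string = 'Hello World';
    host?: string;
    debug: boolean = false;
}

class MyService {
    constructor(private config: Config) {}

    getTitle() {
        return this.config.title;
    }
}

// In a unit test, simply pass a plain Config instance:
const config = new Config();
config.title = 'From test';
const service = new MyService(config);
console.log(service.getTitle()); // prints "From test"
```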

12.5.4. Debugger

The configuration values of your application and all modules can be displayed in the debugger. Activate the debug option in the FrameworkModule and open http://localhost:8080/_debug/configuration.

import { App } from '@deepkit/app';
import { FrameworkModule } from '@deepkit/framework';

new App({
    config: Config,
    controllers: [MyWebsite],
    imports: [
        new FrameworkModule({
            debug: true,
        })
    ]
}).run();