- For a saved execution, we write binary data and its metadata to disk.
These two are only ever deleted via `POST /executions/delete`. Since no
marker file is written, pruning never touches them.
- For an unsaved execution, we write binary data, binary data metadata,
and a marker file at `/meta` to disk. We later delete all three during
pruning.
- The third flow is legacy. Currently, if the execution is unsaved, we
still store it in the DB while running the workflow, and immediately
after the workflow finishes we delete that execution during the
`onWorkflowPostExecute()` hook, so the second flow applies. But
formerly, we did not store unsaved executions in the DB ("ephemeral
executions"), so we needed to write a marker file at `/persistMeta`. If
the ephemeral execution crashed after the step where binary data was
stored, this marker gave us a way to later delete its associated
dangling binary data via a second pruning cycle. If the ephemeral
execution succeeded, we immediately cleaned up the marker file at
`/persistMeta` during the `onWorkflowPostExecute()` hook.
The creation and cleanup at `/persistMeta` still happen, but this third
flow no longer has a purpose: we now store unsaved executions in the DB
and delete them immediately afterwards. Hence the third flow can be
removed.
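
A minimal sketch of how the marker-file check in pruning could look,
given the flows above; the directory layout and the helper name are
assumptions for illustration, not the actual implementation:

```
import { readdir, rm } from 'node:fs/promises';
import path from 'node:path';

// Illustrative pruning pass: only executions that left a marker under /meta
// are prunable; saved executions write no marker and are skipped entirely.
async function pruneUnsavedExecutions(binaryDataPath: string): Promise<void> {
	const metaDir = path.join(binaryDataPath, 'meta');
	const markers = await readdir(metaDir).catch(() => [] as string[]);
	for (const executionId of markers) {
		// Assumption for this sketch: the marker file name is the execution id.
		await rm(path.join(binaryDataPath, executionId), { recursive: true, force: true }); // binary data + metadata
		await rm(path.join(metaDir, executionId), { force: true }); // the marker itself
	}
}
```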
For the upcoming workflow history feature, we're creating the necessary
database tables.
Also changes the Postgres schema so that the `versionId` column is now
properly a UUID. The `USING` clause prevents losing data by converting
the existing string values to UUIDs in place.
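
As a hedged sketch of what that migration looks like (assuming the
table is `workflow_entity`; the class name and exact statements in the
PR may differ):

```
import type { MigrationInterface, QueryRunner } from 'typeorm';

// Sketch only: change versionId from text to uuid without dropping data.
export class MakeVersionIdUuid implements MigrationInterface {
	async up(queryRunner: QueryRunner): Promise<void> {
		// USING converts each existing string value to a uuid instead of discarding it
		await queryRunner.query(
			'ALTER TABLE "workflow_entity" ALTER COLUMN "versionId" TYPE uuid USING "versionId"::uuid',
		);
	}

	async down(queryRunner: QueryRunner): Promise<void> {
		await queryRunner.query(
			'ALTER TABLE "workflow_entity" ALTER COLUMN "versionId" TYPE varchar(36) USING "versionId"::varchar',
		);
	}
}
```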
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <netroy@users.noreply.github.com>
Whenever we have migrations that use `.addColumn` or `.dropColumn`,
TypeORM recreates tables for SQLite. So we need to disable foreign key
enforcement for SQLite, or else data in some tables can get deleted
because of `ON DELETE CASCADE`.
[This has happened in the
past](https://github.com/n8n-io/n8n/pull/6739), and we should really
come up with a way to prevent this from happening again.
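
An illustrative guard (the helper name is an assumption, not the PR's
actual code) that wraps SQLite schema migrations so table recreation
cannot cascade-delete rows:

```
import type { QueryRunner } from 'typeorm';

// Disable FK enforcement around a migration step that recreates tables.
async function withForeignKeysDisabled(
	queryRunner: QueryRunner,
	migrate: () => Promise<void>,
): Promise<void> {
	await queryRunner.query('PRAGMA foreign_keys=OFF');
	try {
		// e.g. an .addColumn/.dropColumn migration that makes TypeORM
		// drop and recreate the table under the hood
		await migrate();
	} finally {
		await queryRunner.query('PRAGMA foreign_keys=ON');
	}
}
```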
---------
Signed-off-by: Oleg Ivaniv <me@olegivaniv.com>
Co-authored-by: Oleg Ivaniv <me@olegivaniv.com>
This PR aims to address an issue where an Error workflow cannot be
started, either due to insufficient permissions or because its settings
prevent it from being called.
We address this by creating a failed execution for the appointed error
workflow that states the error, as can be seen below.
This means the execution itself won't start, as it is prevented before
the execution begins, but we save a "stub" execution to show the error.
![Screenshot 2023-08-17 at 16 17 02](https://github.com/n8n-io/n8n/assets/219272/d8ec0144-13c5-4b11-b91c-a6b440816ccf)
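
A hypothetical sketch of saving such a "stub" execution; the repository
shape and payload fields here are assumptions, not the PR's actual code:

```
// Persist a finished-with-error execution for the error workflow
// without ever running it.
interface ExecutionRepository {
	save(execution: object): Promise<void>;
}

async function saveErrorWorkflowStub(
	repo: ExecutionRepository,
	errorWorkflowId: string,
	errorMessage: string,
): Promise<void> {
	await repo.save({
		workflowId: errorWorkflowId,
		finished: false,
		status: 'error',
		startedAt: new Date(),
		stoppedAt: new Date(),
		// Only the error is recorded; no nodes are executed.
		data: { resultData: { error: { message: errorMessage } } },
	});
}
```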
In scope:
- Consolidate `WorkflowService.getMany()`.
- Support non-entity field `ownedBy` for `select`.
- Support `tags` for `filter`.
- Move `addOwnerId` to `OwnershipService`.
- Remove unneeded check for `filter.id`.
- Simplify DTO validation for `filter` and `select`.
- Expand tests for `GET /workflows`.
Workflow list query DTOs:
```
filter → name, active, tags
select → id, name, active, tags, createdAt, updatedAt, versionId, ownedBy
```
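
For illustration, a list request using these DTOs could look like this;
the JSON-in-query-string encoding shown here is an assumption about the
endpoint's wire format:

```
GET /workflows?filter={"active":true,"tags":["production"]}&select=["id","name","ownedBy"]
```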
Out of scope:
- Migrate `shared_workflow.roleId` and `shared_credential.roleId` to
string IDs.
- Refactor `WorkflowHelpers.getSharedWorkflowIds()`.
Issue: during startup, unfinished executions trigger a recovery process
that, under certain circumstances, can itself crash the instance (e.g.
by running out of memory), resulting in an infinite recovery loop.
This PR aims to change this behaviour by writing a flag file when the
recovery process starts and removing it when it finishes. In the case
of a crash, this flag will persist, and on the next attempt the
recovery will instead do the absolute minimum (marking executions as
'crashed') without attempting any 'crashable' actions.
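
An illustrative sketch of the flag-file guard; the file path and the
two recovery helpers are assumptions, not the PR's actual identifiers:

```
import { existsSync } from 'node:fs';
import { rm, writeFile } from 'node:fs/promises';

const RECOVERY_FLAG = '/home/node/.n8n/crash-recovery.flag';

async function markUnfinishedExecutionsAsCrashed(): Promise<void> {
	// minimal, non-crashable work: e.g. set status = 'crashed' in the DB
}

async function runFullRecovery(): Promise<void> {
	// heavy recovery work that may itself crash, e.g. out of memory
}

async function recoverOnStartup(): Promise<void> {
	if (existsSync(RECOVERY_FLAG)) {
		// A previous recovery attempt crashed mid-way: skip 'crashable' actions.
		await markUnfinishedExecutionsAsCrashed();
	} else {
		await writeFile(RECOVERY_FLAG, new Date().toISOString());
		await runFullRecovery();
	}
	await rm(RECOVERY_FLAG, { force: true });
}
```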
Changes in https://github.com/n8n-io/n8n/pull/6394 removed XML body
parsing for all non-webhook routes. This broke SAML endpoints, as they
need the XML body parser to function correctly.
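
A hedged sketch of the fix's intent: scope an XML body parser to the
SAML routes instead of relying on a global one. The route path and
middleware wiring are illustrative, not the exact n8n code:

```
import express from 'express';

const app = express();
const xmlBody = express.text({ type: ['application/xml', 'text/xml'] });

app.post('/rest/sso/saml/metadata', xmlBody, (req, res) => {
	// req.body is the raw XML string the SAML endpoint needs to parse
	res.sendStatus(200);
});
```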