## Summary
Add a `ShutdownService` and an `OnShutdown` decorator for a more unified way to
shut down different components (see the sketch after the list below). Use this
new mechanism in the following components:
- HTTP(S) server
- Pruning service
- Push connection
- License
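A rough sketch of how a component might hook into the new mechanism; the decorator signature and import path here are illustrative, not the final API:

```typescript
import { Service } from 'typedi';
// Illustrative import path - the real decorator lives in the cli package
import { OnShutdown } from '@/shutdown/OnShutdown';

@Service()
export class PruningService {
	private timer?: NodeJS.Timeout;

	startPruning() {
		this.timer = setInterval(() => this.prune(), 60_000);
	}

	// Registered with ShutdownService; called when the process is asked to stop
	@OnShutdown()
	async shutdown(): Promise<void> {
		if (this.timer) clearInterval(this.timer);
	}

	private prune() {
		// ...
	}
}
```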
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
## Summary
A circular dependency between `WorkflowService` and
`ActiveWorkflowRunner` is sometimes causing `this.activeWorkflowRunner`
to be `undefined` in `WorkflowService`.
Breaking this circular dependency should hopefully fix this issue.
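One common way to break such a cycle in a typedi-based setup is to resolve the dependency lazily instead of in the constructor - a sketch only, not necessarily the exact approach taken here:

```typescript
import { Container, Service } from 'typedi';
// Illustrative import path
import { ActiveWorkflowRunner } from '@/ActiveWorkflowRunner';

@Service()
export class WorkflowService {
	// Instead of constructor-injecting ActiveWorkflowRunner (which can resolve
	// to `undefined` while the two classes are still constructing each other),
	// resolve it lazily at call time, once the container is fully wired.
	private get activeWorkflowRunner() {
		return Container.get(ActiveWorkflowRunner);
	}
}
```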
## Related tickets and issues
#8122
## Review / Merge checklist
- [x] PR title and summary are descriptive
- [ ] Tests included
In case someone manually prunes their executions table before upgrading
to 1.x, `MigrateIntegerKeysToString` should gracefully handle that,
instead of crashing the application.
## Review / Merge checklist
- [x] PR title and summary are descriptive
If the filesystem fails to rename the binary files as part of the execution's
cleanup process, the execution would fail to be saved and would never finish.
This catch prevents that.
## Summary
Whenever an execution is wrapping up to save the data, if it uses binary data
n8n will try to find possibly misallocated files and place them in the right
folder. If this process fails, the execution fails to finish.
Given the execution has already finished at this point, and we cannot handle
the binary data errors more gracefully, all we can do is log the message as
it's a filesystem issue. The rest of the execution saving process should
remain as normal.
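A minimal sketch of that behaviour (the function names are illustrative, not the actual n8n internals):

```typescript
// Try to restore misallocated binary data, but never let a filesystem failure
// block saving an execution that has already finished.
async function finishExecutionSave(
	executionId: string,
	restoreBinaryData: () => Promise<void>,
	saveExecution: () => Promise<void>,
) {
	try {
		// Move any misallocated binary data files into the right folder.
		await restoreBinaryData();
	} catch (error) {
		// Filesystem issue we cannot recover from here: log it and move on.
		console.error(`Failed to restore binary data for execution ${executionId}`, error);
	}
	// The rest of the saving process continues as normal.
	await saveExecution();
}
```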
## Related tickets and issues
https://linear.app/n8n/issue/HELP-430
## Review / Merge checklist
- [ ] PR title and summary are descriptive. **Remember, the title
automatically goes into the changelog. Use `(no-changelog)` otherwise.**
([conventions](https://github.com/n8n-io/n8n/blob/master/.github/pull_request_title_conventions.md))
- [ ] [Docs updated](https://github.com/n8n-io/n8n-docs) or follow-up
ticket created.
- [ ] Tests included.
> A bug is not considered fixed, unless a test is added to prevent it
from happening again.
> A feature is not complete without tests.
---------
Co-authored-by: Iván Ovejero <ivov.src@gmail.com>
This reverts commit a895ee87fc (#8090)
Our telemetry backend is throwing 500s with the updated rudderstack sdk.
Until that is resolved, we need to downgrade.
## Review / Merge checklist
- [x] PR title and summary are descriptive
Remove duplication, improve readability, and expand tests for
`TestWebhooks.ts` - in anticipation of storing test webhooks in Redis.
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
This helps remove some of the older versions of transient dependencies,
like axios 0.x and ioredis 4.x.
## Review / Merge checklist
- [x] PR title and summary are descriptive.
## Summary
We accidentally made some functions `async` in
https://github.com/n8n-io/n8n/pull/7846
This PR reverts that change.
## Review / Merge checklist
- [x] PR title and summary are descriptive.
`tsc-alias` doesn't seem to replace imports when using template strings
## Related tickets and issues
#8085
## Review / Merge checklist
- [x] PR title and summary are descriptive.
## Summary
This PR updates our backend sentry setup to remove integrations that
don't provide us any value. This also reduces the amount of PII that
gets sent to Sentry.
[Sample event](https://n8nio.sentry.io/issues/4725315362/)
## Related tickets
[ENG-95](https://linear.app/n8n/issue/ENG-95)
## Review / Merge checklist
- [x] PR title and summary are descriptive.
Add a generic N8N_GRACEFUL_SHUTDOWN_TIMEOUT which controls how long the n8n
process will wait for a graceful exit before exiting forcefully. This
variable replaces the QUEUE_WORKER_TIMEOUT variable that was used for the
worker process.
DEPRECATED: QUEUE_WORKER_TIMEOUT is deprecated
The QUEUE_WORKER_TIMEOUT environment variable has been replaced with
N8N_GRACEFUL_SHUTDOWN_TIMEOUT.
If the process doesn't shut down within the time limit, exit with an error
code, because:
1. conceptually, something timing out is an error.
2. on a successful exit we close down the DB connection gracefully. On an
exit timeout we would rather not do that, since it would wait for any active
connections to close and could block the exit.
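A minimal sketch of the timeout behaviour, assuming a default of 30 seconds:

```typescript
// N8N_GRACEFUL_SHUTDOWN_TIMEOUT is in seconds; 30 here is an assumed default.
const shutdownTimeout = Number(process.env.N8N_GRACEFUL_SHUTDOWN_TIMEOUT ?? 30);

async function onTerminationSignal(stopAll: () => Promise<void>) {
	const forceExit = setTimeout(() => {
		console.error(`Process did not stop within ${shutdownTimeout}s, exiting forcefully`);
		// Skip graceful DB teardown on purpose: waiting for active connections
		// to close could block the exit indefinitely.
		process.exit(1);
	}, shutdownTimeout * 1000);

	await stopAll(); // graceful path: components shut down, DB connection closed
	clearTimeout(forceExit);
	process.exit(0);
}

process.on('SIGTERM', () => void onTerminationSignal(async () => {/* stop components */}));
```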
## Summary
HashiCorp Vault prefers the `LIST` HTTP method for fetching secrets, but not
all environments allow custom HTTP methods through WAFs. This PR adds
`N8N_EXTERNAL_SECRETS_PREFER_GET`, which, when set to `true`, uses GET instead
of LIST to fetch secrets.
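A sketch of the idea, assuming a KV v2 mount named `secret`; Vault accepts `?list=true` on a GET as an alternative to the custom LIST method:

```typescript
// Choose the HTTP method used to enumerate Vault secrets based on the flag.
const preferGet = process.env.N8N_EXTERNAL_SECRETS_PREFER_GET === 'true';

async function listSecretKeys(vaultUrl: string, token: string, path: string) {
	const url = preferGet
		? `${vaultUrl}/v1/secret/metadata/${path}?list=true`
		: `${vaultUrl}/v1/secret/metadata/${path}`;

	const response = await fetch(url, {
		method: preferGet ? 'GET' : 'LIST',
		headers: { 'X-Vault-Token': token },
	});
	return (await response.json()) as { data: { keys: string[] } };
}
```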
## Review / Merge checklist
- [x] PR title and summary are descriptive. **Remember, the title
automatically goes into the changelog. Use `(no-changelog)` otherwise.**
([conventions](https://github.com/n8n-io/n8n/blob/master/.github/pull_request_title_conventions.md))
## Summary
Handle circular references in the public API for executions created
prior to the fix from #8030
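A generic way to serialize data that may contain circular references, shown only to illustrate the problem; the actual fix may reuse the existing `flatted` handling:

```typescript
// Replace circular references with a placeholder so JSON.stringify doesn't throw.
function toSerializable(value: unknown): string {
	const seen = new WeakSet<object>();
	return JSON.stringify(value, (_key, val) => {
		if (typeof val === 'object' && val !== null) {
			if (seen.has(val)) return '[Circular]';
			seen.add(val);
		}
		return val;
	});
}
```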
## Related tickets
[PAY-1119](https://linear.app/n8n/issue/PAY-1119)
## Review / Merge checklist
- [x] PR title and summary are descriptive.
Refactor static workflow service classes into DI-compatible classes
Context: https://n8nio.slack.com/archives/C069HS026UF/p1702466571648889
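A sketch of the before/after shape (illustrative, not a diff from this PR):

```typescript
import { Service } from 'typedi';

// Before: `export class WorkflowService { static async update(...) {...} }`,
// called as `WorkflowService.update(...)`.
// After: an injectable instance, so dependencies can be constructor-injected later.
@Service()
export class WorkflowService {
	async update(workflowId: string, updateData: object): Promise<void> {
		// ...existing logic, minus the `static` keyword
	}
}

// Callers resolve the instance from the DI container instead of using statics:
// await Container.get(WorkflowService).update(id, data);
```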
Up next:
- Inject dependencies into workflow services
- Consolidate workflow controllers into one
- Make workflow controller injectable
- Inject dependencies into workflow controller
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
We added ID-less workflow reporting at #8031, which has already produced
multiple reports from internal - enough info to tackle [this
story](https://linear.app/n8n/issue/PAY-1147). To prevent an overwhelming
number of reports from cloud, this PR removes the reporting for now.
When setting up queue mode, it is easy to overlook that not exporting
Postgres env vars will default the worker to use sqlite, which will fail
during execution with a non-obvious error. Hence add warnings when
starting a worker with an incompatible DB type.
We're initializing the queue twice because of a [bad
merge](2c63474538).
There are no known associated bugs, but there is no need to init the queue
twice. We should follow up by investigating whether any pending bugs can be
attributed to this.
Github issue / Community forum post (link here to close automatically):
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
Co-authored-by: Giulio Andreini <andreini@netseven.it>
When performing actions such as renaming a workflow or updating its settings,
n8n errors with "Failed to save workflow version" in the console even though
the saving process was successful. We are now correctly checking whether
`nodes` and `connections` exist and only then saving a snapshot.
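A sketch of the guard, with illustrative names:

```typescript
// Only save a workflow history snapshot when the update payload actually
// contains the full workflow. Partial updates such as a rename or a settings
// change don't include nodes/connections.
interface WorkflowUpdate {
	nodes?: unknown[];
	connections?: Record<string, unknown>;
}

export function shouldSaveWorkflowVersion(update: WorkflowUpdate): boolean {
	return update.nodes !== undefined && update.connections !== undefined;
}
```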
Github issue / Community forum post (link here to close automatically):
## Summary
Fixes an issue preventing n8n from pulling secrets from HashiCorp Vault KV
stores if the secret path contained a `-` or a `/`. An example provided was
`integrations/n8n-workflows`, which I have tested in my local instance of
Vault.
This still needs testing with Infisical to make sure nothing breaks there.
## Summary
Adds `N8N_EXTERNAL_SECRETS_UPDATE_INTERVAL` to allow enterprise users to
tweak the update interval for importing new secrets.
If using a config file the value is:
```
"externalSecrets": {
"updateInterval": 300
}
```
#### How to test the change:
1. Run as normal and check that the secret is updated every 5 minutes
2. Set `N8N_EXTERNAL_SECRETS_UPDATE_INTERVAL` to 10
3. Check the secret is reloaded after 10 seconds
## Review / Merge checklist
- [x] PR title and summary are descriptive. **Remember, the title
automatically goes into the changelog. Use `(no-changelog)` otherwise.**
([conventions](https://github.com/n8n-io/n8n/blob/master/.github/pull_request_title_conventions.md))
- [x] [Docs updated](https://github.com/n8n-io/n8n-docs) or follow-up
ticket created.
Saving execution data is one of the slowest DB operations in the
application, and is likely behind some of the sqlite transaction
concurrency issues we've been seeing.
This not only removes the 2 separate transactions for saving
`ExecutionEntity` and `ExecutionData`, but also removes fields from
`ExecutionData.workflowData` that don't need to be saved (like `tags`,
`shared`, `statistics`, `triggerCount`, etc).
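A sketch of the field stripping, with the field names taken from the description above:

```typescript
// Strip fields from workflowData that executions don't need before writing
// the ExecutionData row.
function pruneWorkflowDataForExecution(workflowData: Record<string, unknown>) {
	const { tags, shared, statistics, triggerCount, ...relevantFields } = workflowData;
	return relevantFields;
}
```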
This PR introduces the possibility of inviting new users with an `admin`
role and changing the role of already invited users.
It also uses scoped permission checks where applicable instead of user role
checks.
---------
Co-authored-by: Val <68596159+valya@users.noreply.github.com>
Co-authored-by: Alex Grozav <alex@grozav.com>
Co-authored-by: Iván Ovejero <ivov.src@gmail.com>
## Summary
Extend the existing user types in the E2E database. Currently we have only
owner and member, but we also need admin.
---------
Co-authored-by: Val <68596159+valya@users.noreply.github.com>
I have observed that the next hard deletion timeout is not scheduled if
the `hardDeleteOnPruningCycle` function throws when fetching the data
from the database. That is because the thrown error is not caught and
the `scheduleHardDeletion` method is not called.
This PR moves the call to `scheduleHardDeletion` for better cohesion, and
ensures that it is called even if `hardDeleteOnPruningCycle` throws.
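A sketch of the intended behaviour (the interval and names are illustrative):

```typescript
// Always schedule the next hard-deletion run, even when the current cycle
// throws while querying the database.
const HARD_DELETE_INTERVAL_MS = 15 * 60 * 1000; // assumed interval

async function runHardDeletionCycle(hardDelete: () => Promise<void>) {
	try {
		await hardDelete();
	} catch (error) {
		console.error('Hard deletion failed, will retry on the next cycle', error);
	} finally {
		// Reschedule unconditionally so a single failure doesn't stop pruning.
		setTimeout(() => void runHardDeletionCycle(hardDelete), HARD_DELETE_INTERVAL_MS);
	}
}
```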
## Summary
Ensure `ownedBy` and `sharedWith` are present and uniform for
credentials and workflows.
Details in story: https://linear.app/n8n/issue/PAY-987
Ensure all errors in `cli` are `ApplicationError` or children of it and
contain no variables in the message, to continue normalizing all the
errors we report to Sentry
Follow-up to: https://github.com/n8n-io/n8n/pull/7839
extracted out of #7336
---------
Co-authored-by: Jan Oberhauser <jan.oberhauser@gmail.com>
Co-authored-by: Oleg Ivaniv <me@olegivaniv.com>
Co-authored-by: Alex Grozav <alex@grozav.com>
Ensure all errors in `cli` inherit from `ApplicationError` to continue
normalizing all the errors we report to Sentry
Follow-up to: https://github.com/n8n-io/n8n/pull/7820
This PR continues the effort of moving logic inside execution lifecycle
hooks into standalone testable functions, as a stepping stone to
refactoring the hooks themselves.
Keep reporting [path-related
errors](https://n8nio.sentry.io/issues/4649493725) in Sentry but
consolidate them in a single error group.
Also, add `options.extra` as `meta` so they remain visible in debug
logs:
```
2023-11-24T11:50:54.852Z | error | ReportableError: Something went wrong "{ test: 123, file: 'LoggerProxy.js', function: 'exports.error' }"
```
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
https://linear.app/n8n/issue/PAY-985
```
PATCH /users/:id/role
unauthenticated user
✓ should receive 401 (349 ms)
member
✓ should fail to demote owner to member (349 ms)
✓ should fail to demote owner to admin (359 ms)
✓ should fail to demote admin to member (381 ms)
✓ should fail to promote other member to owner (353 ms)
✓ should fail to promote other member to admin (377 ms)
✓ should fail to promote self to admin (354 ms)
✓ should fail to promote self to owner (371 ms)
admin
✓ should receive 400 on invalid payload (351 ms)
✓ should receive 404 on unknown target user (351 ms)
✓ should fail to demote owner to admin (349 ms)
✓ should fail to demote owner to member (347 ms)
✓ should fail to promote member to owner (384 ms)
✓ should fail to promote admin to owner (350 ms)
✓ should be able to demote admin to member (354 ms)
✓ should be able to demote self to member (350 ms)
✓ should be able to promote member to admin (349 ms)
owner
✓ should be able to promote member to admin (349 ms)
✓ should be able to demote admin to member (349 ms)
✓ should fail to demote self to admin (348 ms)
✓ should fail to demote self to member (354 ms)
```
This PR introduces the following changes:
- New Vue stores: `collaborationStore` and `pushConnectionStore`
- Front-end push connection handling overhaul: Keep only a single
connection open and handle it from the new store
- Add user avatars in the editor header when there are multiple users
working on the same workflow
- Send a heartbeat event to the back-end service periodically to confirm
the user is still active (see the sketch after this list)
- Back-end overhauls (authored by @tomi):
  - Implementing a cleanup procedure that removes inactive users
  - Refactoring the collaboration service's current implementation
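A sketch of the heartbeat, with an assumed interval and message shape:

```typescript
// The editor periodically tells the back end that this user still has the
// workflow open. Interval and message shape are illustrative, not the exact protocol.
const HEARTBEAT_INTERVAL_MS = 5 * 60 * 1000;

export function startCollaborationHeartbeat(send: (message: object) => void, workflowId: string) {
	const timer = setInterval(() => send({ type: 'workflowOpened', workflowId }), HEARTBEAT_INTERVAL_MS);
	// The returned stop function is called when the user leaves the workflow.
	return () => clearInterval(timer);
}
```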
---------
Co-authored-by: Tomi Turtiainen <10324676+tomi@users.noreply.github.com>
Validate first and last names before saving them to the database. This
should prevent security issues with unsanitized data that ends up in
emails.
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
When we upgraded typeorm in #5151, we switched from no pooling to a
default pool-size of 10. This somehow significantly deteriorated the
performance of queries when the application is under load.
Followup to #7566 | Story: https://linear.app/n8n/issue/PAY-926
### Manual workflow activation and deactivation
In a multi-main scenario, if the user manually activates or deactivates
a workflow, the process (whether leader or follower) that handles the
PATCH request and updates its internal state should send a message into
the command channel, so that all other main processes update their
internal state accordingly:
- Add to `ActiveWorkflows` if activating
- Remove from `ActiveWorkflows` if deactivating
- Remove and re-add to `ActiveWorkflows` if the update did not change
activation status.
After updating their internal state, if activating or deactivating, the
recipient main processes should push a message to all connected
frontends so that they can update their stores and reflect the value
in the UI.
### Workflow activation errors
On failure to activate a workflow, the main instance should record the
error in Redis - main instances should always pull activation errors
from Redis in a multi-main scenario.
### Leadership change
On leadership change...
- The old leader should stop pruning and the new leader should start
pruning.
- The old leader should remove trigger- and poller-based workflows and
the new leader should add them.
1. Reduce a lot of code duplication
2. Move more endpoints out of `Server.ts`
3. Move all query-param parsing and validation into a middleware to make
the route handlers simpler.
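A sketch of what such a middleware could look like (Express-style, with illustrative names and defaults):

```typescript
import type { NextFunction, Request, Response } from 'express';

// Parse and validate list query params once, so route handlers can rely on
// `listQueryOptions` instead of re-parsing them.
export function parseListQuery(req: Request, res: Response, next: NextFunction) {
	const take = Number(req.query.limit ?? 20);
	const skip = Number(req.query.offset ?? 0);

	if (Number.isNaN(take) || Number.isNaN(skip) || take < 1 || skip < 0) {
		res.status(400).json({ message: 'Invalid pagination parameters' });
		return;
	}

	(req as Request & { listQueryOptions?: { take: number; skip: number } }).listQueryOptions = { take, skip };
	next();
}
```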
This PR:
- Creates `InvitationController`
- Moves `POST /users` to `POST /invitations` and moves related tests to
`invitations.api.tests`
- Moves `POST /users/:id` to `POST /invitations/:id/accept` and moves
related tests to `invitations.api.tests`
- Adjusts FE to use new endpoints
- Moves all the invitation logic to the `UserService`
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
Github issue / Community forum post (link here to close automatically):
---------
Co-authored-by: Giulio Andreini <g.andreini@gmail.com>
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
This change expands on the command channel communication recently introduced
between the main instance(s) and the workers. The frontend gets a
new menu entry "Workers" which will, when opened, trigger a regular call
to getStatus from the workers. The workers then respond via their
response channel to the backend, which then pushes the status to the
frontend.
This introduces the use of ChartJS for metrics.
This feature is still in MVP state and thus disabled by default for the
moment.
- Enable two-way communication with web sockets
- Enable sending push messages to specific users
- Add a collaboration service for managing the active users of a workflow
Missing things:
- State is currently kept only in memory, making this not work in
multi-master setups
- Removing a user from active users in situations where they go inactive
or we miss the "workflow closed" message
- I think a timer-based solution for this would cover most edge cases,
i.e. have the FE ping every X minutes, and the BE removes the user unless it
has received a ping within Y minutes, where Y > X (sketched after this list)
- FE changes to be added later by @MiloradFilipovic
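A sketch of that timer-based cleanup on the back end, with an assumed TTL:

```typescript
// The BE records the last ping per user/workflow and drops entries that have
// not pinged within the TTL. The TTL value is an assumption.
const ACTIVITY_TTL_MS = 15 * 60 * 1000; // Y

const lastSeen = new Map<string, number>(); // key: `${workflowId}:${userId}`

export function recordActivity(workflowId: string, userId: string) {
	lastSeen.set(`${workflowId}:${userId}`, Date.now());
}

export function removeInactiveUsers() {
	const now = Date.now();
	for (const [key, seenAt] of lastSeen) {
		if (now - seenAt > ACTIVITY_TTL_MS) lastSeen.delete(key);
	}
}

setInterval(removeInactiveUsers, 60_000);
```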
Github issue / Community forum post (link here to close automatically):
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
Story: https://linear.app/n8n/issue/PAY-926
This PR coordinates workflow activation on instance startup and on
leadership change in a multi-main scenario in the internal API. Part 3
on manual workflow activation and deactivation will be a separate PR.
### Part 1: Instance startup
In a multi-main scenario, on starting an instance...
- [x] If the instance is the leader, it should add webhooks, triggers
and pollers.
- [x] If the instance is the follower, it should not add webhooks,
triggers or pollers.
- [x] Unit tests.
### Part 2: Leadership change
In a multi-main scenario, if the main instance leader dies…
- [x] The new main instance leader must activate all trigger- and
poller-based workflows, excluding webhook-based workflows.
- [x] The old main instance leader must deactivate all trigger- and
poller-based workflows, excluding webhook-based workflows.
- [x] Unit tests.
To test, start two instances and check behavior on startup and
leadership change:
```
EXECUTIONS_MODE=queue N8N_LEADER_SELECTION_ENABLED=true N8N_LICENSE_TENANT_ID=... N8N_LICENSE_ACTIVATION_KEY=... N8N_LOG_LEVEL=debug npm run start
EXECUTIONS_MODE=queue N8N_LEADER_SELECTION_ENABLED=true N8N_LICENSE_TENANT_ID=... N8N_LICENSE_ACTIVATION_KEY=... N8N_LOG_LEVEL=debug N8N_PORT=5679 npm run start
```
This PR ensures `MultiMainInstancePublisher` is initialized before
checking if the instance is leader or follower. Followers skip license
init, license check, and pruning start and stop.
Github issue / Community forum post (link here to close automatically):
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <netroy@users.noreply.github.com>
To help debug possible issues in startup and migrations, log the
executed migrations with log level 'info' instead of 'debug'.
Github issue / Community forum post (link here to close automatically):
Due to a change, during the credentials import command, the core's
Credential object is being called through its prototype. This caused the
Credential's cipher variable to not be set, so no cipher service was
available during import. This PR catches this edge case and fixes it.
https://linear.app/n8n/issue/PAY-933/set-up-leader-selection-for-multiple-main-instances
- [x] Set up new envs
- [x] Add config and license checks
- [x] Implement `MultiMainInstancePublisher`
- [x] Expand `RedisServicePubSubPublisher` to support
`MultiMainInstancePublisher`
- [x] Init `MultiMainInstancePublisher` on startup and destroy on
shutdown
- [x] Add to sandbox plans
- [x] Test manually
Note: This is only for setup - coordinating in reaction to leadership
changes will come in later PRs.
Github issue / Community forum post (link here to close automatically):
---------
Signed-off-by: Oleg Ivaniv <me@olegivaniv.com>
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
This PR allows users to configure the settings passed to Bull, possibly
reducing `maxStalledCount` errors and other issues that usually happen
either when a worker crashes or when the event loop is very busy.
Increasing the lease time and the `maxStalledCount` setting might improve UX.
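A sketch of passing such settings to Bull; the env var names here are illustrative, while `lockDuration` and `maxStalledCount` are real Bull settings:

```typescript
import Bull from 'bull';

// Expose Bull's advanced settings through env vars so deployments can tune them.
const queue = new Bull('jobs', {
	redis: { host: process.env.QUEUE_BULL_REDIS_HOST ?? 'localhost' },
	settings: {
		// How long a worker may hold a job lock before it is considered stalled.
		lockDuration: Number(process.env.QUEUE_WORKER_LOCK_DURATION ?? 30_000),
		// How many times a job may be marked as stalled before failing for good.
		maxStalledCount: Number(process.env.QUEUE_WORKER_MAX_STALLED_COUNT ?? 1),
	},
});
```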
Github issue / Community forum post (link here to close automatically):
This PR converts the hard-deletion interval to a timeout:
- to prevent the interval from not being restored when hard deletion
throws, and
- to prevent a long-running hard deletion from leading to duplicate
deletions.
Since we do not store which executions produced binary data, for pruning
on S3 we need to query for binary data items for each execution in order
to delete them. To minimize requests to S3, allow the user to skip
pruning requests when setting TTL at bucket level.
This change ensures that things like `encryptionKey` and `instanceId`
are always available directly where they are needed, instead of passing
them around throughout the code.
This is related to an issue with how Bull handles stalled jobs, see
https://github.com/OptimalBits/bull/issues/1415 for reference.
CPU-intensive workflows can in certain cases take a long time to finish,
thereby blocking the thread and causing the Bull queue to think the job
has stalled, even though it finished successfully. In these cases the
error handling could then overwrite the successful execution data with
the error message.
In a rare edge case an undefined queue could be returned - this should
not happen, and an error is now thrown when it does.
We also use the opportunity to remove a cyclic dependency from the Queue.
This fixes a bug in the pruning (soft-delete). The pruning was a bit too
aggressive, as it also pruned executions that weren't in an end state
yet. This only becomes an issue if there are long-running executions
(e.g. workflow with Wait node) or the prune parameters are set to keep
only a tiny number of executions.
This PR adds a message for queue mode which triggers an external secrets
provider reload inside the workers if the configuration has changed on
the main instance.
It also refactors some of the message handler code to remove cyclic
dependencies, and removes unnecessary duplicate Redis clients inside
services (possible now that the cyclic deps are gone).
Depends on https://github.com/n8n-io/n8n/pull/7220 | Story:
[PAY-840](https://linear.app/n8n/issue/PAY-840/introduce-object-store-service-and-manager-for-binary-data)
This PR introduces an object store service for Enterprise edition. Note
that the service is tested but currently unused - it will be integrated
soon as a binary data manager, and later for execution data.
`amazonaws.com` in the host is temporarily hardcoded until we integrate
the service and test against AWS, Cloudflare and Backblaze, in the next
PR.
This is ready for review - the PR it depends on is approved and waiting
for CI.
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
All commands sent between the main instance and workers need to contain a
sender ID to prevent senders from reacting to their own messages, causing
loops.
This PR makes sure all sent messages contain a sender ID by default as
part of constructing a sending Redis client.
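A sketch of the receiving side; the message shape and the source of the instance ID are assumptions:

```typescript
// Every command message carries the sender's instance ID so receivers can
// ignore messages they published themselves and avoid loops.
interface RedisCommandMessage {
	command: string;
	senderId: string;
	payload?: unknown;
}

const ownInstanceId = process.env.N8N_INSTANCE_ID ?? 'main-1'; // assumed source of the ID

function handleCommandMessage(raw: string) {
	const message = JSON.parse(raw) as RedisCommandMessage;
	if (message.senderId === ownInstanceId) return; // our own message - skip it
	// ...react to the command
}
```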
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
Depends on: https://github.com/n8n-io/n8n/pull/7195 | Story:
[PAY-837](https://linear.app/n8n/issue/PAY-837/implement-object-store-manager-for-binary-data)
This PR includes `workflowId` in binary data writes so that the S3
manager can support this filepath structure
`/workflows/{workflowId}/executions/{executionId}/binaryData/{binaryFilename}`
to easily delete binary data for workflows. Also all binary data service
and manager methods that take `workflowId` and `executionId` are made
consistent in arg order.
Note: `workflowId` is included in filesystem mode for compatibility with
the common interface, but `workflowId` will remain unused by filesystem
mode until we decide to restructure how this mode stores data.
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
Story: [PAY-846](https://linear.app/n8n/issue/PAY-846) | Related:
https://github.com/n8n-io/n8n/pull/7225
For the S3 backend for external storage of binary data and execution
data, the `getAsStream` method in the binary data manager interface used
by FS and S3 will need to become async. This is a breaking change for
nodes-base.
Story: https://linear.app/n8n/issue/PAY-839
This is a longstanding bug, fixed now so that the S3 backend for binary
data can use execution IDs as part of the filename.
To reproduce:
1. Set up a workflow with a POST Webhook node that accepts binary data.
2. Activate the workflow and call it sending a binary file, e.g. `curl
-X POST -F "file=@/path/to/binary/file/test.jpg"
http://localhost:5678/webhook/uuid`
3. Check `~/.n8n/binaryData`. The binary data and metadata files will be
missing the execution ID, e.g. `11869055-83c4-4493-876a-9092c4708b9b`
instead of `39011869055-83c4-4493-876a-9092c4708b9b`.
Depends on: #7092 | Story:
[PAY-768](https://linear.app/n8n/issue/PAY-768)
This PR:
- Generalizes the `IBinaryDataManager` interface.
- Adjusts `Filesystem.ts` to satisfy the interface.
- Sets up an S3 client stub to be filled in in the next PR.
- Turns `BinaryDataManager` into an injectable service.
- Adjusts the config schema and adds new validators.
Note that the PR looks large but all the main changes are in
`packages/core/src/binaryData`.
Out of scope:
- `BinaryDataManager` (now `BinaryDataService`) and `Filesystem.ts` (now
`fs.client.ts`) were slightly refactored for maintainability, but fully
overhauling them is **not** the focus of this PR, which is meant to
clear the way for the S3 implementation. Future improvements for these
two should include setting up a backwards-compatible dir structure that
makes it easier to locate binary data files to delete, removing
duplication, simplifying cloning methods, using integers for binary data
size instead of `prettyBytes()`, writing tests for existing binary data
logic, etc.
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
Github issue / Community forum post (link here to close automatically):
---------
Co-authored-by: Omar Ajoue <krynble@gmail.com>
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
Based on #7065 | Story: https://linear.app/n8n/issue/PAY-771
n8n in filesystem mode marks binary data for deletion on manual execution
deletion, on unsaved execution completion, and on every execution
pruning cycle. We later prune binary data in a separate cycle via these
marker files, based on the configured TTL. In the context of introducing
an S3 client to manage binary data, the filesystem mode's mark-and-prune
setup is too tightly coupled to the general binary data management
client interface.
This PR...
- Ensures the deletion of an execution causes the deletion of any binary
data associated to it. This does away with the need for binary data TTL
and simplifies the filesystem mode's mark-and-prune setup.
- Refactors all execution deletions (including pruning) to cause soft
deletions, hard-deletes soft-deleted executions based on the existing
pruning config, and adjusts execution endpoints to filter out
soft-deleted executions. This reduces DB load, and keeps binary data
around long enough for users to access it when building workflows with
unsaved executions.
- Moves all execution pruning work from an execution lifecycle hook to
`execution.repository.ts`. This keeps related logic in a single place.
- Removes all marking logic from the binary data manager. This
simplifies the interface that the S3 client will meet.
- Adds basic sanity-check tests to pruning logic and execution deletion.
Out of scope:
- Improving existing pruning logic.
- Improving existing execution repository logic.
- Adjusting dir structure for filesystem mode.
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
This PR implements the updated license SDK so that worker and webhook
instances do not auto-renew licenses any more.
Instead, they receive a `reloadLicense` command via the Redis client
that fetches the updated license after it was saved on the main
instance.
This also contains some refactoring, moving the Redis sub and pub
clients into the event bus directly, to prevent cyclic dependency
issues.
This PR adds a new field to the SourceControlPreferences as well as to the
POST parameters for the `source-control/preferences` and
`source-control/generate-key-pair` endpoints. Both now accept an
optional string parameter `keyGeneratorType` of `'ed25519' | 'rsa'`.
When the `source-control/generate-key-pair` endpoint is called with the
parameter set, it will also update the stored preferences accordingly
(so that in the future new keys will use the same method).
By default, ed25519 is used. The default may be changed using a new
environment parameter:
`N8N_SOURCECONTROL_DEFAULT_SSH_KEY_TYPE` which can be `rsa` or `ed25519`
RSA keys are generated with a length of 4096 bits.
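A sketch of the key generation using Node's `crypto` module; the PEM encodings shown here are illustrative (the stored key format may differ):

```typescript
import { generateKeyPairSync } from 'node:crypto';

type KeyGeneratorType = 'ed25519' | 'rsa';

// Generate a key pair of the requested type, defaulting to the value of
// N8N_SOURCECONTROL_DEFAULT_SSH_KEY_TYPE (ed25519 if unset).
function generateKeyPair(type: KeyGeneratorType) {
	if (type === 'rsa') {
		return generateKeyPairSync('rsa', {
			modulusLength: 4096,
			publicKeyEncoding: { type: 'spki', format: 'pem' },
			privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
		});
	}
	return generateKeyPairSync('ed25519', {
		publicKeyEncoding: { type: 'spki', format: 'pem' },
		privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
	});
}

const keyType = (process.env.N8N_SOURCECONTROL_DEFAULT_SSH_KEY_TYPE ?? 'ed25519') as KeyGeneratorType;
const { publicKey, privateKey } = generateKeyPair(keyType);
```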
# Motivation
In Queue mode, finished executions would cause the main instance to
always pull all execution data from the database, unflatten it and then
use it to send out event log events and telemetry events, as well as
required returns to Respond to Webhook nodes etc.
This could cause OOM errors when the data was large, since it had to be
fully unpacked and transformed on the main instance’s side, using up a
lot of memory (and time).
This PR attempts to limit this behaviour to only happen in those
required cases where the data has to be forwarded to some waiting
webhook, for example.
# Changes
Execution data is only required in cases where the active execution has
a `postExecutePromise` attached to it. These usually forward the data to
some other endpoint (e.g. a listening webhook connection).
By adding a helper `getPostExecutePromiseCount()`, we can decide that in
cases where there is nothing listening at all, there is no reason to
pull the data on the main instance.
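A sketch of the gating logic; `getPostExecutePromiseCount` is the helper mentioned above, while the surrounding names are illustrative:

```typescript
// The main instance only loads the full execution data when something is
// actually waiting for it.
async function handleWorkerJobFinished(
	executionId: string,
	getPostExecutePromiseCount: (executionId: string) => number,
	loadExecutionData: (executionId: string) => Promise<unknown>,
) {
	if (getPostExecutePromiseCount(executionId) === 0) {
		// Nothing (webhook response, manual-run listener) is waiting on the main
		// instance, so skip pulling and unflattening the potentially large data.
		return;
	}
	return await loadExecutionData(executionId);
}
```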
Previously, there would always be postExecutePromises because the
telemetry events were called. Now, these have been moved into the
workers, which have been given the various InternalHooks calls to their
hook function arrays, so they themselves issue these telemetry and event
calls.
This results in all event log messages now being logged on the worker’s
event log, with the worker’s eventbus being the one to send out
the events to destinations. The main event log does…pretty much nothing.
We are not logging executions on the main event log any more, because
this would require all events to be replicated 1:1 from the workers to
the main instance(s) (this IS possible and implemented, see the worker’s
`replicateToRedisEventLogFunction` - but it is not enabled to reduce the
amount of traffic over redis).
Partial events in the main log could confuse the recovery process and
would result in, ironically, the recovery corrupting the execution data
by considering them crashed.
# Refactor
I have also used the opportunity to reduce duplicate code and move some
of the hook functionality into
`packages/cli/src/executionLifecycleHooks/shared/sharedHookFunctions.ts`
in preparation for a future full refactor of the hooks
This PR adds new endpoints to the REST API:
`/orchestration/worker/status` and `/orchestration/worker/id`
Currently these just trigger the return of status / ids from the workers
via the redis back channel; this still needs to be handled and passed
through to the frontend.
It also adds the eventbus to each worker, and triggers a reload of those
eventbus instances when the configuration changes on the main instances.
Until https://github.com/n8n-io/n8n/pull/7061 we had an edge case where
running a manual unsaved workflow created an orphan execution, i.e. a
saved execution not pointing to any workflow. This execution is only
ever visible to the instance owner (even if triggered by a member), and
is wrongly stored as unfinished and crashed. This PR enforces at the DB
level that no such executions make it into the DB.
This is needed also for the S3 client, which will include the
`workflowId` in the path-like filename.
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
- For a saved execution, we write to disk binary data and metadata.
These two are only ever deleted via `POST /executions/delete`. No marker
file, so untouched by pruning.
- For an unsaved execution, we write to disk binary data, binary data
metadata, and a marker file at `/meta`. We later delete all three during
pruning.
- The third flow is legacy. Currently, if the execution is unsaved, we
actually store it in the DB while running the workflow and immediately
after the workflow is finished during the `onWorkflowPostExecute()` hook
we delete that execution, so the second flow applies. But formerly, we
did not store unsaved executions in the DB ("ephemeral executions") and
so we needed to write a marker file at `/persistMeta` so that, if the
ephemeral execution crashed after the step where binary data was stored,
we had a way to later delete its associated dangling binary data via a
second pruning cycle, and if the ephemeral execution succeeded, then we
immediately cleaned up the marker file at `/persistMeta` during the
`onWorkflowPostExecute()` hook.
This creation and cleanup at `/persistMeta` is still happening, but this
third flow no longer has a purpose, as we now store unsaved executions
in the DB and delete them immediately after. Hence the third flow can be
removed.
Github issue / Community forum post (link here to close automatically):
For the upcoming workflow history feature, we're creating the necessary
database tables.
This also changes the schema for Postgres so the versionId column is now
properly a UUID. The `USING` clause prevents losing data by converting
the existing strings to UUIDs.
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <netroy@users.noreply.github.com>