Depends on: #7092 | Story:
[PAY-768](https://linear.app/n8n/issue/PAY-768)
This PR:
- Generalizes the `IBinaryDataManager` interface.
- Adjusts `Filesystem.ts` to satisfy the interface.
- Sets up an S3 client stub to be filled in in the next PR.
- Turns `BinaryDataManager` into an injectable service.
- Adjusts the config schema and adds new validators.
Note that the PR looks large but all the main changes are in
`packages/core/src/binaryData`.
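For orientation, here is a rough sketch of the shape a mode-agnostic binary
data client interface could take (method names are illustrative assumptions,
not the actual `IBinaryDataManager` definition):
```
import type { Readable } from 'stream';

// Illustrative sketch only: a mode-agnostic client that both the
// filesystem client and a future S3 client could implement.
interface BinaryDataClient {
	init(): Promise<void>;
	store(executionId: string, data: Buffer | Readable): Promise<string>; // returns an identifier
	getAsBuffer(identifier: string): Promise<Buffer>;
	getAsStream(identifier: string): Promise<Readable>;
	deleteManyByExecutionIds(executionIds: string[]): Promise<void>;
}
```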
Out of scope:
- `BinaryDataManager` (now `BinaryDataService`) and `Filesystem.ts` (now
`fs.client.ts`) were slightly refactored for maintainability, but fully
overhauling them is **not** the focus of this PR, which is meant to
clear the way for the S3 implementation. Future improvements for these
two should include setting up a backwards-compatible dir structure that
makes it easier to locate binary data files to delete, removing
duplication, simplifying cloning methods, using integers for binary data
size instead of `prettyBytes()`, writing tests for existing binary data
logic, etc.
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
Github issue / Community forum post (link here to close automatically):
---------
Co-authored-by: Omar Ajoue <krynble@gmail.com>
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
This PR introduces a banner framework overhaul:
The first version of the banner framework was built to allow multiple
banners to be shown at the same time. That turned out not to be needed,
and keeping only one banner visible in such a setup proved pretty messy,
so this PR reworks the framework to render only one banner at a time,
based on [this priority
list](https://www.notion.so/n8n/Banner-stack-60948c4167c743718fde80d6745258d5?pvs=4#6afd052ec8d146a1b0fab8884a19add7)
assembled together with our product & design team.
### How to test banner stack:
1. Available banners and their priorities are registered
[here](f9f122d46d/packages/editor-ui/src/components/banners/BannerStack.vue (L14))
2. Banners are pushed to stack using `pushBannerToStack` action, for
example:
```
useUIStore().pushBannerToStack('TRIAL');
```
3. Try pushing different banners to the stack and check that only the one
with the highest priority shows up
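For reference, a minimal sketch of the selection behaviour being tested here
(priority values and store shape are made up for illustration; see
`BannerStack.vue` above for the real registration):
```
// Illustrative only: given the banners currently pushed to the stack,
// render just the one with the highest configured priority.
const bannerPriorities: Record<string, number> = {
	TRIAL_OVER: 260, // placeholder values, not the real priority list
	TRIAL: 150,
	EMAIL_CONFIRMATION: 50,
};

function topBanner(stack: string[]): string | undefined {
	return [...stack].sort(
		(a, b) => (bannerPriorities[b] ?? 0) - (bannerPriorities[a] ?? 0),
	)[0];
}

// topBanner(['EMAIL_CONFIRMATION', 'TRIAL_OVER']) === 'TRIAL_OVER'
```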
### How to test the _Email confirmation_ banner:
1. Comment out [this
line](b80d2e3bec/packages/editor-ui/src/stores/cloudPlan.store.ts (L59)),
so cloud data is always fetched
2. Create an
[override](https://chrome.google.com/webstore/detail/resource-override/pkoacgokdfckfpndoffpifphamojphii)
(URL -> File) that will serve user data that triggers this banner:
- **URL**: `*/rest/cloud/proxy/admin/user/me`
- **File**:
```
{
  "confirmed": false,
  "id": 1,
  "email": "test@test.com",
  "username": "test"
}
```
3. Run n8n
Based on #7065 | Story: https://linear.app/n8n/issue/PAY-771
In filesystem mode, n8n marks binary data for deletion on manual execution
deletion, on unsaved execution completion, and on every execution
pruning cycle. We later prune binary data in a separate cycle via these
marker files, based on the configured TTL. In the context of introducing
an S3 client to manage binary data, the filesystem mode's mark-and-prune
setup is too tightly coupled to the general binary data management
client interface.
This PR...
- Ensures the deletion of an execution causes the deletion of any binary
data associated with it. This does away with the need for a binary data
TTL and simplifies the filesystem mode's mark-and-prune setup.
- Refactors all execution deletions (including pruning) to be soft
deletions, hard-deletes soft-deleted executions based on the existing
pruning config, and adjusts execution endpoints to filter out
soft-deleted executions (see the sketch after this list). This reduces
DB load, and keeps binary data around long enough for users to access it
when building workflows with unsaved executions.
- Moves all execution pruning work from an execution lifecycle hook to
`execution.repository.ts`. This keeps related logic in a single place.
- Removes all marking logic from the binary data manager. This
simplifies the interface that the S3 client will meet.
- Adds basic sanity-check tests to pruning logic and execution deletion.
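The sketch below shows the general soft-deletion pattern referenced in the
list above, using TypeORM conventions; entity and repository details are
simplified assumptions, not the actual n8n code.
```
import 'reflect-metadata';
import { Entity, PrimaryColumn, DeleteDateColumn, LessThan } from 'typeorm';
import type { Repository } from 'typeorm';

// Illustrative only: executions gain a deletedAt column that marks
// soft-deletion; regular queries skip soft-deleted rows automatically.
@Entity()
class ExecutionEntity {
	@PrimaryColumn()
	id!: string;

	@DeleteDateColumn()
	deletedAt!: Date | null;
}

// Deleting or pruning an execution only soft-deletes it at first.
async function softDeleteExecutions(repo: Repository<ExecutionEntity>, ids: string[]) {
	await repo.softDelete(ids); // sets deletedAt instead of removing rows
}

// A later cycle hard-deletes rows whose soft-deletion is older than the
// pruning cutoff; the associated binary data would be removed alongside.
async function hardDeleteSoftDeleted(repo: Repository<ExecutionEntity>, cutoff: Date) {
	const stale = await repo.find({
		select: ['id'],
		where: { deletedAt: LessThan(cutoff) },
		withDeleted: true, // include soft-deleted rows, which find() hides by default
	});
	if (stale.length) await repo.delete(stale.map((e) => e.id));
}
```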
Out of scope:
- Improving existing pruning logic.
- Improving existing execution repository logic.
- Adjusting dir structure for filesystem mode.
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
Github issue / Community forum post (link here to close automatically):
https://github.com/n8n-io/n8n/pull/6348
---------
Co-authored-by: Giulio Andreini <g.andreini@gmail.com>
Co-authored-by: Marcus <marcus@n8n.io>
This PR implements the updated license SDK so that worker and webhook
instances do not auto-renew licenses any more.
Instead, they receive a `reloadLicense` command via the Redis client,
which fetches the updated license after it has been saved on the main
instance.
This also contains some refactoring that moves the Redis pub and sub
clients directly into the event bus, to prevent cyclic dependency
issues.
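For illustration, the worker-side handling could look roughly like this
(channel name, payload shape, and helper names are assumptions, not the
actual pub/sub implementation):
```
import Redis from 'ioredis';

// Illustrative only: a worker/webhook instance subscribes to the command
// channel and re-fetches the license when told to.
const COMMAND_CHANNEL = 'n8n.commands'; // assumed channel name

export async function listenForLicenseCommands(reloadLicense: () => Promise<void>) {
	const subscriber = new Redis();
	await subscriber.subscribe(COMMAND_CHANNEL);
	subscriber.on('message', async (_channel, message) => {
		const { command } = JSON.parse(message) as { command: string };
		if (command === 'reloadLicense') {
			// fetch the updated license that was saved on the main instance
			await reloadLicense();
		}
	});
}
```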
This PR adds a new field to the SourceControlPreferences, as well as to
the POST parameters for the `source-control/preferences` and
`source-control/generate-key-pair` endpoints. Both now accept an
optional string parameter `keyGeneratorType` of `'ed25519' | 'rsa'`.
Calling the `source-control/generate-key-pair` endpoint with the
parameter set will also update the stored preferences accordingly
(so that in the future new keys will use the same method).
By default, ed25519 is used. The default may be changed using a new
environment variable,
`N8N_SOURCECONTROL_DEFAULT_SSH_KEY_TYPE`, which can be `rsa` or `ed25519`.
RSA keys are generated with a length of 4096 bits.
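A minimal sketch of generating either key type with Node's built-in `crypto`
module, just to illustrate the two options; the actual implementation may use
a different library:
```
import { generateKeyPairSync } from 'crypto';

type KeyGeneratorType = 'ed25519' | 'rsa';

// Illustrative only: generate a key pair of the requested type, falling back
// to the type configured via N8N_SOURCECONTROL_DEFAULT_SSH_KEY_TYPE.
function generateSshKeyPair(type?: KeyGeneratorType) {
	const keyType =
		type ??
		(process.env.N8N_SOURCECONTROL_DEFAULT_SSH_KEY_TYPE as KeyGeneratorType | undefined) ??
		'ed25519';

	const encoding = {
		publicKeyEncoding: { type: 'spki', format: 'pem' },
		privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
	} as const;

	return keyType === 'rsa'
		? generateKeyPairSync('rsa', { modulusLength: 4096, ...encoding }) // 4096-bit RSA
		: generateKeyPairSync('ed25519', encoding);
}
```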
# Motivation
In Queue mode, finished executions would cause the main instance to
always pull all execution data from the database, unflatten it and then
use it to send out event log and telemetry events, as well as to return
data to Respond to Webhook nodes, etc.
This could cause OOM errors when the data was large, since it had to be
fully unpacked and transformed on the main instance’s side, using up a
lot of memory (and time).
This PR attempts to limit this behaviour to the cases that actually
require it, for example when the data has to be forwarded to a waiting
webhook.
# Changes
Execution data is only required in cases where the active execution has
a `postExecutePromise` attached to it. These usually forward the data to
some other endpoint (e.g. a listening webhook connection).
By adding a helper `getPostExecutePromiseCount()`, we can decide that in
cases where there is nothing listening at all, there is no reason to
pull the data on the main instance.
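Roughly, the resulting decision on the main instance looks like this
(everything except `getPostExecutePromiseCount()` is a simplified
assumption):
```
// Illustrative only: skip loading execution data when nothing is waiting.
interface ActiveExecutions {
	getPostExecutePromiseCount(executionId: string): number;
	resolveResponsePromise(executionId: string, data: unknown): void; // assumed helper
}

async function handleWorkerFinishedExecution(
	activeExecutions: ActiveExecutions,
	loadFullExecution: (executionId: string) => Promise<unknown>, // assumed helper
	executionId: string,
) {
	// Nothing (e.g. no Respond to Webhook listener) is waiting for the result,
	// so there is no reason to pull and unflatten the data on the main instance.
	if (activeExecutions.getPostExecutePromiseCount(executionId) === 0) return;

	const data = await loadFullExecution(executionId);
	activeExecutions.resolveResponsePromise(executionId, data);
}
```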
Previously, there were always postExecutePromises because the telemetry
events were called on the main instance. These calls have now been moved
into the workers: the various InternalHooks calls were added to the
workers' hook function arrays, so the workers themselves issue the
telemetry and event calls.
As a result, all event log messages are now logged on the worker’s
event log, and the worker’s eventbus is the one sending the events out
to destinations. The main event log does… pretty much nothing.
We are not logging executions on the main event log any more, because
this would require all events to be replicated 1:1 from the workers to
the main instance(s) (this IS possible and implemented, see the worker’s
`replicateToRedisEventLogFunction` - but it is not enabled to reduce the
amount of traffic over redis).
Partial events in the main log could confuse the recovery process and
would, ironically, result in the recovery corrupting the execution data
by considering those executions crashed.
# Refactor
I have also used the opportunity to reduce duplicate code and move some
of the hook functionality into
`packages/cli/src/executionLifecycleHooks/shared/sharedHookFunctions.ts`
in preparation for a future full refactor of the hooks.
Context
When a user attempts to interact with a foreground action inside an
entity card (workflow, credential, community node, logging destination),
they might accidentally open that entity instead.
For these card components, actions are always placed on the right side.
A/C
The area around the right "column" of entity cards (workflow, credential,
community node, logging destination) should not be a hoverable area that
opens the entity when clicked. This area is roughly highlighted in orange
in the screenshot below.
![image](https://github.com/n8n-io/n8n/assets/5410822/0916bcd5-e972-4367-a862-41d2086a2334)
Github issue / Community forum post (link here to close automatically):
Since the change that allowed workflow IDs to become Nano ID-format
strings, this input broke.
This PR allows all characters that can appear in workflow IDs.
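For context, Nano IDs use the alphabet `A-Za-z0-9_-`, so the validation needs
to accept something along these lines (a sketch, not the exact pattern used in
the PR):
```
// Accept both legacy numeric IDs and Nano ID-style workflow IDs.
const workflowIdPattern = /^[A-Za-z0-9_-]+$/;

workflowIdPattern.test('4VhD9PhHBgW7q8jZ'); // true (Nano ID format)
workflowIdPattern.test('123'); // true (legacy numeric ID)
workflowIdPattern.test('abc def'); // false
```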
---------
Co-authored-by: Iván Ovejero <ivov.src@gmail.com>
This PR adds new endpoints to the REST API:
`/orchestration/worker/status` and `/orchestration/worker/id`
Currently these just trigger the return of status / ids from the workers
via the Redis back channel; this still needs to be handled and passed
through to the frontend.
It also adds the eventbus to each worker, and triggers a reload of those
eventbus instances when the configuration changes on the main instances.
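On the main instance's side, the new endpoints essentially just publish a
command over Redis; a sketch of that idea (channel name and payload shape are
assumptions):
```
import Redis from 'ioredis';

// Illustrative only: ask all workers for their status over the Redis back
// channel. The replies arrive asynchronously on a response channel and still
// need to be collected and passed through to the frontend.
const COMMAND_CHANNEL = 'n8n.commands'; // assumed channel name

export async function requestWorkerStatus() {
	const publisher = new Redis();
	await publisher.publish(COMMAND_CHANNEL, JSON.stringify({ command: 'getStatus' }));
	publisher.disconnect();
}
```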
Until https://github.com/n8n-io/n8n/pull/7061 we had an edge case where
running a manual unsaved workflow created an orphan execution, i.e. a
saved execution not pointing to any workflow. Such an execution is only
ever visible to the instance owner (even if triggered by a member), and
is wrongly stored as unfinished and crashed. This PR enforces at the DB
level that no such execution can make it into the DB.
This is needed also for the S3 client, which will include the
`workflowId` in the path-like filename.
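A minimal sketch of the kind of constraint this introduces, using TypeORM
conventions (names and options are simplified assumptions, not the actual
entities or migration):
```
import 'reflect-metadata';
import { Entity, PrimaryColumn, Column, ManyToOne, JoinColumn } from 'typeorm';

@Entity()
class WorkflowEntity {
	@PrimaryColumn()
	id!: string;
}

// Illustrative only: every stored execution must reference an existing
// workflow, so orphan executions can no longer make it into the DB.
@Entity()
class ExecutionEntity {
	@PrimaryColumn()
	id!: string;

	@Column({ nullable: false })
	workflowId!: string;

	@ManyToOne(() => WorkflowEntity, { onDelete: 'CASCADE' }) // cascade is an assumption
	@JoinColumn({ name: 'workflowId' })
	workflow!: WorkflowEntity;
}
```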
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
- For a saved execution, we write binary data and binary data metadata
to disk. These two are only ever deleted via `POST /executions/delete`.
There is no marker file, so they are untouched by pruning.
- For an unsaved execution, we write binary data, binary data metadata,
and a marker file at `/meta` to disk. We later delete all three during
pruning.
- The third flow is legacy. Currently, if the execution is unsaved, we
actually store it in the DB while running the workflow and delete it
immediately after the workflow finishes, during the
`onWorkflowPostExecute()` hook, so the second flow applies. But
formerly, we did not store unsaved executions in the DB ("ephemeral
executions"), so we needed to write a marker file at `/persistMeta`: if
the ephemeral execution crashed after the step where binary data was
stored, the marker gave us a way to later delete its associated dangling
binary data via a second pruning cycle, and if the ephemeral execution
succeeded, we immediately cleaned up the marker file at `/persistMeta`
during the `onWorkflowPostExecute()` hook.
This creation and cleanup at `/persistMeta` is still happening, but this
third flow no longer has a purpose, as we now store unsaved executions
in the DB and delete them immediately after. Hence the third flow can be
removed.
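For reference, the mark-and-prune cycle for the second flow boils down to
something like the sketch below (marker naming and the deletion helper are
assumptions for illustration).
```
import { readdir, unlink } from 'fs/promises';
import { join } from 'path';

// Illustrative only: delete binary data for unsaved executions whose marker
// file in /meta is older than the configured TTL.
async function pruneMarkedBinaryData(
	metaDir: string,
	ttlMinutes: number,
	deleteBinaryDataByExecutionId: (executionId: string) => Promise<void>, // assumed helper
) {
	const cutoff = Date.now() - ttlMinutes * 60_000;

	for (const marker of await readdir(metaDir)) {
		const [executionId, timestamp] = marker.split('_'); // assumed marker naming
		if (Number(timestamp) > cutoff) continue; // TTL not reached yet

		await deleteBinaryDataByExecutionId(executionId); // binary data + metadata
		await unlink(join(metaDir, marker)); // the marker itself
	}
}
```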