Due to a change, the credentials import command calls the core's
Credential object through its prototype. As a result, the Credential's
cipher variable was not set, so no cipher service was available during
import. This PR catches this edge case and fixes it.
https://linear.app/n8n/issue/PAY-933/set-up-leader-selection-for-multiple-main-instances
- [x] Set up new envs
- [x] Add config and license checks
- [x] Implement `MultiMainInstancePublisher`
- [x] Expand `RedisServicePubSubPublisher` to support
`MultiMainInstancePublisher`
- [x] Init `MultiMainInstancePublisher` on startup and destroy on
shutdown
- [x] Add to sandbox plans
- [x] Test manually
Note: This is only for setup - coordinating in reaction to leadership
changes will come in later PRs.
---------
Signed-off-by: Oleg Ivaniv <me@olegivaniv.com>
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
This PR allows users to configure the settings passed to Bull,
potentially reducing the `maxStalledCount` errors and other issues that
usually happen either when a worker crashes or when the event loop is
very busy. Increasing the lease time and the `maxStalledCount` setting
might improve UX.
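A minimal sketch of passing such settings to a Bull queue (the env variable names and defaults below are assumptions for illustration, not necessarily the ones this PR introduces):
```
import Bull from 'bull';

// Hypothetical env-driven overrides; names and defaults are illustrative only.
const lockDuration = Number(process.env.QUEUE_WORKER_LOCK_DURATION ?? 30_000);
const maxStalledCount = Number(process.env.QUEUE_WORKER_MAX_STALLED_COUNT ?? 1);

const jobQueue = new Bull('jobs', {
	redis: { host: 'localhost', port: 6379 },
	settings: {
		lockDuration, // how long a worker holds the job lease before it counts as stalled
		maxStalledCount, // how many times a job may be marked stalled before it fails permanently
	},
});
```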
Fixes the issue that currently no Switch node can be created by, for
example, pressing + on the parent node, or via Tab when another node is
already selected.
---------
Signed-off-by: Oleg Ivaniv <me@olegivaniv.com>
Co-authored-by: Oleg Ivaniv <me@olegivaniv.com>
This PR converts the hard-deletion interval to a timeout:
- to prevent the interval from not being restored when hard deletion
throws, and
- to prevent a long-running hard deletion from leading to duplicate
deletions.
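Schematically, the interval-to-timeout conversion looks like this (the helper and constant names are hypothetical, not the actual n8n internals):
```
const HARD_DELETE_INTERVAL = 15 * 60 * 1000; // illustrative value

// Hypothetical stand-in for the real hard-deletion routine.
async function hardDeleteSoftDeletedExecutions(): Promise<void> {
	// ...delete executions whose soft-deletion grace period has expired
}

// Before: setInterval(hardDeleteSoftDeletedExecutions, HARD_DELETE_INTERVAL);
// A slow run can overlap the next tick (duplicate deletions), and a throw in
// the callback leaves no good place to recover the schedule.

// After: re-arm a timeout only once the previous run has fully settled.
async function hardDeletionCycle(): Promise<void> {
	try {
		await hardDeleteSoftDeletedExecutions();
	} catch (error) {
		console.error('Hard deletion failed, retrying next cycle', error);
	} finally {
		setTimeout(() => void hardDeletionCycle(), HARD_DELETE_INTERVAL);
	}
}
```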
Since we do not store which executions produced binary data, for pruning
on S3 we need to query for binary data items for each execution in order
to delete them. To minimize requests to S3, allow the user to skip these
pruning requests when a TTL is set at the bucket level.
This change ensures that things like `encryptionKey` and `instanceId`
are always available directly where they are needed, instead of passing
them around throughout the code.
extracted out of #7336
---------
Co-authored-by: Jan Oberhauser <jan.oberhauser@gmail.com>
Co-authored-by: OlegIvaniv <me@olegivaniv.com>
Co-authored-by: Jan Oberhauser <janober@users.noreply.github.com>
Co-authored-by: Val <68596159+valya@users.noreply.github.com>
Co-authored-by: Alex Grozav <alex@grozav.com>
Co-authored-by: Deborah <deborah@starfallprojects.co.uk>
Co-authored-by: Jesper Bylund <mail@jesperbylund.com>
Co-authored-by: Jon <jonathan.bennetts@gmail.com>
Sometimes canvas selection stops working after users interact with node
action buttons (for example, if a node is moved by dragging one of the
buttons).
NOTE: The ticket number in the branch name is wrong; this fixes ADO-1226
This adds support for
1. custom delimiters
2. reading offsets, to avoid having to read a large CSV all at once
3. excluding the byte-order mark
NODE-861
#7443
This is related to an issue with how Bull handles stalled jobs, see
https://github.com/OptimalBits/bull/issues/1415 for reference.
CPU-intensive workflows can in certain cases take a long while to finish
up, thereby blocking the thread and causing the Bull queue to think the
job has stalled, even though it finished successfully. In these cases
the error handling could then overwrite the successful execution data
with the error message.
## Issue
In community edition, clicking on "View plans" button on "Settings" ->
"Usage and plan" page (e.g. http://127.0.0.1:5678/settings/usage) opens
two new tabs with n8n pricing (one of them with UTM tracking, another
without).
This was introduced in #6317, when the click handler of the "View plans"
link container [started
calling](https://github.com/n8n-io/n8n/pull/6317/files#diff-0bf26afac8a06e03b3d39d0668f22408859355b585a9ab420800c125e33f0691R109)
`uiStore.goToUpgrade(...)`, which opens n8n pricing in a new tab, while
the browser opens another tab for the link URL.
The simplest fix, implemented in this PR, is to prevent default event
handling (so that, after `onViewPlans` is called, the browser will not
additionally process the click as a click on the link), similarly to how
it is prevented on some other pages. It only solves the immediate
problem of the browser opening two new tabs on clicking "View plans".
Note that **I didn't implement any tests for the changed behavior**,
because it was not covered by tests before, and I couldn't quite figure
out how to cover it now within the existing test approach (testing that
only one new tab is opened would likely require writing entirely new
tests relying on puppeteer; as far as I can see, no existing `editor-ui`
tests do anything like that). I'll gladly implement tests for the new
behavior if you tell me how you would like them to look.
The existing tests for `editor-ui` still pass; I didn't run tests for
other subpackages (see "additional contribution notes" below).
## Additional notes on the issue.
I'm not sure that the change in this PR is the correct long-term
solution for the issue, because the URLs for these two methods (custom
click handler for link container and default link handling) are slightly
different:
* Custom click handler calls `useTelemetryStore().track('User clicked
upgrade CTA', ...)`; then calls `sendUsageTelemetry('view_plans')` (it
feels weird that two calls to telemetry are made); then opens new tab
for `https://n8n.io/pricing?utm_campaign=open&source=usage_page` (note
that prior to #7316 the second call to telemetry was done after the new
tab is opened, not before);
* Link itself refers to another page, with slightly different tracking
parameters:
`https://subscription.n8n.io/?instanceid=[REDACTED]&version=1.10.0&callback=http%3A%2F%2F127.0.0.1%3A5678%2Fsettings%2Fusage&source=usage_page`;
but this page redirects to `https://n8n.io/pricing/`.
It is not clear which of the two is the right way of doing things,
although `goToUpgrade` is called in 20 places throughout `editor-ui`,
while `viewPlansUrl`, as far as I can see, is used for this button only.
Additionally, since Settings pages don't work without JS anyway, I can
only think of two separate scenarios where any tab would be opened:
* Left-clicking the link (or Ctrl-clicking, or pressing Space or Enter
when the link is focused, or tapping): previously, both the custom click
handler was executed and the link's `href` was opened; in this PR, only
the custom click handler is executed (similarly to how it is done in the
other places where `goToUpgrade` is called);
* Right-clicking (or long tapping, or opening context menu in any other
way) and selecting "open link in new tab" (or similar): opens a new tab
for URL from the `href` attribute (and does not send any telemetry at
all).
I'd say that the better permanent solution would probably be to get rid
of one of these methods entirely, and only rely on another in all cases
(for me, as an outside contributor, the preferred way would be for
custom click handler to only send telemetry, while letting my browser
handle the actual navigation). However, that would be a large change,
much more than one line in this PR.
Additionally, other similar places where `goToUpgrade` is currently
called (directly or indirectly) would also need to be adapted for this
change.
## Additional contribution notes
As a first-time contributor, I've encountered several things I didn't
expect; I'm not sure if they should be expected or are issues:
1. Tests for the entire monorepo consume a lot of RAM; 20GB free RAM was
not enough, so I couldn't run tests for the entire monorepo and had to
only run them for `packages/editor-ui`;
2. Linting is very slow; `pnpm lint` in `packages/editor-ui` takes ten
minutes to complete;
3. It seems that types are not checked. Code OSS highlights numerous
errors in code files: for example, `'debug'` is incompatible with
`CloudUpdateLinkSourceType` expected by `goToUpgrade` here:
3e7a4d3b2c/packages/editor-ui/src/composables/useExecutionDebugging.ts (L128)
However, I'm not getting any errors during build. There is a `typecheck`
script defined in `package.json`, but `pnpm typecheck` fails with:
```
n8n-toy-demo:~/projects/n8n/packages/editor-ui$ pnpm typecheck
> n8n-editor-ui@1.10.0 typecheck
/home/inga/projects/n8n/packages/editor-ui
> vue-tsc --emitDeclarationOnly
error TS5069: Option 'emitDeclarationOnly' cannot be specified without
specifying
option 'declaration' or option 'composite'.
Found 1 error.
ELIFECYCLE Command failed with exit code 1.
n8n-toy-demo:~/projects/n8n/packages/editor-ui$
```
Replacing `--emitDeclarationOnly` with `--noEmit` in `package.json`
unblocks typechecking and results in a seemingly correct (at first
glance) "Found 1924 errors in 306 files" (at least several of the
reported errors that I've checked seem to be correct).
But maybe I'm missing something and there are not in fact two thousand
type errors in `editor-ui`?
In a rare edge case an undefined queue could be returned; this should
not happen, and an error is now thrown.
Also using the opportunity to remove a cyclic dependency from the Queue.
- Fix pinch-to-zoom
- Support command + scroll to zoom
- Improve accuracy of zooming (scroll more = zoom more)
- Zoom limits
- Zoom relative to mouse position
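A generic sketch of mouse-relative zooming (not the actual canvas code; the viewport shape and constants are assumptions):
```
interface Viewport {
	zoom: number;
	offsetX: number; // canvas pan, in screen pixels
	offsetY: number;
}

// Zoom so that the canvas point under the cursor stays under the cursor.
function zoomAroundPointer(view: Viewport, mouseX: number, mouseY: number, scrollDelta: number): Viewport {
	// Larger scroll -> larger zoom step ("scroll more = zoom more").
	const factor = Math.exp(-scrollDelta * 0.002);
	const zoom = Math.min(4, Math.max(0.1, view.zoom * factor)); // clamp to zoom limits
	const applied = zoom / view.zoom;
	return {
		zoom,
		// Keep the point under the cursor fixed by scaling the pan around it.
		offsetX: mouseX - (mouseX - view.offsetX) * applied,
		offsetY: mouseY - (mouseY - view.offsetY) * applied,
	};
}
```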
Nodes can have properties with a `displayOptions` setting that specifies
which node versions that property applies to. We should take this into
account when forming the action types for a node in the NodeList.
For example, the Notion node has 2 versions which have different Page
operations.
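A rough sketch of the version filter this implies (the property shape is simplified compared to the real n8n `displayOptions` schema):
```
interface NodeProperty {
	name: string;
	displayOptions?: {
		show?: { '@version'?: number[] }; // versions this property is limited to
	};
}

// Keep only the properties that apply to the given node version when
// building the node's action list.
function propertiesForVersion(properties: NodeProperty[], version: number): NodeProperty[] {
	return properties.filter((property) => {
		const versions = property.displayOptions?.show?.['@version'];
		return versions === undefined || versions.includes(version);
	});
}
```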
This fixes a bug in the pruning (soft-delete). The pruning was a bit too
aggressive, as it also pruned executions that weren't in an end state
yet. This only becomes an issue if there are long-running executions
(e.g. a workflow with a Wait node) or the prune parameters are set to
keep only a tiny number of executions.
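Conceptually, the fix adds an end-state guard to the prune criteria, along these lines (TypeORM-style sketch; column and status names are assumptions):
```
import { In, LessThan } from 'typeorm';

// Illustrative values; the real prune parameters come from configuration.
const maxAgeMs = 336 * 60 * 60 * 1000;
const END_STATES = ['success', 'failed', 'crashed', 'canceled'];

// Only executions that have reached an end state are eligible for pruning,
// no matter how old they are or how few executions are being kept.
const pruneWhere = {
	status: In(END_STATES),
	stoppedAt: LessThan(new Date(Date.now() - maxAgeMs)),
};
```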
`tsBuildInfoFile` is supposed to be relative to `tsconfig` like `outDir`
is.
Because of this, we are currently saving the TS incremental build cache
for all packages in the same file. This is likely causing issues where
the built backend code sometimes does not accurately map to the current
source code.
This PR changes the incremental build setup to keep the cache in
individual `dist` folders, like it used to be up until about 2 months
ago, before https://github.com/n8n-io/n8n/pull/6816.
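For context, the per-package setup this implies looks roughly like the following in each package's tsconfig (paths and option values are illustrative):
```
{
  "compilerOptions": {
    "incremental": true,
    "outDir": "dist",
    // Resolved relative to this tsconfig (like outDir), so every package now
    // keeps its own incremental cache instead of all packages sharing one file.
    "tsBuildInfoFile": "dist/build.tsbuildinfo"
  }
}
```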
This PR adds a message for queue mode which triggers an external secrets
provider reload inside the workers if the configuration has changed on
the main instance.
It also refactors some of the message handler code to remove cyclic
dependencies, as well as removing unnecessary duplicate redis clients
inside services (thanks to no more cyclic deps).
Depends on https://github.com/n8n-io/n8n/pull/7220 | Story:
[PAY-840](https://linear.app/n8n/issue/PAY-840/introduce-object-store-service-and-manager-for-binary-data)
This PR introduces an object store service for Enterprise edition. Note
that the service is tested but currently unused - it will be integrated
soon as a binary data manager, and later for execution data.
`amazonaws.com` in the host is temporarily hardcoded until we integrate
the service and test against AWS, Cloudflare and Backblaze, in the next
PR.
This is ready for review - the PR it depends on is approved and waiting
for CI.
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
All commands sent between the main instance and workers need to contain
a sender id to prevent senders from reacting to their own messages,
which would cause loops.
This PR makes sure all sent messages contain a sender id by default, as
part of constructing a sending redis client.
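A minimal sketch of the idea (message shape and helper names are hypothetical):
```
interface RedisCommandMessage {
	command: string;
	senderId: string; // attached automatically by the publishing client
	payload?: unknown;
}

// Publisher: stamp every outgoing message with this instance's id.
function buildMessage(instanceId: string, command: string, payload?: unknown): string {
	const message: RedisCommandMessage = { command, senderId: instanceId, payload };
	return JSON.stringify(message);
}

// Subscriber: ignore messages this instance published itself, to avoid loops.
function shouldHandle(instanceId: string, raw: string): boolean {
	const message = JSON.parse(raw) as RedisCommandMessage;
	return message.senderId !== instanceId;
}
```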
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
Depends on: https://github.com/n8n-io/n8n/pull/7195 | Story:
[PAY-837](https://linear.app/n8n/issue/PAY-837/implement-object-store-manager-for-binary-data)
This PR includes `workflowId` in binary data writes so that the S3
manager can support this filepath structure
`/workflows/{workflowId}/executions/{executionId}/binaryData/{binaryFilename}`
to easily delete binary data for workflows. Also, all binary data
service and manager methods that take `workflowId` and `executionId` are
made consistent in argument order.
Note: `workflowId` is included in filesystem mode for compatibility with
the common interface, but `workflowId` will remain unused by filesystem
mode until we decide to restructure how this mode stores data.
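For illustration, building such a key could look like this (the helper name is hypothetical; the path segments come from the description above):
```
// Hypothetical helper mirroring the filepath structure described above.
function buildBinaryDataKey(workflowId: string, executionId: string, binaryFilename: string): string {
	return `workflows/${workflowId}/executions/${executionId}/binaryData/${binaryFilename}`;
}

// Deleting all binary data for a workflow then becomes a single prefix delete:
const workflowPrefix = (workflowId: string) => `workflows/${workflowId}/`;
```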
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
Story: [PAY-846](https://linear.app/n8n/issue/PAY-846) | Related:
https://github.com/n8n-io/n8n/pull/7225
For the S3 backend for external storage of binary data and execution
data, the `getAsStream` method in the binary data manager interface used
by FS and S3 will need to become async. This is a breaking change for
nodes-base.
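Sketch of the signature change (interface name simplified; the actual types in core may differ):
```
import type { Readable } from 'stream';

interface BinaryDataManager {
	// Before: the filesystem manager could open the stream synchronously.
	// getAsStream(binaryDataId: string, chunkSize?: number): Readable;

	// After: S3 needs an async call to start the download, so callers in
	// nodes-base must now await the stream.
	getAsStream(binaryDataId: string, chunkSize?: number): Promise<Readable>;
}
```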
Story: https://linear.app/n8n/issue/PAY-839
This is a longstanding bug, fixed now so that the S3 backend for binary
data can use execution IDs as part of the filename.
To reproduce:
1. Set up a workflow with a POST Webhook node that accepts binary data.
2. Activate the workflow and call it sending a binary file, e.g. `curl
-X POST -F "file=@/path/to/binary/file/test.jpg"
http://localhost:5678/webhook/uuid`
3. Check `~/.n8n/binaryData`. The binary data and metadata files will be
missing the execution ID, e.g. `11869055-83c4-4493-876a-9092c4708b9b`
instead of `39011869055-83c4-4493-876a-9092c4708b9b`.
Depends on: #7164 | Story:
[PAY-838](https://linear.app/n8n/issue/PAY-838/introduce-object-store-service-for-binary-data)
This PR removes `storeMetadata` and `getSize` from the binary data
manager interface, as these are specific to filesystem mode. Also this
disambiguates identifiers:
```
binaryDataId:
  filesystem:289b4aac51e-dac6-4167-b793-6d5c415e2b47    -> {mode}:{fileId}

fileId (FS):
  289b4aac51e-dac6-4167-b793-6d5c415e2b47               -> {executionId}{uuid}

fileId (S3):
  /workflows/{workflowId}/executions/{executionId}/binary_data/b4aac51e-dac6-4167-b793-6d5c415e2b47
```
Note: The object store changes originally in this PR were extracted out
into the final PR.
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
Depends on: #7092 | Story:
[PAY-768](https://linear.app/n8n/issue/PAY-768)
This PR:
- Generalizes the `IBinaryDataManager` interface.
- Adjusts `Filesystem.ts` to satisfy the interface.
- Sets up an S3 client stub to be filled in in the next PR.
- Turns `BinaryDataManager` into an injectable service.
- Adjusts the config schema and adds new validators.
Note that the PR looks large but all the main changes are in
`packages/core/src/binaryData`.
Out of scope:
- `BinaryDataManager` (now `BinaryDataService`) and `Filesystem.ts` (now
`fs.client.ts`) were slightly refactored for maintainability, but fully
overhauling them is **not** the focus of this PR, which is meant to
clear the way for the S3 implementation. Future improvements for these
two should include setting up a backwards-compatible dir structure that
makes it easier to locate binary data files to delete, removing
duplication, simplifying cloning methods, using integers for binary data
size instead of `prettyBytes()`, writing tests for existing binary data
logic, etc.
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
---------
Co-authored-by: Omar Ajoue <krynble@gmail.com>
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
This PR introduces a banner framework overhaul:
The first version of the banner framework was built to allow multiple
banners to be shown at the same time. Since that proved to be something
we don't need, and it turned out to be pretty messy to keep only one
banner visible in such a setup, this PR reworks it so that only one
banner is rendered at a time, based on [this priority
list](https://www.notion.so/n8n/Banner-stack-60948c4167c743718fde80d6745258d5?pvs=4#6afd052ec8d146a1b0fab8884a19add7)
that was assembled together with our product & design team.
### How to test banner stack:
1. Available banners and their priorities are registered
[here](f9f122d46d/packages/editor-ui/src/components/banners/BannerStack.vue (L14))
2. Banners are pushed to stack using `pushBannerToStack` action, for
example:
```
useUIStore().pushBannerToStack('TRIAL');
```
3. Try pushing different banners to the stack and check that only the
one with the highest priority shows up
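A minimal sketch of the priority selection this describes (banner names and the store internals are assumptions):
```
type BannerName = 'TRIAL' | 'TRIAL_OVER' | 'EMAIL_CONFIRMATION' | 'V1'; // illustrative names

// Lower number = higher priority; mirrors the registered priority list.
const BANNER_PRIORITY: Record<BannerName, number> = {
	EMAIL_CONFIRMATION: 0,
	TRIAL_OVER: 1,
	TRIAL: 2,
	V1: 3,
};

// Only the highest-priority banner currently in the stack gets rendered.
function bannerToShow(stack: BannerName[]): BannerName | undefined {
	return [...stack].sort((a, b) => BANNER_PRIORITY[a] - BANNER_PRIORITY[b])[0];
}
```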
### How to test the _Email confirmation_ banner:
1. Comment out [this
line](b80d2e3bec/packages/editor-ui/src/stores/cloudPlan.store.ts (L59)),
so cloud data is always fetched
2. Create an
[override](https://chrome.google.com/webstore/detail/resource-override/pkoacgokdfckfpndoffpifphamojphii)
(URL -> File) that will serve user data that triggers this banner:
- **URL**: `*/rest/cloud/proxy/admin/user/me`
- **File**:
```
{
"confirmed": false,
"id": 1,
"email": "test@test.com",
"username": "test"
}
```
3. Run n8n
Based on #7065 | Story: https://linear.app/n8n/issue/PAY-771
n8n in filesystem mode marks binary data for deletion on manual execution
deletion, on unsaved execution completion, and on every execution
pruning cycle. We later prune binary data in a separate cycle via these
marker files, based on the configured TTL. In the context of introducing
an S3 client to manage binary data, the filesystem mode's mark-and-prune
setup is too tightly coupled to the general binary data management
client interface.
This PR...
- Ensures the deletion of an execution causes the deletion of any binary
data associated to it. This does away with the need for binary data TTL
and simplifies the filesystem mode's mark-and-prune setup.
- Refactors all execution deletions (including pruning) to cause soft
deletions, hard-deletes soft-deleted executions based on the existing
pruning config, and adjusts execution endpoints to filter out
soft-deleted executions. This reduces DB load, and keeps binary data
around long enough for users to access it when building workflows with
unsaved executions.
- Moves all execution pruning work from an execution lifecycle hook to
`execution.repository.ts`. This keeps related logic in a single place.
- Removes all marking logic from the binary data manager. This
simplifies the interface that the S3 client will meet.
- Adds basic sanity-check tests to pruning logic and execution deletion.
Out of scope:
- Improving existing pruning logic.
- Improving existing execution repository logic.
- Adjusting dir structure for filesystem mode.
---------
Co-authored-by: कारतोफ्फेलस्क्रिप्ट™ <aditya@netroy.in>
Github issue / Community forum post (link here to close automatically):
https://github.com/n8n-io/n8n/pull/6348
---------
Co-authored-by: Giulio Andreini <g.andreini@gmail.com>
Co-authored-by: Marcus <marcus@n8n.io>
This PR implements the updated license SDK so that worker and webhook
instances do not auto-renew licenses any more.
Instead, they receive a `reloadLicense` command via the Redis client
that will fetch the updated license after it was saved on the main
instance.
This also contains some refactoring that moves the redis sub and pub
clients into the event bus directly, to prevent cyclic dependency
issues.
This PR adds a new field to the SourceControlPreferences, as well as to
the POST parameters for the `source-control/preferences` and
`source-control/generate-key-pair` endpoints. Both now accept an
optional string parameter `keyGeneratorType` of `'ed25519' | 'rsa'`.
Calling the `source-control/generate-key-pair` endpoint with the
parameter set will also update the stored preferences accordingly (so
that in the future new keys will use the same method).
By default, ed25519 is used. The default may be changed using a new
environment variable `N8N_SOURCECONTROL_DEFAULT_SSH_KEY_TYPE`, which can
be `rsa` or `ed25519`.
RSA keys are generated with a length of 4096 bits.
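A rough sketch of generating the two key types with Node's crypto module (the function name and default are assumptions; the real implementation may use an SSH-specific library to produce OpenSSH-formatted keys):
```
import { generateKeyPairSync } from 'crypto';

type KeyGeneratorType = 'ed25519' | 'rsa';

// Hypothetical helper illustrating the two supported key types.
function generateSshKeyPair(type: KeyGeneratorType = 'ed25519') {
	if (type === 'rsa') {
		return generateKeyPairSync('rsa', {
			modulusLength: 4096, // 4096-bit RSA keys, as described above
			publicKeyEncoding: { type: 'spki', format: 'pem' },
			privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
		});
	}
	return generateKeyPairSync('ed25519', {
		publicKeyEncoding: { type: 'spki', format: 'pem' },
		privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
	});
}
```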
# Motivation
In Queue mode, finished executions would cause the main instance to
always pull all execution data from the database, unflatten it and then
use it to send out event log events and telemetry events, as well as
required returns to Respond to Webhook nodes etc.
This could cause OOM errors when the data was large, since it had to be
fully unpacked and transformed on the main instance’s side, using up a
lot of memory (and time).
This PR attempts to limit this behaviour to only happen in those
required cases where the data has to be forwarded to some waiting
webhook, for example.
# Changes
Execution data is only required in cases where the active execution has
a `postExecutePromise` attached to it. These usually forward the data to
some other endpoint (e.g. a listening webhook connection).
By adding a helper `getPostExecutePromiseCount()`, we can decide that in
cases where there is nothing listening at all, there is no reason to
pull the data on the main instance.
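Schematically, the decision on the main instance becomes something like this (everything besides `getPostExecutePromiseCount` is an illustrative stub, not the actual n8n internals):
```
// Illustrative types/stubs; not the actual n8n internals.
type FullExecutionData = Record<string, unknown>;

declare function getPostExecutePromiseCount(executionId: string): number;
declare function loadExecutionFromDb(executionId: string): Promise<FullExecutionData>;
declare function resolvePostExecutePromises(executionId: string, data: FullExecutionData): void;

async function onQueuedExecutionFinished(executionId: string): Promise<void> {
	if (getPostExecutePromiseCount(executionId) === 0) {
		// Nothing is waiting for the result on the main instance, so skip
		// loading and unflattening the full execution data entirely.
		return;
	}
	resolvePostExecutePromises(executionId, await loadExecutionFromDb(executionId));
}
```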
Previously, there would always be postExecutePromises because the
telemetry events were called. Now, these have been moved into the
workers, which have been given the various InternalHooks calls to their
hook function arrays, so they themselves issue these telemetry and event
calls.
This results in all event log messages now being logged on the worker's
event log, and in the worker's eventbus being the one to send out the
events to destinations. The main event log does… pretty much nothing.
We are not logging executions on the main event log any more, because
this would require all events to be replicated 1:1 from the workers to
the main instance(s) (this IS possible and implemented, see the worker’s
`replicateToRedisEventLogFunction` - but it is not enabled to reduce the
amount of traffic over redis).
Partial events in the main log could confuse the recovery process and
would result in, ironically, the recovery corrupting the execution data
by considering them crashed.
# Refactor
I have also used the opportunity to reduce duplicate code and move some
of the hook functionality into
`packages/cli/src/executionLifecycleHooks/shared/sharedHookFunctions.ts`
in preparation for a future full refactor of the hooks.
Context
When a user is attempting to interact with a foreground action inside an
entity card (workflow, credential, community node, logging destination),
they might accidentally open that entity instead of interacting with a
foreground action.
For these card components, actions are always placed on the right side.
A/C
The area around the right "column" of entity cards (workflow, cred,
community node, logging destination) should not be a hoverable area
(that opens that entity when clicked). This area is roughly highlighted
in orange in the screenshot below.
![image](https://github.com/n8n-io/n8n/assets/5410822/0916bcd5-e972-4367-a862-41d2086a2334)
Since the change to allow workflow IDs to be strings in Nano ID format,
this input broke.
This PR allows all characters that can appear in workflow IDs.
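For illustration, validation along these lines would accept Nano ID-style workflow IDs (the exact pattern used in the PR is an assumption):
```
// Nano ID's default alphabet is A-Z, a-z, 0-9, underscore and hyphen; older
// numeric IDs are matched by the same character class.
const workflowIdRegex = /^[a-zA-Z0-9_-]+$/;

function isValidWorkflowId(id: string): boolean {
	return workflowIdRegex.test(id);
}
```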
---------
Co-authored-by: Iván Ovejero <ivov.src@gmail.com>
This PR adds new endpoints to the REST API:
`/orchestration/worker/status` and `/orchestration/worker/id`
Currently these just trigger the return of status / ids from the workers
via the redis back channel; this still needs to be handled and passed
through to the frontend.
It also adds the eventbus to each worker, and triggers a reload of those
eventbus instances when the configuration changes on the main instances.