@@ -444,9 +445,14 @@ func (e *Engine) runTriggerSubscriptionPhase(ctx context.Context) error {
		// no Config needed - NoDAG uses Payload
	})
	if regErr != nil {
Note - now that the remote trigger can return an error, the workflow will fail to start when this happens. In the legacy code, because the error from a remote trigger was not reported, the subscriber would continue to poll the publisher with registration requests, so in theory an inactive trigger could eventually become active if the error was caused by a transient issue.
If we wanted to preserve that behaviour we could differentiate between error types that are considered recoverable/non-recoverable and only fail the workflow in the non-recoverable case (for example, we could treat user errors as non-recoverable and system errors as recoverable, or even introduce a separate recoverability attribute on the capability error). For this to work the engine would need to know whether the trigger capability is being run locally or remotely.
However, we already have workflow retry behaviour, so IMO it's acceptable to fail the workflow when a remote trigger returns an error on initial registration and rely on workflow retry to handle transient errors. This makes it consistent with local triggers.
	"unicode"
)
// TODO this is duplicated from utils to minimise the PR changeset - will be deduped later
Going to address this in a follow-on commit:
pickup fix to local trigger capability error reporting
	}
} else {
	e.logger().Errorw("Trigger registration failed", "triggerID", sub.Id, "err", regErr)
	e.metrics.With(platform.KeyTriggerID, sub.Id).IncrementRegisterTriggerFailureCounter(gCtx)
Why are you omitting the platform.KeyCapabilityErrorCode here?
Also, for the capability_executor we used the error type caperrors.Internal.String(), which will help us later in figuring out unclassified errors from capability actions; it should be the same for failed registrations.
This relates to the change in the trigger subscriber (previous comment). In the scenario where consensus is not reached on the errors, a capability error with the consensus_failed error code is returned to indicate that this has occurred. The subscription_registration logs the underlying errors so that the root cause can be determined, i.e. why we are getting errors on which consensus cannot be reached.
	}
} else if msg.Method == types.TriggerRegistrationStatus {
	meta := msg.GetTriggerRegistrationMetadata()
	if meta == nil {
could this ever happen?
Honestly, I can't see how, but the same check is in place for types.MethodTriggerEvent, so I'm assuming this may have actually happened and erring on the side of caution.
	s.mu.Unlock()

	if !found {
		s.lggr.Warnw("received trigger registration status message for unregistered workflow", "workflowID", SanitizeLogString(meta.WorkflowId), "triggerID", SanitizeLogString(meta.TriggerId), "sender", sender)
what's the point in calling SanitizeLogString for WF ID or Trigger ID?
The sending node (of the trigger registration status msg) could put any old nonsense (including a very long string) in this field, so this protects against that; it's a common pattern in this code.
	reg.HandleTriggerRegistrationStatusUpdate(sender, msg, cfg.remoteConfig.MinResponsesToAggregate, cfg.remoteConfig.MessageExpiry,
		len(cfg.capDonInfo.Members))
} else {
	s.lggr.Errorw("received trigger event with unknown method", "method", SanitizeLogString(msg.Method), "sender", sender, "err", SanitizeLogString(msg.ErrorMsg))
What's the point in calling SanitizeLogString for method?
(It does make sense for ErrorMsg, though.)
The sending node (of the trigger registration status msg) could put any old nonsense (including a very long string) in this field, so this protects against that; it's a common pattern in this code.
if totalErrorCount >= publisherNodeCount-int(minResponseToAggregate)+1 {
	// There is no consensus error, return a generic error message
	sr.setRegistrationStatus(types.RegistrationStatus_REGISTRATION_ERROR,
		caperrors.NewPublicSystemError(fmt.Errorf("received %d errors, last error %s : %s", totalErrorCount, msg.Error, log.SanitizeLogString(lastErr)), caperrors.Unknown))
Wouldn't this make all errors that are somehow a bit different become public system errors?
If so, shouldn't you try to wrap the current msg.Error using DeserializeErrorFromString() too in a generic error?
Just thinking of what I did in this PR https://github.com/smartcontractkit/chainlink/pull/21746/changes with the function newRemoteCapabilityExecuteErrorWithMessage
I think in this scenario, as we are unable to get consensus on the errors, the correct behaviour is to return a public system capability error with the 'ConsensusFailed' error code and log the errors. I've updated the code to do this. (Note that public is acceptable; the cap DON side will have obfuscated any private errors before sending them across.)
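The behaviour described above (log the individual per-node errors, return one public system error with a ConsensusFailed code) might be sketched roughly as follows. The `PublicSystemError` and `ErrorCode` types here are simplified stand-ins for the caperrors package, not its real definitions:

```go
package main

import (
	"fmt"
	"log"
)

// Illustrative stand-ins for the caperrors types referenced above.
type ErrorCode int

const ConsensusFailed ErrorCode = iota

type PublicSystemError struct {
	Err  error
	Code ErrorCode
}

func (e *PublicSystemError) Error() string { return e.Err.Error() }

// onConsensusFailure logs the underlying per-node errors (so the root
// cause can be diagnosed later) and returns a single public system
// error with the ConsensusFailed code. Public is acceptable here
// because the capability DON side has already obfuscated any private
// errors before sending them across.
func onConsensusFailure(nodeErrors []string, totalErrorCount int) error {
	for _, e := range nodeErrors {
		log.Printf("registration error (no consensus): %s", e)
	}
	return &PublicSystemError{
		Err:  fmt.Errorf("received %d errors, consensus could not be reached", totalErrorCount),
		Code: ConsensusFailed,
	}
}

func main() {
	err := onConsensusFailure([]string{"timeout", "bad config"}, 2)
	fmt.Println(err)
}
```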
	triggerResponseCh chan commoncap.TriggerResponse
	rawRequest        []byte

	registrationStatusUpdateCache *messagecache.MessageCache[string, p2ptypes.PeerID]
Don't you need a mutex for this?
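For reference, the usual answer when a cache like this is touched by concurrent message-handling goroutines is to guard every access with a mutex. A generic sketch of that pattern (types deliberately simplified, not the real MessageCache API):

```go
package main

import (
	"fmt"
	"sync"
)

// guardedCache is a minimal stand-in for a status-update cache that
// may be accessed from multiple message handlers concurrently; all
// access to the map goes through the mutex.
type guardedCache struct {
	mu    sync.Mutex
	items map[string][]string // key -> senders, simplified
}

func newGuardedCache() *guardedCache {
	return &guardedCache{items: make(map[string][]string)}
}

// Insert records a sender under a key and returns the number of
// senders seen so far for that key.
func (c *guardedCache) Insert(key, sender string) int {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = append(c.items[key], sender)
	return len(c.items[key])
}

func main() {
	c := newGuardedCache()
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			c.Insert("wf1", fmt.Sprintf("peer-%d", i))
		}(i)
	}
	wg.Wait()
	fmt.Println(c.Insert("wf1", "final")) // 11: ten goroutines plus this call
}
```

Whether the real code needs this depends on which goroutines actually reach the cache; the question stands.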
func (pr *PublisherRegistration) AddRegistrationRequest(ctx context.Context, sender p2ptypes.PeerID, rawRequest []byte,
	callerDon commoncap.DON, registrationExpiry time.Duration) {
	nowMs := time.Now().UnixMilli()
	// First remove any expired messages from the cache; the cache is not expected to be large so this is acceptable




Change overview:
This change supports the reporting of capability errors when a remote trigger is registered across the DON2DON (D2D) layer. No other changes to handle the errors produced by trigger registration are included in this PR.
Brief overview of the current situation (before this change) for context:
On RegisterTrigger the TriggerSubscriber (TS) adds the registration to a list. This list is polled at regular intervals, and for each registration in the list a TriggerRegistration request is sent to the TriggerPublisher (TP). The TS does not wait for any response from the TP; it assumes the registration was successful, so there is currently no way to report registration errors to the workflow DON (where the TS sits).
On receipt of a TriggerRegistration request the TriggerPublisher will attempt to register with the underlying trigger (if it hasn't already); if an error occurs it is logged, but no indication of the error is sent back to the TS. Note that the TS must send regular TriggerRegistration requests to the TP to keep the registration alive; failure to do so will result in the TP considering the registration dead and unregistering from the underlying trigger.
Summary of the change in this PR
The change is subject to the following constraints:
Must be backwards compatible - that is, a DON with this change must continue to work when communicating with a DON without this change.
The D2D trigger communication mechanism has been proven in production, we want to avoid anything that fundamentally changes the behaviour.
This change introduces the concept of a TriggerRegistrationStatus message (it actually sort of existed already, but was unused), which is sent by the TP to the TS whenever the TP receives a TriggerRegistrationRequest from the TS. The TriggerRegistrationStatus message includes the latest state of the registration on the underlying trigger, including any registration error if one occurred. As before, if the trigger is not currently successfully registered, receipt of the TriggerRegistrationRequest will cause the TP to attempt a new registration. Also as before, the continued receipt of TriggerRegistrationRequests by the TP is necessary to keep a registration live.
On initial registration the TS can now (optionally) wait for TriggerRegistrationStatus messages from the TP; these can be used to determine whether the initial registration attempt was successful. Note that even though the TS will continue to send TriggerRegistrationRequests as part of polling, it does not (currently) have any code that deals with updates to the trigger registration status beyond the initial registration. This could be addressed in a future change; for now we are focused on getting back the status on initial subscription so we can identify most errors, and in particular user errors.
Note that if the TS does not receive sufficient TriggerRegistrationStatus messages during the initial trigger registration phase, it will return the events channel associated with the subscription along with an error indicating that the registration status could not be determined. Essentially this allows the code to behave in the current (pre-this-change) way when running against an old DON, or if for whatever other reason the status messages are not received.
It is expected that in most, if not all, cases the initial trigger registration status will be returned, so this will be a significant improvement over the current behaviour. At some future point, perhaps after all DONs are migrated, it would be possible to make the wait for initial trigger registration status mandatory; TBD.
Testing:
In addition to a new system test, manual testing was carried out on staging zone b to ensure that there are no compatibility issues when running DONs with this change against DONs without it, and to confirm that user registration errors are reported as expected; summarised here:
https://docs.google.com/spreadsheets/d/1-SP9lUCmjRQrY-xiufwioUbf5dnvz6zIgoZB-ZEi9GQ/edit?usp=sharing
Deployment:
Testing confirms that the change can be deployed in any order (i.e. workflow node followed by cap node, or vice versa) with no adverse impact.
Once both DONs are updated, the behaviour can be enabled by updating a setting. Here is an example of a CLD PR that updates the setting, in this case to disable the behaviour; to enable the behaviour, change the setting to '10s':
https://github.com/smartcontractkit/chainlink-deployments/pull/12204