Config to enable/disable filtering when sequencing delayed messages #4377
Conversation
Codecov Report

@@            Coverage Diff             @@
##           master    #4377      +/-   ##
==========================================
+ Coverage   32.72%   32.77%   +0.05%
==========================================
  Files         493      493
  Lines       58290    58294       +4
==========================================
+ Hits        19076    19108      +32
+ Misses      35868    35850      -18
+ Partials     3346     3336      -10

❌ 6 tests failed.
  // different hooks, and we need access to filteringHooks.FilteredTxHash to report
  // which tx caused the halt.
- if applyDelayedFilter {
+ if !s.disableDelayedSequencingFilterConfigFetcher() && applyDelayedFilter {
There is a problem now where applyDelayedFilter is true in the case where we are doing a reorg (resequenceReorgedMessages -> sequenceDelayedMessageWithBlockMutex -> createBlockFromNextMessage(applyDelayedFilter=true)). We shouldn't be going into sequencing mode, and hence filtering transactions by address, when in reorg mode. The filtered address list may change and cause transactions that were already sequenced historically to now be filtered. Conceptually, I think we don't want to be in MessageSequencingContext if we are reorging.
I saw your comment that #3111 might address this, but in the meantime, before it is merged, what should we do?
We already have this issue today; it is not coupled with introducing filtering, BTW.
We can inject SyncMonitor into ExecutionEngine, so ExecutionEngine can check whether it is synced before trying to resequence a reorged-out message.
But this introduces a circular dependency between ExecutionEngine and SyncMonitor, since SyncMonitor already depends on ExecutionEngine today, which is annoying.
We can solve that, but we would be creating more conflicts for #3111, which already solves this in a more elegant way.
I am OK with not solving this now, since resequencing reorged-out messages is unlikely to happen today, at least on Arb1 with our current deployment architecture.
I will let @tsahee decide if we should tackle that right now 🙂
  require.NoError(t, err)
  require.True(t, senderBalanceAfter.Cmp(senderBalanceBefore) < 0, "sender balance should decrease due to gas consumption")
}
The test requested in the ticket, with two sequencers (one with filtering and one without), is not present.
Yes, but isn't the current test enough for the changes included in this PR?
This PR doesn't actually use the sequencing mode for anything either, so it doesn't resolve NIT-4026.
The current PR uses the sequencing mode, but it doesn't change the node's behavior based on it today.
Reassigning to @tsahee since we need his input here.
  // must hold createBlockMutex
- func (s *ExecutionEngine) createBlockFromNextMessage(msg *arbostypes.MessageWithMetadata, isMsgForPrefetch bool, applyDelayedFilter bool) (*types.Block, *state.StateDB, types.Receipts, error) {
+ func (s *ExecutionEngine) createBlockFromNextMessage(msg *arbostypes.MessageWithMetadata, isMsgForPrefetch bool, isSequencing bool) (*types.Block, *state.StateDB, types.Receipts, error) {
I think it's better to check the disableDelayedSequencingFilterConfigFetcher config when calling this function (from sequenceDelayedMessageWithBlockMutex) rather than here. That would make it easier to use the entire config from the sequencer and eventually remove the need for executionEngine to be aware of the config.
Resolves NIT-4026
pulls in OffchainLabs/go-ethereum#622