Conversation
@martinconic thanks for this. I have some comments:
pkg/node/node.go (outdated)

    b.p2pHalter = p2ps

    post, err := postage.NewService(logger, stamperStore, batchStore, chainID)
    dirtyItem := &stamperDirtyItem{}
I'm not sure why this should be here. Why can't all this code live inside the stamper store's init and close methods?
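A minimal sketch of what the reviewer is suggesting: the store itself sets its dirty marker in Init and clears it as part of Close, so node.go needs no extra bookkeeping. All names here (stamperStore, the "dirty" key) are illustrative assumptions, not the actual bee API.

```go
package main

import "fmt"

// stamperStore is a hypothetical store that owns its dirty-marker lifecycle.
type stamperStore struct {
	data map[string]bool // stand-in for the persisted key-value store
}

// Init restores state and persists a "running, not yet flushed" marker,
// so callers need no separate dirtyItem handling.
func (s *stamperStore) Init() error {
	s.data["dirty"] = true
	return nil
}

// Close flushes state and clears the marker as its last step, so a clean
// shutdown is recorded only when Close actually completed.
func (s *stamperStore) Close() error {
	delete(s.data, "dirty")
	return nil
}

func main() {
	s := &stamperStore{data: map[string]bool{}}
	if err := s.Init(); err != nil {
		panic(err)
	}
	fmt.Println("dirty after init:", s.data["dirty"])
	if err := s.Close(); err != nil {
		panic(err)
	}
	fmt.Println("dirty after close:", s.data["dirty"])
}
```

With this shape, a caller that does `defer s.Close()` gets the marker handling for free, which is the encapsulation the comment asks about.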
pkg/node/node.go (outdated)

    tryClose(b.topologyCloser, "topology driver")
    tryClose(b.storageIncetivesCloser, "storage incentives agent")
    tryClose(b.stateStoreCloser, "statestore")
    if b.stamperCleanShutdown != nil {
Ditto. Close is called in L1453 anyway; why can't this be executed as part of that method, as the last thing it does? And in any case, if you really want to keep it outside, this block should execute after L1453, no? Since you haven't closed it yet, how can you claim it was shut down cleanly? And even after the close you wouldn't know, because the err returned from Close is not available in this scope.
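The ordering problem the comment raises can be sketched like this: a "clean shutdown" flag is only trustworthy if it is set after Close returns, and only when Close returned nil. The service type and shutdown helper below are illustrative assumptions, not bee code.

```go
package main

import (
	"errors"
	"fmt"
)

// service is a hypothetical closable component.
type service struct {
	failClose bool // simulate a Close that can fail
}

func (s *service) Close() error {
	if s.failClose {
		return errors.New("close failed")
	}
	return nil
}

// shutdown records a clean shutdown only after Close has succeeded.
// Because Close is called inside this function, its err is in scope,
// unlike in the review's original layout where the flag was set before
// (and outside) the call.
func shutdown(s *service) (clean bool) {
	if err := s.Close(); err != nil {
		return false
	}
	return true
}

func main() {
	fmt.Println(shutdown(&service{}))                // clean close
	fmt.Println(shutdown(&service{failClose: true})) // failed close
}
```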
pkg/postage/service.go (outdated)

    if err := issuer.recover(item.BatchIndex); err != nil {
        s.logger.Error(err, "postage recovery of bucket count failed")
    }
    issuer.setDirty(true)
Why is it dirty all of a sudden? It was just restored from disk, so it is not dirty per se, since dirty means there are writes that haven't been flushed yet.
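The dirty-flag semantics the reviewer describes can be illustrated with a small sketch: restoring persisted state leaves the issuer clean, and only a subsequent in-memory write marks it dirty until the next flush. The issuer type and its methods here are assumptions for illustration, not the actual postage package API.

```go
package main

import "fmt"

// issuer is a hypothetical stamp issuer whose dirty flag means
// "has in-memory writes not yet flushed to disk".
type issuer struct {
	bucketCount uint32
	dirty       bool
}

// recover restores persisted state. Nothing unflushed exists afterwards,
// so the issuer stays clean.
func (i *issuer) recover(batchIndex uint32) {
	i.bucketCount = batchIndex
	i.dirty = false
}

// increment is an in-memory write, so it marks the issuer dirty
// until the next flush.
func (i *issuer) increment() {
	i.bucketCount++
	i.dirty = true
}

func main() {
	i := &issuer{}
	i.recover(7)
	fmt.Println("dirty after recover:", i.dirty)
	i.increment()
	fmt.Println("dirty after write:", i.dirty)
}
```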
Description
Resolves #4884
This PR addresses severe LevelDB I/O bottlenecks and CPU serialization overhead encountered during high-frequency uploads of very large numbers of chunks.
The Problem
Under a load of hundreds of thousands of concurrent chunks, the node suffered from performance degradation across two distinct vectors:
The Solution