Proposal: mitigating broadcast storms #2245

@rikkertkoppes

Description

In the Netherlands we experience what is known in the literature as broadcast storms: flood messages are repeated faster than our densely connected network can handle.

The underlying problem is that densely connected networks have a dimension that is (far) higher than the physical dimension of the network, causing congestion around the edges of the broadcast hop wavefront if not mitigated properly. In essence, flood messages outpace the network’s capacity to handle transmissions efficiently.

At the core: flood messages are fundamentally incompatible with a dense mesh topology if not filtered properly.

We have done an analysis here: https://github.com/meshwiki/digitwin/tree/main/dimanalysis

There is a paper on exactly this problem: https://www.csie.ntpu.edu.tw/~yschen/mypapers/winet2002.pdf. It suggests 5 solutions, which all revolve around filtering flood messages to bring the effective dimension of the network down:

  1. Probabilistic filtering (3.1 in the paper). Only forwarding messages with a certain probability. This is also proposed by @ViezeVingertjes in Proposal: Probabilistic Flood Forwarding for Advertisement Packets #1223 for flood adverts
  2. Counter based filtering (3.2 in the paper). This is a lot like the loop detection algorithm already implemented, but with some additional random wait time (random tx delay is an important improvement over fixed tx delay, it effectively increases the dimension of the physical network into the time axis)
  3. Distance based filtering (3.3 in the paper). Use neighbor distance (rssi of the incoming message) to decide whether to repeat or not
  4. Location based filtering (3.4 in the paper). Use the actual location of the last repeater, calculate the actual distance and use that to decide whether to repeat or not. This requires having received adverts from neighbors, which in turn requires the 0 hop advert interval not to be disabled
  5. Cluster based filtering (3.5 in the paper). This implements a cluster discovery algorithm to discover the local mesh topology and use that to decide whether to repeat or not. This is the most elaborate solution and requires the most additional logic.
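To make options 1 and 2 concrete, here is a minimal sketch of the probabilistic and counter-based decisions in Python. The function names, the default threshold of 3 overheard copies, and the 500 ms delay window are all illustrative assumptions, not values from the paper or any existing firmware:

```python
import random

def should_rebroadcast_probabilistic(p: float) -> bool:
    """Probabilistic filtering (paper 3.1): forward with probability p."""
    return random.random() < p

def should_rebroadcast_counter(duplicates_heard: int, threshold: int = 3) -> bool:
    """Counter-based filtering (paper 3.2): during a random wait, count how
    many copies of the same flood message we overhear. If enough neighbors
    already rebroadcast it, our own copy is redundant and we drop it."""
    return duplicates_heard < threshold

def random_tx_delay_ms(max_delay_ms: int = 500) -> int:
    """Random (rather than fixed) tx delay: spreads rebroadcasts out over
    the time axis, desynchronizing neighbors at the flood wavefront."""
    return random.randint(0, max_delay_ms)
```

The counter-based variant is where the random wait matters most: each node listens for its randomly chosen delay, and only transmits if fewer than `threshold` copies arrived in the meantime.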

The paper recommends location based filtering as "the best choice because it can eliminate most redundant rebroadcasts under all kinds of host distributions without compromising reachability"

I propose to implement location based filtering, with a fallback to distance based filtering when geo data is not available.
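A minimal sketch of that combined decision, assuming positions arrive via neighbor adverts and RSSI is available on every received packet. The 1 km distance threshold and the −100 dBm RSSI cutoff are placeholder values for illustration only:

```python
import math
from typing import Optional, Tuple

def haversine_km(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Great-circle distance in km between two (lat, lon) pairs in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def should_rebroadcast(
    own_pos: Optional[Tuple[float, float]],
    repeater_pos: Optional[Tuple[float, float]],
    rssi_dbm: float,
    min_distance_km: float = 1.0,    # illustrative threshold
    max_rssi_dbm: float = -100.0,    # illustrative threshold
) -> bool:
    """Location-based filtering (paper 3.4) with a distance-based (3.3)
    RSSI fallback when geo data is missing on either side."""
    if own_pos is not None and repeater_pos is not None:
        # Location-based: only repeat if the last repeater is far enough
        # away that our rebroadcast adds meaningful extra coverage.
        return haversine_km(own_pos, repeater_pos) >= min_distance_km
    # Distance-based fallback: a weak signal implies a distant sender,
    # so repeat only when RSSI is at or below the cutoff.
    return rssi_dbm <= max_rssi_dbm
```

The key property is that both branches approximate the same quantity (distance to the last repeater), so nodes without GPS degrade gracefully instead of flooding unconditionally.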
