
feat(connector): implement multitransport bootstrapping handshake#1098

Open
Greg Lamberson (glamberson) wants to merge 1 commit into Devolutions:master from glamberson:multitransport-connector

Conversation

@glamberson
Contributor

This is a design proposal — looking for feedback on the API approach before wiring up the async/blocking drivers.

Makes the MultitransportBootstrapping state functional instead of a no-op pass-through.

After licensing, the server may send 0, 1, or 2 Initiate Multitransport Request PDUs before starting capabilities exchange. This PR reads those PDUs by peeking at the BasicSecurityHeader flags (SEC_TRANSPORT_REQ), then pauses in a new MultitransportPending state so the application can establish UDP transport (RDPEUDP2 + TLS + RDPEMT) or decline.

The API follows the existing should_perform_X() / mark_X_as_done() pattern used by TLS upgrade and CredSSP:

  • should_perform_multitransport() — true when requests have been collected
  • multitransport_requests() — access the server's request PDUs
  • complete_multitransport(responses, output) — send response PDUs, advance
  • skip_multitransport() — decline, advance

Open question: the Demand Active PDU that signals the end of multitransport bootstrapping arrives while we're still in MultitransportBootstrapping. When the connector transitions to MultitransportPending, this PDU needs to be buffered and re-fed after the application completes. Two options:

(a) The driving code (ironrdp-async/ironrdp-blocking) buffers the PDU externally and re-feeds it. This keeps the connector stateless w.r.t. buffering but requires changes to connect_finalize().

(b) The connector buffers the PDU internally in MultitransportPending and replays it when complete_multitransport() / skip_multitransport() is called. This is self-contained but adds buffer state to the connector.

I've gone with (a) in this draft — the connector doesn't buffer. Feedback on which approach is preferred would be helpful before wiring up the async/blocking drivers.

Builds on: #1091

Related: #140

@glamberson
Contributor Author

I really need feedback on this, so I'll take the draft status off and see where we get.

@glamberson Greg Lamberson (glamberson) marked this pull request as ready for review February 14, 2026 14:35
@CBenoit
Member

Hi! Good job. I need a little bit more time to dig into this and craft a well thought answer. I’ll come back to you in the next 24h.

@glamberson
Contributor Author

> Hi! Good job. I need a little bit more time to dig into this and craft a well thought answer. I’ll come back to you in the next 24h.

Thanks! Great! Yes this is pretty important so I knew it would take some time. Much appreciated.

@CBenoit
Member

Benoît Cortier (CBenoit) commented Feb 22, 2026

Hi! I’m sorry for the delay. Monday is a holiday for me, so I may only come back to you on Tuesday!

FYI, I'm leaning toward (b), because the overall architecture stays "sans-IO": the "pause" can be thought of as part of the protocol state machine, i.e. a state-machine responsibility. If the state machine consumes the bytes that triggered the pause, it should also own deferring and replaying them, and we avoid duplicating subtle handshake logic in every driver. Otherwise we'd need the same "connect_finalize must stash the last PDU and replay it later" logic in ironrdp-async, ironrdp-blocking, and any third-party driver using the connector directly. It also makes the API more misuse-resistant: with (a) it's easy for the caller to forget to buffer, or to buffer the wrong input (e.g. after faulty framing). Last, it keeps the API consistent with the existing should_perform_X / complete_X pattern: once we call complete_multitransport() or skip_multitransport(), the connector can immediately continue processing without requiring extra steps.

But I’m still thinking a bit on this, and double checking other parts of the spec.

@glamberson
Contributor Author

> But I’m still thinking a bit on this, and double checking other parts of the spec.

That's fine. I relocated for now back to Illinois in the USA, and I'm just now getting up and running here.

Regarding your inclinations, (b) makes sense. Keeps replay logic in one place and the driver API stays clean. I'll rework to buffer the Demand Active internally once you confirm.

Two questions that would help me get the rework right:

  1. Response PDU ownership: should complete_multitransport() accept the caller's MultitransportResponsePdus (connector writes them to output), or should the connector generate them from a success/failure result per request?
  2. Scope of #1108 (feat(connector,session): dispatch multitransport PDUs on IO channel): if the connector handles multitransport bootstrapping internally during the initial connection, does the session layer still need to surface MultitransportRequest for server-initiated multitransport during an active session (MS-RDPBCGR 2.2.15.1), or is that out of scope for now?

No rush, happy to wait until Tuesday.

Greg

@glamberson Greg Lamberson (glamberson) force-pushed the multitransport-connector branch 2 times, most recently from 7815daa to c9947d4 on March 17, 2026 10:08

Copilot AI left a comment


Pull request overview

Implements the multitransport bootstrapping portion of the RDP connector state machine by collecting optional server Initiate Multitransport Request PDU(s) after licensing and exposing an application-driven pause point before proceeding to capabilities exchange.

Changes:

  • Added MultitransportBootstrapping request collection and a new MultitransportPending pause state.
  • Introduced a connector API for multitransport handling: should_perform_multitransport(), multitransport_requests(), complete_multitransport(), and skip_multitransport().
  • Updated next_pdu_hint() and step() logic to route post-licensing traffic into the multitransport flow.


@glamberson
Contributor Author

Coming back to the design discussion from February. Two questions are still open and they affect how I address the new Copilot review comments, so I'd like to settle them first.

  1. Response ownership: the current implementation takes caller-constructed MultitransportResponsePdus. Given your preference for option (b) where the connector owns more state internally, it might be cleaner for complete_multitransport() to accept a success/failure enum per request and let the connector build the response PDUs (it already has the request_id and cookie). This also naturally validates the response count (Copilot thread on line 287).

  2. Demand Active buffering: the current code requires the caller to re-feed the Demand Active PDU after calling complete/skip_multitransport(). Option (b) would have the connector buffer it internally and replay it on state transition. This would eliminate the driver regression risk Copilot flagged (line 762) since existing drivers wouldn't need updating.

If you confirm (b) on both points I'll rework accordingly. The detection logic (distinguishing multitransport from Demand Active via BasicSecurityHeader flags) and the state extraction pattern are independent of this choice and I can address those either way.

Makes MultitransportBootstrapping functional: reads the server's
optional Initiate Multitransport Request PDUs (0, 1, or 2), then
pauses in MultitransportPending for the application to establish
UDP transport before proceeding to capabilities exchange.

Follows the existing should_perform_X() / mark_X_as_done() pattern
used by TLS upgrade and CredSSP.
@glamberson
Contributor Author

I forged ahead despite my last comment. If you want me to revise, just let me know.

In summary:

  1. Demand Active buffered internally. MultitransportPending stores the raw PDU bytes and replays them through ConnectionActivationSequence::step() when the app calls complete_multitransport() or skip_multitransport(). Drivers don't need re-feed logic.

  2. MultitransportResult enum. Callers pass Success or Failure(hresult) instead of building response PDUs. The connector pairs each with the stored request ID/cookie and validates the count.

  3. Try-decode detection. Instead of peeking at BasicSecurityHeaderFlags bits (which could false-positive on a Demand Active totalLength), we attempt decode::(). The decoder validates SEC_TRANSPORT_REQ internally, so failure cleanly means "not multitransport".

  4. mem::replace for state extraction. Both complete_multitransport() and skip_multitransport() use mem::replace(&mut self.state, Consumed), matching step()'s pattern. Returns ConnectorError on wrong state instead of panicking.

Thanks.
