Conversation
Pull request overview
Adds support for additional Sync-model-specific video inference settings in the Runware ComfyUI integration, including new dedicated settings nodes and an example workflow.
Changes:
- Extend Runware Video Inference Settings to emit new settings fields (e.g., `syncMode`, `mode`, `emotion`, `temperature`, `occlusionDetection`), plus nested blocks like `tts`, `activeSpeakerDetection`, and `segments`.
- Add new settings helper nodes: Settings TTS, Active Speaker Detection, Active Speaker Bounding Boxes, and Settings Segments.
- Add `sync:3@0` (sync-3) to the video model search lists (Python + client) and include a new Sync3 workflow JSON.
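For orientation, the expanded settings payload described above might look roughly like the following. The field names come from the PR summary, but the value types, placeholder values, and exact nesting are assumptions, not the integration's actual schema:

```javascript
// Illustrative shape of the expanded video inference settings payload.
// Scalar values below are placeholders, not real defaults.
const settings = {
  // New scalar fields emitted by Runware Video Inference Settings
  syncMode: true,
  mode: "default",          // placeholder value
  emotion: "neutral",       // placeholder value
  temperature: 0.7,
  occlusionDetection: false,
  // Nested blocks built by the new helper nodes
  tts: {},                      // from the Settings TTS node
  activeSpeakerDetection: {
    boundingBoxes: {},          // from the Active Speaker Bounding Boxes node
  },
  segments: [],                 // from the Settings Segments node
};

console.log(Object.keys(settings));
```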
Reviewed changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| workflows/Runware_Video_Inference_Sync3.json | New example workflow wiring Sync3 model + new settings nodes. |
| modules/videoSettings.py | Adds new optional settings params + includes nested settings blocks in emitted settings dict. |
| modules/videoModelSearch.py | Adds sync:3@0 to model list and dimension/resolution maps. |
| modules/videoInferenceSettingsTts.py | New node to build settings.tts. |
| modules/videoInferenceSettingsSegments.py | New node to build settings.segments[]. |
| modules/videoInferenceSettingsActiveSpeakerDetection.py | New node to build settings.activeSpeakerDetection. |
| modules/videoInferenceSettingsActiveSpeakerBoundingBoxes.py | New node to build settings.activeSpeakerDetection.boundingBoxes. |
| modules/videoInference.py | Updates tooltip to reflect expanded settings support. |
| clientlibs/utils.js | Adds toggle handlers for new nodes + new settings fields. |
| clientlibs/types.js | Registers new node types + visual props. |
| clientlibs/main.js | Wires new toggle handlers into extension init. |
| init.py | Registers new nodes in NODE_CLASS_MAPPINGS. |
```javascript
if (useSegment && useCrop) togglePair(useSegment, useCrop, `useSegment${idx}AudioCrop`);
if (useCrop && cropStart) togglePair(useCrop, cropStart, `segment${idx}AudioStartTime`);
if (useCrop && cropEnd) togglePair(useCrop, cropEnd, `segment${idx}AudioEndTime`);
```
In `videoInferenceSettingsSegmentsToggleHandler`, the crop start/end widgets are toggled only off `useSegment${idx}AudioCrop`. If a user disables `useSegment${idx}` while `useSegment${idx}AudioCrop` is still true, the crop start/end widgets can remain enabled because they are not re-evaluated when `useSegment${idx}` changes. Consider enabling `segment${idx}AudioStartTime`/`segment${idx}AudioEndTime` only when both `useSegment${idx}` and `useSegment${idx}AudioCrop` are true (or forcing `useSegment${idx}AudioCrop` to false when the segment is disabled).
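The suggested fix could be sketched as follows. The function and widget names mirror the snippet above but are hypothetical, as the actual handler's structure is not shown here:

```javascript
// Hypothetical sketch: the crop start/end widgets should be enabled only
// when BOTH the segment toggle and its audio-crop toggle are true, so
// disabling the segment always disables the crop widgets too.
function segmentCropEnabled(widgets, idx) {
  const useSegment = widgets[`useSegment${idx}`];
  const useCrop = widgets[`useSegment${idx}AudioCrop`];
  return Boolean(useSegment && useSegment.value && useCrop && useCrop.value);
}

// Example: segment disabled while the crop toggle is still true.
const widgets = {
  useSegment1: { value: false },
  useSegment1AudioCrop: { value: true },
};
console.log(segmentCropEnabled(widgets, 1)); // false — crop widgets stay disabled
```

Evaluating both toggles on every change avoids the stale-state path entirely, since the crop widgets no longer depend on the order in which the toggles were flipped.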