
Added new settings params support#126

Open
Sirsho1997 wants to merge 1 commit into main from feature-syncUpdateVideoInference

Conversation

@Sirsho1997
Contributor

No description provided.

Copilot AI left a comment

Pull request overview

Adds support for additional Sync-model-specific video inference settings in the Runware ComfyUI integration, including new dedicated settings nodes and an example workflow.

Changes:

  • Extend Runware Video Inference Settings to emit new settings fields (e.g., syncMode, mode, emotion, temperature, occlusionDetection, plus nested blocks like tts, activeSpeakerDetection, segments).
  • Add new settings helper nodes: Settings TTS, Active Speaker Detection, Active Speaker Bounding Boxes, and Settings Segments.
  • Add sync:3@0 (sync-3) to video model search lists (Python + client) and include a new Sync3 workflow JSON.
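
To make the nested structure concrete, here is a hedged illustration of what an emitted settings object carrying these blocks might look like. Only the top-level key names (syncMode, mode, emotion, temperature, occlusionDetection, tts, activeSpeakerDetection, segments, boundingBoxes) come from the summary above; every nested field name and value is a hypothetical placeholder, since the real schema is defined in modules/videoSettings.py and may differ.

```javascript
// Illustrative only: a plausible settings payload with the new fields.
// Top-level keys are named in the PR summary; all nested field names
// and values below are hypothetical placeholders.
const exampleSettings = {
  syncMode: true,
  mode: "lipsync",          // hypothetical value
  emotion: "neutral",       // hypothetical value
  temperature: 0.5,         // hypothetical value
  occlusionDetection: true,
  tts: { voice: "example-voice" },                  // hypothetical fields
  activeSpeakerDetection: {
    boundingBoxes: { minWidth: 64, minHeight: 64 }, // hypothetical fields
  },
  segments: [
    { audioStartTime: 0.0, audioEndTime: 2.5 },     // hypothetical fields
  ],
};
```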

Reviewed changes

Copilot reviewed 12 out of 12 changed files in this pull request and generated 1 comment.

Summary per file:

workflows/Runware_Video_Inference_Sync3.json: New example workflow wiring the Sync3 model to the new settings nodes.
modules/videoSettings.py: Adds new optional settings params and includes the nested settings blocks in the emitted settings dict.
modules/videoModelSearch.py: Adds sync:3@0 to the model list and to the dimension/resolution maps.
modules/videoInferenceSettingsTts.py: New node to build settings.tts.
modules/videoInferenceSettingsSegments.py: New node to build settings.segments[].
modules/videoInferenceSettingsActiveSpeakerDetection.py: New node to build settings.activeSpeakerDetection.
modules/videoInferenceSettingsActiveSpeakerBoundingBoxes.py: New node to build settings.activeSpeakerDetection.boundingBoxes.
modules/videoInference.py: Updates the tooltip to reflect the expanded settings support.
clientlibs/utils.js: Adds toggle handlers for the new nodes and new settings fields.
clientlibs/types.js: Registers the new node types and their visual props.
clientlibs/main.js: Wires the new toggle handlers into extension init.
__init__.py: Registers the new nodes in NODE_CLASS_MAPPINGS.


Comment thread: clientlibs/utils.js, lines +1576 to +1578
if (useSegment && useCrop) togglePair(useSegment, useCrop, `useSegment${idx}AudioCrop`);
if (useCrop && cropStart) togglePair(useCrop, cropStart, `segment${idx}AudioStartTime`);
if (useCrop && cropEnd) togglePair(useCrop, cropEnd, `segment${idx}AudioEndTime`);

Copilot AI Apr 14, 2026

In videoInferenceSettingsSegmentsToggleHandler, the crop start/end widgets are driven only by useSegment{idx}AudioCrop. If a user disables useSegment{idx} while useSegment{idx}AudioCrop is still true, the crop start/end widgets can stay enabled, because they are not re-evaluated when useSegment{idx} changes. Consider enabling segment{idx}AudioStartTime/segment{idx}AudioEndTime only when both useSegment{idx} and useSegment{idx}AudioCrop are true, or forcing useSegment{idx}AudioCrop to false when the segment is disabled.
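
The suggested fix can be sketched as follows. This is a minimal illustration, not the extension's actual code: the widget shape ({ value, disabled }) and the function name are hypothetical stand-ins for the real ComfyUI widgets and the togglePair helper used in clientlibs/utils.js.

```javascript
// Hedged sketch of the review suggestion: crop start/end widgets are
// enabled only when BOTH the segment toggle and its audio-crop toggle
// are on. The widget shape ({ value, disabled }) and this function name
// are hypothetical stand-ins for the real ComfyUI widget API.
function updateSegmentCropWidgets(useSegment, useCrop, cropStart, cropEnd) {
  const segmentOn = Boolean(useSegment && useSegment.value);
  // Re-evaluate the crop toggle against the parent segment toggle, so
  // disabling the segment always disables its crop fields as well.
  const cropOn = segmentOn && Boolean(useCrop && useCrop.value);
  if (useCrop) useCrop.disabled = !segmentOn;
  if (cropStart) cropStart.disabled = !cropOn;
  if (cropEnd) cropEnd.disabled = !cropOn;
}
```

With this shape, turning useSegment{idx} off disables the crop start/end widgets regardless of the crop toggle's value, which is the behavior the comment asks for.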

