
"module 'torch.distributed' has no attribute 'tensor'" error while loading model with LoRA #17

@starkeepersam

Description


I am using Diffuse v0.4.9-beta. Loading the Z-Image Turbo model on its own works fine, but when I load it together with the 40s Cyberpunk LoRA I get the error below:

module 'torch.distributed' has no attribute 'tensor'
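For anyone triaging this: the check that fails (in peft's `_get_in_out_features`, see the trace below) touches the `torch.distributed.tensor` submodule, which some torch builds apparently do not ship. A minimal diagnostic sketch (the `has_submodule` helper is mine, not part of Diffuse, torch, or peft) that checks for the submodule without needing to execute it:

```python
import importlib.util

def has_submodule(package: str, submodule: str) -> bool:
    """Return True if package.submodule can be located without executing it."""
    if importlib.util.find_spec(package) is None:
        return False
    return importlib.util.find_spec(f"{package}.{submodule}") is not None

# Sanity check against a stdlib package; inside the Diffuse Python runtime,
# has_submodule("torch.distributed", "tensor") would be the interesting call.
print(has_submodule("json", "decoder"))  # → True
```

On an affected install, `has_submodule("torch.distributed", "tensor")` returning False would confirm the torch build simply lacks the submodule that peft's DTensor check assumes.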


Here is the log extract related to this error:

2026-02-01 19:21:39.971 +05:30 [INF] [PythonPipeline] [PythonRuntime] Loading pipeline components...
2026-02-01 19:21:41.332 +05:30 [ERR] [PythonPipeline] [PythonRuntime] AttributeError exception occurred
CSnakes.Runtime.PythonRuntimeException: module 'torch.distributed' has no attribute 'tensor'
File "C:\AI\Diffuse\PythonRuntime\Pipelines\ZImagePipeline.py", line 97, in reload
Utils.load_lora_weights(_pipeline, config)

File "C:\AI\Diffuse\PythonRuntime\Python\Lib\tensorstack\utils.py", line 149, in load_lora_weights
pipeline.load_lora_weights(lora.path, weight_name=lora.weights, adapter_name=lora.name)

File "C:\AI\Diffuse\PythonRuntime\Pipelines.default-rocm\Lib\site-packages\diffusers\loaders\lora_pipeline.py", line 5190, in load_lora_weights
self.load_lora_into_transformer(

File "C:\AI\Diffuse\PythonRuntime\Pipelines.default-rocm\Lib\site-packages\diffusers\loaders\lora_pipeline.py", line 5222, in load_lora_into_transformer
transformer.load_lora_adapter(

File "C:\AI\Diffuse\PythonRuntime\Pipelines.default-rocm\Lib\site-packages\diffusers\loaders\peft.py", line 326, in load_lora_adapter
inject_adapter_in_model(

File "C:\AI\Diffuse\PythonRuntime\Pipelines.default-rocm\Lib\site-packages\peft\mapping.py", line 88, in inject_adapter_in_model
peft_model = tuner_cls(
^^^^^^^^^^

File "C:\AI\Diffuse\PythonRuntime\Pipelines.default-rocm\Lib\site-packages\peft\tuners\tuners_utils.py", line 295, in __init__
self.inject_adapter(self.model, adapter_name, low_cpu_mem_usage=low_cpu_mem_usage, state_dict=state_dict)

File "C:\AI\Diffuse\PythonRuntime\Pipelines.default-rocm\Lib\site-packages\peft\tuners\tuners_utils.py", line 814, in inject_adapter
self._create_and_replace(

File "C:\AI\Diffuse\PythonRuntime\Pipelines.default-rocm\Lib\site-packages\peft\tuners\lora\model.py", line 249, in _create_and_replace
new_module = self._create_new_module(lora_config, adapter_name, target, device_map=device_map, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\AI\Diffuse\PythonRuntime\Pipelines.default-rocm\Lib\site-packages\peft\tuners\lora\model.py", line 336, in _create_new_module
new_module = dispatcher(target, adapter_name, lora_config=lora_config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\AI\Diffuse\PythonRuntime\Pipelines.default-rocm\Lib\site-packages\peft\tuners\lora\layer.py", line 2282, in dispatch_default
new_module = Linear(target, adapter_name, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\AI\Diffuse\PythonRuntime\Pipelines.default-rocm\Lib\site-packages\peft\tuners\lora\layer.py", line 619, in __init__
LoraLayer.__init__(self, base_layer, **kwargs)

File "C:\AI\Diffuse\PythonRuntime\Pipelines.default-rocm\Lib\site-packages\peft\tuners\lora\layer.py", line 126, in __init__
in_features, out_features = _get_in_out_features(base_layer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\AI\Diffuse\PythonRuntime\Pipelines.default-rocm\Lib\site-packages\peft\tuners\tuners_utils.py", line 161, in _get_in_out_features
if torch_supports_dtensor and isinstance(module.weight, torch.distributed.tensor.DTensor):
^^^^^^^^^^^^^^^^^^^^^^^^

2026-02-01 19:21:41.354 +05:30 [ERR] (Python stack trace identical to the one above)

2026-02-01 19:21:41.356 +05:30 [ERR] [PipelineServer] [ReloadPipeline] An exception occurred reloading pipeline.
System.Exception: module 'torch.distributed' has no attribute 'tensor'
---> CSnakes.Runtime.PythonRuntimeException: module 'torch.distributed' has no attribute 'tensor'
(Python stack trace identical to the one above)

--- End of inner exception stack trace ---
at TensorStack.Python.PythonPipeline.<>c__DisplayClass17_0.b__0()
at System.Threading.Tasks.Task`1.InnerInvoke()
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location ---
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
--- End of stack trace from previous location ---
at DiffuseApp.Common.PipelineServer.ReloadPipelineAsync(PipelineRequest request, PythonPipeline pipeline, CancellationToken cancellationToken)
2026-02-01 19:21:41.384 +05:30 [INF] [PipelineServer] [PipelineChannel] Waiting for request.
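The raising frame (tuners_utils.py line 161) already guards the DTensor branch with a `torch_supports_dtensor` flag, yet the attribute access `torch.distributed.tensor` still blows up on this build. A defensive sketch of that pattern (illustrative names, not peft's actual code) that degrades gracefully when the submodule is missing:

```python
def _get_dtensor_class():
    """Resolve DTensor lazily; some torch builds lack torch.distributed.tensor."""
    try:
        from torch.distributed.tensor import DTensor
        return DTensor
    except (ImportError, AttributeError):
        return None

def is_dtensor(weight) -> bool:
    """True only when DTensor is available AND the weight actually is one."""
    dtensor_cls = _get_dtensor_class()
    return dtensor_cls is not None and isinstance(weight, dtensor_cls)

# A plain object is never reported as a DTensor, even on builds
# where torch.distributed.tensor does not exist.
print(is_dtensor(object()))  # → False
```

With a guard like this, a torch build without distributed tensor support would fall through to the ordinary `in_features`/`out_features` lookup instead of raising the AttributeError seen in the log.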
