Error when attempting to optimize stable-diffusion with directML #1267
@PatriceVignola can you please take a look?
I'm hitting almost the same issue. Here's the error log:

Optimizing text_encoder
Invoked with: %341 : Tensor = onnx::Constant(), scope: transformers.models.clip.modeling_clip.CLIPTextModel::/transformers.models.clip.modeling_clip.CLIPTextTransformer::text_model/transformers.models.clip.modeling_clip.CLIPEncoder::encoder/transformers.models.clip.modeling_clip.CLIPEncoderLayer::layers.0/transformers.models.clip.modeling_clip.CLIPSdpaAttention::self_attn

The same happens for text_encoder_2:

Optimizing text_encoder_2
Invoked with: %662 : Tensor = onnx::Constant(), scope: transformers.models.clip.modeling_clip.CLIPTextModelWithProjection::/transformers.models.clip.modeling_clip.CLIPTextTransformer::text_model/transformers.models.clip.modeling_clip.CLIPEncoder::encoder/transformers.models.clip.modeling_clip.CLIPEncoderLayer::layers.0/transformers.models.clip.modeling_clip.CLIPSdpaAttention::self_attn

System information:
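Both scope strings end in CLIPSdpaAttention, i.e. the failure is in the SDPA attention path of the CLIP text encoders. One possible workaround (a sketch, not the project's official fix; attn_implementation="eager" is a standard transformers loading option) is to force the eager attention implementation so the exporter never reaches the scaled_dot_product_attention symbolic:

```python
# Sketch: load the CLIP text encoder with eager (non-SDPA) attention so
# torch.onnx.export never hits the scaled_dot_product_attention symbolic.
# The model id matches the log below; for text_encoder_2 (SDXL) the class
# would be CLIPTextModelWithProjection instead.
from transformers import CLIPTextModel

text_encoder = CLIPTextModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    subfolder="text_encoder",
    attn_implementation="eager",  # avoids CLIPSdpaAttention during export
)
```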
Using the following, I can optimize the model; it requires changing the JSON (Olive modifications). Note that weights.pb is not copied into the optimized unet folder; see the sketch below.
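On the missing weights.pb: a manual post-copy along these lines can work around it (both paths are hypothetical; point them at your actual unoptimized and optimized model folders):

```python
# Sketch: copy the ONNX external-data file (weights.pb) into the optimized
# unet folder when the optimization pass does not carry it over.
# Both directory paths below are placeholder assumptions.
import shutil
from pathlib import Path

unoptimized_unet = Path("models/unoptimized/unet")
optimized_unet = Path("models/optimized/unet")

for pb in unoptimized_unet.glob("*.pb"):
    shutil.copy2(pb, optimized_unet / pb.name)  # copy2 keeps file metadata
```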
The key for this issue is to downgrade transformers to transformers==4.42.4.
I am also stuck on this issue.
This has broken all stable-diffusion converter scripts.
For this issue, downgrading transformers to 4.42.4 works for me.
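Since the pin keeps coming up: the downgrade is just pip install transformers==4.42.4, and a small guard can fail fast before a long export run (a sketch; the 4.42.4 bound is taken from the comments above, not from any official compatibility matrix):

```python
# Sketch: abort early if the installed transformers is newer than the last
# version this thread reports as exporting cleanly (4.42.4).
import transformers
from packaging import version  # packaging ships as a transformers dependency

if version.parse(transformers.__version__) > version.parse("4.42.4"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is reported to break "
        "torch.onnx.export of CLIPSdpaAttention; pin transformers==4.42.4."
    )
```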
When running python stable_diffusion.py --optimize, I get a "TypeError: z_(): incompatible function arguments" error during "Optimizing text_encoder". Note that "Optimizing vae_encoder", "Optimizing vae_decoder", and "Optimizing unet" all succeeded.
Log under "Optimizing text_encoder":
[DEBUG] [olive_evaluator.py:1153:validate_metrics] No priority is specified, but only one sub type metric is specified. Use rank 1 for single for this metric.
[INFO] [run.py:138:run_engine] Running workflow default_workflow
[INFO] [engine.py:986:save_olive_config] Saved Olive config to cache\default_workflow\olive_config.json
[DEBUG] [run.py:179:run_engine] Registering pass OnnxConversion
[DEBUG] [run.py:179:run_engine] Registering pass OrtTransformersOptimization
[DEBUG] [accelerator_creator.py:130:_fill_accelerators] The accelerator device and execution providers are specified, skipping deduce.
[DEBUG] [accelerator_creator.py:169:_check_execution_providers] Supported execution providers for device gpu: ['DmlExecutionProvider', 'CPUExecutionProvider']
[DEBUG] [accelerator_creator.py:199:create_accelerators] Initial accelerators and execution providers: {'gpu': ['DmlExecutionProvider']}
[INFO] [accelerator_creator.py:224:create_accelerators] Running workflow on accelerator specs: gpu-dml
[DEBUG] [run.py:235:run_engine] Pass OnnxConversion already registered
[DEBUG] [run.py:235:run_engine] Pass OpenVINOConversion already registered
[DEBUG] [run.py:235:run_engine] Pass OrtTransformersOptimization already registered
[DEBUG] [run.py:235:run_engine] Pass OrtTransformersOptimization already registered
[INFO] [engine.py:109:initialize] Using cache directory: cache\default_workflow
[INFO] [engine.py:265:run] Running Olive on accelerator: gpu-dml
[INFO] [engine.py:1085:_create_system] Creating target system ...
[DEBUG] [engine.py:1081:create_system] create native OliveSystem SystemType.Local
[INFO] [engine.py:1088:_create_system] Target system created in 0.000000 seconds
[INFO] [engine.py:1097:_create_system] Creating host system ...
[DEBUG] [engine.py:1081:create_system] create native OliveSystem SystemType.Local
[INFO] [engine.py:1100:_create_system] Host system created in 0.000000 seconds
[DEBUG] [engine.py:711:_cache_model] Cached model bebe0e3c to cache\default_workflow\models\bebe0e3c.json
[DEBUG] [engine.py:338:run_accelerator] Running Olive in no-search mode ...
[DEBUG] [engine.py:430:run_no_search] Running ['convert', 'optimize'] with no search ...
[INFO] [engine.py:867:_run_pass] Running pass convert:OnnxConversion
[DEBUG] [resource_path.py:156:create_resource_path] Resource path runwayml/stable-diffusion-v1-5 is inferred to be of type string_name.
[DEBUG] [resource_path.py:156:create_resource_path] Resource path user_script.py is inferred to be of type file.
[DEBUG] [resource_path.py:156:create_resource_path] Resource path runwayml/stable-diffusion-v1-5 is inferred to be of type string_name.
[DEBUG] [resource_path.py:156:create_resource_path] Resource path user_script.py is inferred to be of type file.
[DEBUG] [resource_path.py:156:create_resource_path] Resource path C:\Users\ih\sd-test\converter\olive\examples\stable_diffusion\user_script.py is inferred to be of type file.
[DEBUG] [dummy_inputs.py:45:get_dummy_inputs] Using dummy_inputs_func to get dummy inputs
[DEBUG] [conversion.py:234:_export_pytorch_model] Converting model on device cpu with dtype None.
Traceback (most recent call last):
File "C:\Users\ih\sd-test\converter\olive\examples\stable_diffusion\stable_diffusion.py", line 433, in
main()
File "C:\Users\ih\sd-test\converter\olive\examples\stable_diffusion\stable_diffusion.py", line 370, in main
optimize(common_args.model_id, common_args.provider, unoptimized_model_dir, optimized_model_dir)
File "C:\Users\ih\sd-test\converter\olive\examples\stable_diffusion\stable_diffusion.py", line 244, in optimize
run_res = olive_run(olive_config)
File "C:\Users\ih.conda\envs\converter\lib\site-packages\olive\workflows\run\run.py", line 297, in run
return run_engine(package_config, run_config, data_root)
File "C:\Users\ih.conda\envs\converter\lib\site-packages\olive\workflows\run\run.py", line 261, in run_engine
engine.run(
File "C:\Users\ih.conda\envs\converter\lib\site-packages\olive\engine\engine.py", line 267, in run
run_result = self.run_accelerator(
File "C:\Users\ih.conda\envs\converter\lib\site-packages\olive\engine\engine.py", line 339, in run_accelerator
output_footprint = self.run_no_search(
File "C:\Users\ih.conda\envs\converter\lib\site-packages\olive\engine\engine.py", line 431, in run_no_search
should_prune, signal, model_ids = self._run_passes(
File "C:\Users\ih.conda\envs\converter\lib\site-packages\olive\engine\engine.py", line 829, in _run_passes
model_config, model_id = self._run_pass(
File "C:\Users\ih.conda\envs\converter\lib\site-packages\olive\engine\engine.py", line 937, in _run_pass
output_model_config = host.run_pass(p, input_model_config, data_root, output_model_path, pass_search_point)
File "C:\Users\ih.conda\envs\converter\lib\site-packages\olive\systems\local.py", line 32, in run_pass
output_model = the_pass.run(model, data_root, output_model_path, point)
File "C:\Users\ih.conda\envs\converter\lib\site-packages\olive\passes\olive_pass.py", line 224, in run
output_model = self._run_for_config(model, data_root, config, output_model_path)
File "C:\Users\ih.conda\envs\converter\lib\site-packages\olive\passes\onnx\conversion.py", line 132, in _run_for_config
output_model = self._run_for_config_internal(model, data_root, config, output_model_path)
File "C:\Users\ih.conda\envs\converter\lib\site-packages\olive\passes\onnx\conversion.py", line 182, in _run_for_config_internal
return self._convert_model_on_device(model, data_root, config, output_model_path, device, torch_dtype)
File "C:\Users\ih.conda\envs\converter\lib\site-packages\olive\passes\onnx\conversion.py", line 439, in _convert_model_on_device
converted_onnx_model = OnnxConversion._export_pytorch_model(
File "C:\Users\ih.conda\envs\converter\lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "C:\Users\ih.conda\envs\converter\lib\site-packages\olive\passes\onnx\conversion.py", line 285, in _export_pytorch_model
torch.onnx.export(
File "C:\Users\ih.conda\envs\converter\lib\site-packages\torch\onnx\utils.py", line 551, in export
_export(
File "C:\Users\ih.conda\envs\converter\lib\site-packages\torch\onnx\utils.py", line 1648, in _export
graph, params_dict, torch_out = _model_to_graph(
File "C:\Users\ih.conda\envs\converter\lib\site-packages\torch\onnx\utils.py", line 1174, in _model_to_graph
graph = _optimize_graph(
File "C:\Users\ih.conda\envs\converter\lib\site-packages\torch\onnx\utils.py", line 714, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File "C:\Users\ih.conda\envs\converter\lib\site-packages\torch\onnx\utils.py", line 1997, in _run_symbolic_function
return symbolic_fn(graph_context, *inputs, **attrs)
File "C:\Users\ih.conda\envs\converter\lib\site-packages\torch\onnx\symbolic_helper.py", line 292, in wrapper
return fn(g, *args, **kwargs)
File "C:\Users\ih.conda\envs\converter\lib\site-packages\torch\onnx\symbolic_opset14.py", line 177, in scaled_dot_product_attention
query_scaled = g.op("Mul", query, g.op("Sqrt", scale))
File "C:\Users\ih.conda\envs\converter\lib\site-packages\torch\onnx_internal\jit_utils.py", line 93, in op
return _add_op(self, opname, *raw_args, outputs=outputs, **kwargs)
File "C:\Users\ih.conda\envs\converter\lib\site-packages\torch\onnx_internal\jit_utils.py", line 244, in _add_op
inputs = [_const_if_tensor(graph_context, arg) for arg in args]
File "C:\Users\ih.conda\envs\converter\lib\site-packages\torch\onnx_internal\jit_utils.py", line 244, in
inputs = [_const_if_tensor(graph_context, arg) for arg in args]
File "C:\Users\ih.conda\envs\converter\lib\site-packages\torch\onnx_internal\jit_utils.py", line 276, in _const_if_tensor
return _add_op(graph_context, "onnx::Constant", value_z=arg)
File "C:\Users\ih.conda\envs\converter\lib\site-packages\torch\onnx_internal\jit_utils.py", line 252, in _add_op
node = _create_node(
File "C:\Users\ih.conda\envs\converter\lib\site-packages\torch\onnx_internal\jit_utils.py", line 312, in _create_node
add_attribute(node, key, value, aten=aten)
File "C:\Users\ih.conda\envs\converter\lib\site-packages\torch\onnx_internal\jit_utils.py", line 363, in add_attribute
return getattr(node, f"{kind}")(name, value)
TypeError: z(): incompatible function arguments. The following argument types are supported:
1. (self: torch._C.Node, arg0: str, arg1: torch.Tensor) -> torch._C.Node
Invoked with: %340 : Tensor = onnx::Constant(), scope: transformers.models.clip.modeling_clip.CLIPTextModel::/transformers.models.clip.modeling_clip.CLIPTextTransformer::text_model/transformers.models.clip.modeling_clip.CLIPEncoder::encoder/transformers.models.clip.modeling_clip.CLIPEncoderLayer::layers.0/transformers.models.clip.modeling_clip.CLIPSdpaAttention::self_attn
, 'value', 0.125
(Occurred when translating scaled_dot_product_attention).
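What the traceback shows: torch's opset-14 symbolic for scaled_dot_product_attention hands g.op("Sqrt", scale) a plain Python float (0.125 here, i.e. 1/sqrt(64) for a 64-dim head); _const_if_tensor then tries to wrap it in an onnx::Constant via node.z_("value", 0.125), and z_() only accepts a torch.Tensor, hence the TypeError. A minimal repro sketch (hypothetical, distilled from the traceback above; on an affected torch/transformers combination it should fail the same way):

```python
# Sketch: any module that reaches torch.onnx.export through
# F.scaled_dot_product_attention with a constant float scale exercises the
# same symbolic path that CLIPSdpaAttention trips in the traceback above.
import torch
import torch.nn.functional as F


class SdpaBlock(torch.nn.Module):
    def forward(self, q, k, v):
        # 0.125 == 1 / sqrt(64), matching the constant carried by the
        # failing onnx::Constant node in the log
        return F.scaled_dot_product_attention(q, k, v, scale=0.125)


q = k = v = torch.randn(1, 8, 16, 64)
torch.onnx.export(SdpaBlock(), (q, k, v), "sdpa_repro.onnx", opset_version=14)
```

This would also explain why pinning transformers to 4.42.4 helps: presumably the older CLIP attention code takes a path that never hands the symbolic a raw float scale.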
Other information