Replies: 5 comments 3 replies
-
Hi @C-Ehmke, Do you need a .fld file per variation? If not, you could avoid calling the method for every variation and simply pass all the variations at once (see the sketch below). Hope this helps, Giulia
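For illustration, something along these lines (a minimal sketch only: the variable name, field quantity, and grid bounds are placeholders, and the argument names are assumed from recent PyAEDT documentation):

```python
# Hedged sketch: "my_length", the quantity "E", and the grid bounds are
# placeholders; argument names are assumed from recent PyAEDT docs.
from ansys.aedt.core import Hfss  # older releases import from "pyaedt" instead

hfss = Hfss(project="my_project.aedt", non_graphical=True)

hfss.post.export_field_file_on_grid(
    quantity="E",
    solution=hfss.nominal_adaptive,
    variations={"my_length": ["10mm", "20mm", "30mm"]},  # all values at once
    file_name="fields.fld",
    grid_type="Cartesian",
    grid_start=[0, 0, 0],
    grid_stop=[100, 100, 100],
    grid_step=[1, 1, 1],
)

hfss.release_desktop()
```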
-
@C-Ehmke is there a reason why you are using 0.7? If not, please update to 0.11 and retry.
-
Hi @C-Ehmke, Please upgrade to PyAEDT 0.11.1 and simply try this:
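A minimal sketch of such a per-variation export loop, assuming the 0.11-era argument names; the project name, field quantity, and grid bounds are placeholders:

```python
# Hedged sketch: project name, quantity "E", and grid bounds are
# placeholders; argument names are assumed from the PyAEDT 0.11-era API.
from ansys.aedt.core import Hfss

hfss = Hfss(project="my_project.aedt", non_graphical=True)

# One .fld file per solved variation; the exact return format of
# available_variations.variations() differs between PyAEDT versions.
for i, variation in enumerate(
    hfss.available_variations.variations(output_as_dict=True)
):
    hfss.post.export_field_file_on_grid(
        quantity="E",
        solution=hfss.nominal_adaptive,
        variations=variation,
        file_name=f"variation_{i}.fld",
        grid_type="Cartesian",
        grid_start=[0, 0, 0],
        grid_stop=[100, 100, 100],
        grid_step=[1, 1, 1],
    )

hfss.release_desktop()
```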
Please do let me know if this works. Kind regards, Giulia
-
Hi @C-Ehmke, No, it's not possible; even if you try it manually and record the script, you will see that only the variation at the top is taken. Hope this helps, Kind regards, Giulia
-
Hi guys,
Hope everyone is doing great.
I'm working on a script that uses PyAEDT to automatically export .fld files for a design with many parametric variations (500 to 1000). For this application, the .fld file needs a fairly dense grid of points (about 130,000).
I'm no Python expert, but the algorithm works as intended (I know it could be written better, but for now it does what I need). In this script, I created a function that reads all solutions and uses a for loop to iterate over each variation and export the .fld file with the hfss.post.export_field_file_on_grid() function.
Here is the code:
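A minimal sketch of the loop described above, assuming the older (0.7-era) argument names; the project name, field quantity, and grid settings are placeholders:

```python
# Hedged sketch of the export loop. Argument names follow the older
# (0.7-era) PyAEDT API and are assumptions; quantity, grid bounds, and
# step size are placeholders.
from pyaedt import Hfss

hfss = Hfss(projectname="my_project.aedt", non_graphical=True)

def export_all_fld():
    # Read all solved variations; the exact return format of variations()
    # differs between PyAEDT versions, so treat this as a sketch.
    all_variations = hfss.available_variations.variations()
    for i, variation in enumerate(all_variations):
        hfss.post.export_field_file_on_grid(
            "E",                               # field quantity (placeholder)
            solution=hfss.nominal_adaptive,
            variation_dict=variation,          # one variation per .fld file
            filename="variation_{}.fld".format(i),
            gridtype="Cartesian",
            grid_start=[0, 0, 0],
            grid_stop=[100, 100, 100],
            grid_step=[2, 2, 2],               # 51x51x51, about 130,000 points
        )

export_all_fld()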
The problem is that, at the beginning, the export procedure is satisfactory and efficient, around 5 or 6 .fld files per minute. After a while, I noticed that the rate drops to 1 or 2 files per minute. From what I can tell, it gets worse as the loop goes on. This surprised me because I expected it to be constant, since this was the only thing running on my machine. I'm attaching some images with the CPU and RAM monitors to illustrate.
From what I've read, this could be a caching problem, since RAM consumption increases over time. I've also seen that it's possible to use memoization to make code more time-efficient (small illustration below). Once again, I'm no Python expert, so I'm probably missing something here.
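For example, this is the kind of memoization I mean (just a small sketch with a made-up helper; I'm not sure it applies here, since each variation is exported exactly once):

```python
# Standard-library memoization: repeated calls with the same argument
# return the cached result instead of recomputing. parse_variation() is
# a hypothetical helper, shown only to illustrate the idea.
from functools import lru_cache

@lru_cache(maxsize=None)
def parse_variation(variation_string):
    # Placeholder for a pure computation that repeats with the same input.
    return tuple(part.strip() for part in variation_string.split(","))
```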
Has anyone faced this issue before? How could I make it more efficient? Any suggestions would be much appreciated.
Thank you in advance!