Hi all,
I am currently investigating options to decrease the run time of my bifacial_radiance simulations, and I was wondering if there is a way to do the irradiance calculations via the GPU-based Accelerad rtrace.exe.
Accelerad is supposed to provide GPU-based calculations by replacing Radiance's original rtrace and rpict programs. Since bifacial_radiance calls Radiance's rtrace.exe, I thought it would be sufficient to replace the binaries of my Radiance installation with the ones from Accelerad.
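For anyone reproducing this, the swap looked roughly like the sketch below. The paths are placeholders for my system; Accelerad ships its tools with an accelerad_ prefix, so each one has to be copied over the corresponding Radiance binary:

```python
import shutil
from pathlib import Path

# Placeholder install locations -- adjust to your system.
RADIANCE_BIN = Path("/usr/local/radiance/bin")
ACCELERAD_BIN = Path("/usr/local/accelerad/bin")

for tool in ("rtrace", "rpict", "rcontrib"):
    original = RADIANCE_BIN / tool
    replacement = ACCELERAD_BIN / f"accelerad_{tool}"
    # Keep a backup so the swap is reversible.
    shutil.copy2(original, original.with_suffix(".orig"))
    shutil.copy2(replacement, original)
```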
However, directly comparing run times of bifacial_radiance with the original Radiance rtrace against the Accelerad rtrace shows no improvement in execution time.
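One sanity check (a small sketch, assuming your rtrace build accepts the common Radiance -version flag) is to confirm which binary bifacial_radiance actually picks up:

```python
import shutil
import subprocess

# Confirm which rtrace is first on PATH (the one bifacial_radiance will call).
print("rtrace resolves to:", shutil.which("rtrace"))

# Most Radiance tools print their build string with -version; an
# Accelerad build should identify itself here.
result = subprocess.run(["rtrace", "-version"], capture_output=True, text=True)
print(result.stdout.strip())
```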
I am running the simulations on a Windows computer for the original Radiance installation and on a Linux machine for the version with the Accelerad binaries. Both run without problems. The Windows computer has an 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz with 4 cores and no GPU. The Linux computer has an NVIDIA Tesla M10 with 5 multiprocessors.
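For context, the workflow being timed follows the standard fixed-tilt bifacial_radiance pattern; a minimal sketch (the scene parameters below are placeholders, not my actual setup):

```python
import time
from bifacial_radiance import RadianceObj, AnalysisObj

demo = RadianceObj('accelerad_test', 'temp')   # placeholder project folder
demo.setGround(0.25)
metdata = demo.readWeatherFile(demo.getEPW(lat=37.5, lon=-77.6))
demo.gendaylit(4020)  # one timestamp; the real run repeats this per timestamp

module = demo.makeModule(name='test-module', x=1, y=2)
scene = demo.makeScene(module, {'tilt': 10, 'pitch': 3,
                                'clearance_height': 0.2, 'azimuth': 180,
                                'nMods': 20, 'nRows': 7})
octfile = demo.makeOct()

analysis = AnalysisObj(octfile, demo.basename)
frontscan, backscan = analysis.moduleAnalysis(scene)

start = time.time()
analysis.analysis(octfile, demo.basename, frontscan, backscan)  # rtrace runs here
print(f"rtrace-heavy analysis step: {time.time() - start:.1f} s")
```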
The Windows simulation of 3 timestamps takes 7 minutes; on Linux the same simulation takes 8 minutes 45 seconds. When calling nvidia-smi -l 10, the output shows that rtrace is using the GPU, but only a small fraction of the available memory is in use:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.12 Driver Version: 525.85.12 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla M10 On | 00000000:0B:00.0 Off | N/A |
| N/A 47C P0 41W / 53W | 708MiB / 8192MiB | 100% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 691523 C rtrace 705MiB |
+-----------------------------------------------------------------------------+
I appreciate any help!
Thanks! Let us know what your research uncovers here. We don't have any expertise with Accelerad, but it's likely that the community could benefit from your findings!