Postponed the assessment of GPU memory for testing (#601)
Since the GPUtil library does not let us measure GPU memory usage for a specific process, we need a more conservative approach to gauging GPU usage.

GPU memory usage may not stabilize during the initial iterations, as illustrated below:
```
[12999.0, 14257.0, 14569.0, 14617.0, 14623.0, 14621.0]
```
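For context, here is a minimal sketch of how such a history is collected, assuming a hypothetical `load_image_on_gpu()` workload; GPUtil's `memoryUsed` attribute reports megabytes for the whole device, not a single process:
```python
import GPUtil

# memoryUsed is device-wide (in MB), not per-process, so the readings
# below can be influenced by anything else running on the GPU.
mem_usage_history = [GPUtil.getGPUs()[0].memoryUsed]

for _ in range(10):
    load_image_on_gpu()  # hypothetical GPU workload being checked for leaks
    mem_usage_history.append(GPUtil.getGPUs()[0].memoryUsed)

print(mem_usage_history)
```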
Instead of checking `mem_usage_history[4] - mem_usage_history[1] < 180.0`, we now measure 10 points and start the comparison from the midpoint of those points (`mem_usage_history[9] - mem_usage_history[5] < 180.0`).
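
A rough illustration of the relaxed check, using a made-up 11-sample history (initial reading plus 10 iterations) and skipping the warm-up samples before comparing:
```python
# Hypothetical samples in MB; the first few readings are still ramping up.
mem_usage_history = [12999.0, 14257.0, 14569.0, 14617.0, 14623.0, 14621.0,
                     14622.0, 14620.0, 14623.0, 14622.0, 14621.0]

# Compare points from the midpoint onward; growth larger than the rough
# 180 MB margin would indicate a leak.
assert mem_usage_history[9] - mem_usage_history[5] < 180.0
```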

This change addresses #595.

Authors:
  - Gigon Bae (https://github.com/gigony)

Approvers:
  - https://github.com/jakirkham

URL: #601
gigony authored Aug 3, 2023
1 parent 7295429 commit 1408971
Showing 1 changed file with 5 additions and 3 deletions.
```diff
@@ -30,7 +30,7 @@ def test_read_region_cuda_memleak(testimg_tiff_stripe_4096x4096_256_jpeg):
     gpu = gpus[0]
     mem_usage_history = [gpu.memoryUsed]
 
-    for i in range(5):
+    for i in range(10):
         _ = img.read_region(device='cuda')
         gpus = GPUtil.getGPUs()
         gpu = gpus[0]
@@ -40,8 +40,10 @@ def test_read_region_cuda_memleak(testimg_tiff_stripe_4096x4096_256_jpeg):
 
     # The difference in memory usage should be less than 180MB.
     # Note: Since we cannot measure GPU memory usage for a process,
-    # we use a rough number (experimentally measured).
-    assert mem_usage_history[4] - mem_usage_history[1] < 180.0
+    # we use a rough number.
+    # (experimentally measured, assuming that each image load
+    # consumes around 50MB of GPU memory).
+    assert mem_usage_history[9] - mem_usage_history[5] < 180.0
 
 
 def test_read_region_cpu_memleak(testimg_tiff_stripe_4096x4096_256):
```
