
Use torchmetric PSNR implementation and argument ordering #693

Merged
merged 22 commits into mlcommons:master
Aug 2, 2023

Conversation

FelixSteinbauer
Contributor

@FelixSteinbauer FelixSteinbauer commented Jul 18, 2023

  1. I replaced the self-implemented PSNR computation with the one provided by torchmetrics. The main reason: MAX_I is actually not the same as data_range (even though the original torchmetrics code suggests so).
  2. The ordering of torchmetrics function call arguments is actually prediction ("preds") first and then target ("target"), not the other way around (see the torchmetrics documentation for SSIM, MSE, PSNR, MSLE, MAE, or the code directly, e.g. for MSE). The distinction between target and prediction is probably irrelevant for most metrics (like MSE); for PSNR, however, it does play a role (a short sketch below illustrates this). I changed the order for all torchmetrics calls for consistency and used the respective named arguments for clarity.
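For illustration, a minimal sketch (plain torchmetrics, not GaNDLF code; behavior as observed with the torchmetrics versions discussed further down in this thread) of why the ordering matters for PSNR: with data_range left unset, the range is derived from target, so swapping the arguments changes the result.

import torch
from torchmetrics.image import PeakSignalNoiseRatio

psnr = PeakSignalNoiseRatio()  # no explicit data_range: it is derived from `target`
prediction = torch.tensor([4.0, 3.0, 2.0])
target = torch.tensor([-1.0, 3.0, 4.0])

print(psnr(preds=prediction, target=target))  # range taken from target (5) -> roughly 4.13
print(psnr(preds=target, target=prediction))  # swapped: range now taken from prediction (4) -> roughly 2.19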

I did not test the changes within GaNDLF specifically, as they are minimal. I did test them in our BraTS repo, where they work, and we use essentially the same code anyway.

Fixes #ISSUE_NUMBER

Proposed Changes

  • Use torchmetrics' own PSNR implementation
  • Reorder the target and prediction arguments as intended by torchmetrics

Checklist

  • I have read the CONTRIBUTING guide.
  • My PR is based from the current GaNDLF master .
  • Non-breaking change (does not break existing functionality): provide as many details as possible for any breaking change.
  • Function/class source code documentation added/updated.
  • Code has been blacked for style consistency.
  • If applicable, version information has been updated in GANDLF/version.py.
  • If adding a git submodule, add to list of exceptions for black styling in pyproject.toml file.
  • Usage documentation has been updated, if appropriate.
  • Tests added or modified to cover the changes; if coverage is reduced, please give explanation.
  • If customized dependency installation is required (i.e., a separate pip install step is needed for PR to be functional), please ensure it is reflected in all the files that control the CI, namely: python-test.yml, and all docker files [1,2,3].

…ntation with torchmetric PSNR

- I replaced the self-implemented PSNR computation with the one provided by torchmetrics.
- The ordering of torchmetrics function call arguments is actually prediction ("preds") first and then target ("target"), not the other way around.
@github-actions
Contributor

github-actions bot commented Jul 18, 2023

MLCommons CLA bot All contributors have signed the MLCommons CLA ✍️ ✅

@sarthakpati
Collaborator

Thanks for this PR, @FelixSteinbauer.

Can you elaborate on why using the TorchMetrics implementation would be preferred over what is currently available? Especially since the current implementation seems to handle division by zero better [ref] than that of torchmetrics [ref]? I think this would make sense if we were using the reduction key, though. What do you think?

@sarthakpati sarthakpati self-requested a review July 18, 2023 14:10
@codecov

codecov bot commented Jul 18, 2023

Codecov Report

Merging #693 (d8b1f8d) into master (6579e15) will decrease coverage by 0.01%.
The diff coverage is 100.00%.

❗ Current head d8b1f8d differs from pull request most recent head 974fde4. Consider uploading reports for the commit 974fde4 to get more accurate results

@@            Coverage Diff             @@
##           master     #693      +/-   ##
==========================================
- Coverage   94.69%   94.68%   -0.01%     
==========================================
  Files         117      117              
  Lines        8200     8208       +8     
==========================================
+ Hits         7765     7772       +7     
- Misses        435      436       +1     
Flag Coverage Δ
unittests 94.68% <100.00%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.

Files Changed Coverage Δ
GANDLF/cli/generate_metrics.py 100.00% <100.00%> (ø)
GANDLF/metrics/synthesis.py 100.00% <100.00%> (ø)

... and 1 file with indirect coverage changes


@FelixSteinbauer
Contributor Author

That is a good point. I did neglect the division-by-zero aspect in the description of this PR.

Regarding PSNR, I am actually proposing two changes:

  1. Using torch.max(target) (in this line) instead of torch.max(target) - torch.min(target). When I first wrote the code, I thought the value range made sense, especially because the torchmetrics code looks like it would use the data_range. However, torchmetrics and Wikipedia consistently use the maximum only (see the Wikipedia formula in this article), not the range. I tested manually, and torch and Wikipedia produce the same outputs, which differ from what you get if you use the range. I am not a PSNR expert, so maybe the range makes sense in some scenarios (which is probably why torchmetrics has code segments dedicated to it), but at least for our challenge we decided our metrics should be consistent with Wikipedia and the torchmetrics output.
  2. Letting divisions by zero happen, i.e. going from / (mse + sys.float_info.epsilon) to / mse (in this line). Then you really have the plain Wikipedia formula. torchmetrics will also give you infinity for cases where mse is 0. Returning infinity also makes sense: PSNR is a signal-to-noise ratio, and if there is no noise, the ratio goes to infinity. Or, to cite the Wikipedia article (under "Quality estimation with PSNR"):

In the absence of noise, the two images I and K are identical, and thus the MSE is zero. In this case the PSNR is infinite (or undefined, see Division by zero).
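For completeness, here is a tiny sketch (plain torchmetrics, as used in this thread) of that mse == 0 case:

import torch
from torchmetrics.image import PeakSignalNoiseRatio

psnr = PeakSignalNoiseRatio(data_range=4.0)
x = torch.tensor([0.0, 3.0, 4.0])

# Perfect reconstruction: the MSE is 0, so the ratio diverges and torchmetrics returns inf.
print(psnr(preds=x, target=x))  # tensor(inf)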

I see that, from a practical perspective, you might want to avoid infs and NaNs in your framework. For our challenge, however, we wanted to be as consistent as possible with the "theoretical" definition of PSNR, which also seems to align with the torchmetrics implementation. This (plus readability and simplicity) is why I went straight with the torchmetrics implementation.

Concerning the reduction parameter: if I understood its purpose correctly, it just defines how the elementwise PSNR values are collapsed into an output (summing them up, taking the mean, etc.). The default is the elementwise mean, and that is also what you get if you do 10.0 * torch.log10(torch.max(gt) ** 2 / MSE) (the Wikipedia formula). So I think we would not need to change that?

@sarthakpati
Collaborator

  1. Using torch.max and not torch.max - torch.min won't make any difference if your data's minimum value is 0, as is the case for the brain MRIs processed for BraTS. However, for a more general-purpose solution (such as for CT/PET), utilizing the data range is perhaps more robust. And indeed, this is precisely what torchmetrics does, so we should be fine in either case.

  2. I understand that you would want "theoretical correctness" WRT any metrics. Setting aside the fact that inf and nan in the metrics are definitely problematic in terms of parsing for a framework like GaNDLF, having inf and nan also makes any generated metrics unusable for practical purposes, such as ranking for a challenge, and you will need to do some kind of data engineering to make it work, which is undoubtedly a less-than-ideal solution. Since the addition of epsilon won't have any discernible impact for anything other than mse == 0, isn't using it the more robust solution?

  3. You are right about reduction - it doesn't specifically matter in this case.

@FelixSteinbauer
Contributor Author

  1. For our particular challenge it does make a difference, as we work on (small) segments of the brain scans. These segments do not necessarily include background, so the minimum can be >0. This is actually one of the reasons we noticed that max vs. range produces different results. However, I am now getting unsure about what is actually the best choice for our task, range or maximum. I will need to discuss this with the people in my group who have a better intuition about the meaning of PSNR.

Regarding what torchmetrics does: are you sure it actually computes PSNR using the range? Because for me it does not (maybe that is a versioning thing, though). If you do:

from torchmetrics.image import PeakSignalNoiseRatio
from torchmetrics import MeanSquaredError
import torch

mse = MeanSquaredError()
psnr = PeakSignalNoiseRatio() #torchmetrics version

def psnr_max(preds, target): # manual version using maximum only
    return 10.0 * torch.log10(torch.max(target) ** 2 / mse(preds=preds, target=target))

def psnr_range(preds, target): # manual version using range
    return 10.0 * torch.log10( (torch.max(target)-torch.min(target)) ** 2 / mse(preds=preds, target=target))

target = torch.Tensor([2,3,4])
preds = torch.Tensor([4,3,2])

print(f"psnr_range:\t{psnr_range(preds=preds, target=target)}")
print(f"psrn_max:\t{psnr_max(preds=preds, target=target)}")
print(f"torchmetrics:\t{psnr(preds=preds, target=target)}")

The output on my machine is:

psnr_range:     1.760912537574768
psnr_max:       7.78151273727417
torchmetrics:   7.781512260437012

In the torchmetrics version I have, the line you referenced sets self.min_target to 0 no matter what. Afterwards, in this line, self.min_target is updated with min( min(target), 0 ), which in the above example is min( 2, 0 ) = 0. This is why the data_range in this line is computed as max_target - min_target = 4 - 0 = 4, which is equivalent to using the max (4) rather than the range (2). I agree that this is counter-intuitive, but that is what happens on my machine (-> please verify. Maybe that is a bug in torchmetrics? Or is it intended that 0 (or below) is the lower limit of the range no matter what the actual min(target) is? Or am I doing something wrong?)
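A compact way to restate this behavior (my own approximation of what I observe, not the torchmetrics source itself):

import torch

def effective_data_range(target: torch.Tensor) -> torch.Tensor:
    # the lower bound is 0 unless the target actually contains negative values
    min_v = 0 if torch.min(target) > 0 else torch.min(target)
    max_v = torch.max(target)
    return max_v - min_v

print(effective_data_range(torch.tensor([2.0, 3.0, 4.0])))   # tensor(4.) -> acts like max, not max - min
print(effective_data_range(torch.tensor([-1.0, 3.0, 4.0])))  # tensor(5.) -> negative minima do widen the range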

  2. In the discussion I had today, the consensus was that users should receive infinity from PSNR, as this is what they would expect. For ranking purposes, it should not make a difference, as inf is bigger than any other float. It is still a valid ordering in Python: e.g. float("inf") > 100 -> True, float("inf") < 100 -> False, float("inf") == float("inf") -> True. However, for statistical purposes, PSNR with its infinities does not make much sense. When plugging lists containing infinities into numpy, np.mean will return inf, np.std will return nan, and np.var will produce a RuntimeWarning (on my machine; see the quick check below). In my opinion, it makes more sense to inform users that PSNR failed than to silently set it to another value because that is easier to implement, since the very big values that emerge instead of the infinity will bias the statistical analysis, and the user is not informed about the introduced bias (e.g. by a RuntimeWarning)...
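A quick numpy check of that behavior (the values are just placeholders):

import numpy as np

psnr_scores = np.array([30.0, 42.0, np.inf])  # one "perfect" reconstruction in the mix

print(np.mean(psnr_scores))  # inf
print(np.std(psnr_scores))   # nan (numpy also emits a RuntimeWarning on my machine)
print(np.var(psnr_scores))   # nan, again with a RuntimeWarning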

I am sorry for this lengthy discussion. Maybe what makes sense for GaNDLF is not what makes sense for our specific challenge evaluation. I will also talk to Florian and the others about these aspects and update this post later.

@neuronflow

neuronflow commented Jul 20, 2023

From my understanding, the range-based implementations are meant for images with negative values, i.e. images that do not go from 0 to max?

Imagine an image that has values from -1000 to 0.

As our images do not necessarily contain 0 (background), a range-based implementation is likely to produce nonsense?

I agree it is important to spot perfect reconstructions and we also want to avoid distorting our distribution statistics.

@neuronflow

What is the range for images in the test set? Are all images normalized?

@sarthakpati
Collaborator

sarthakpati commented Jul 20, 2023

@FelixSteinbauer: TBH, I am unsure which version of torchmetrics takes the range by default. The version that is currently used by GaNDLF is 0.8.1, and that version seems to take the range into account [ref]. I am unsure how this exactly gets mapped, though.

I think we had all better confer with a statistician regarding the stability of this entire process. If we take the following example "results":

SubjectID  T_Max  T_Min  MSE  PSNR_range_noEps    PSNR_noRange_noEps  PSNR_fix
001        100    0      5    8.0                 8.0                 8.0
002        100    0      0    inf/nan             inf/nan             1.8014398509481984e+17
003        100    -100   5    9.204119982655925   8.0                 9.204119982655925
004        100    -100   0    inf/nan             inf/nan             2.072583566208128e+17
005        100    25     5    7.5002450535668     8.0                 7.5002450535668
006        100    25     0    inf/nan             inf/nan             1.8887286306425088e+17

PSNR_range_noEps: PSNR with range considered but not epsilon
PSNR_noRange_noEps: PSNR with neither range nor epsilon considered
PSNR_fix: PSNR with both range and epsilon considered, i.e., the current GaNDLF implementation

Firstly, we can clearly see that the PSNR calculations that consider the range are able to discern the difference between the different minimum values in the target, which is, I think, what we would want in a challenge so that the results are as accurate and reliable as possible.

Secondly, in my experience working with the stats profs on our end, they will always suggest either dropping PSNR_range_noEps and PSNR_noRange_noEps, or substituting inf (or nan, depending on how you want to interpret division by zero) with something that cannot be replicated in any other row and that always skews the scaling in the direction appropriate for the analysis (e.g., for something like Hausdorff, this needs to be as close to 0 as possible, and for something like PSNR, it should be as close to inf as possible while still being a real number). The same holds true for PSNR_fix as well, but it is able to accurately model "perfect" reconstruction (i.e., the case of mse == 0) and can take that into account during any modeling (this can be done easily by performing per-feature normalization or feature scaling).

In general, raising run-time exceptions for users is fine when the input is faulty, which is not actually the case when mse==0 (which means "perfect" reconstruction). What do you guys think?

@neuronflow

neuronflow commented Jul 21, 2023

Thanks for conducting these experiments and providing a basis for discussion @sarthakpati ! Currently waiting for feedback from our team.

From my naive perspective, it should not matter much... at least for our ranking; we rank the performance for each metric for each case and sum up the ranks. I don't see how a participating team would profit systematically from one implementation or the other?

In the paper, it might be a bit weird if we report PSNR, which is not really PSNR, but PSNR + epsilon sauce?

Perfect reconstructions for 3D images should be highly unlikely, so it's more of an academic problem?
If perfect reconstruction occurs, we should report it. (But even with epsilon, we would spot these cases because of MSE == 0.)

What is the big benefit of avoiding inf values? Is it about maintaining comparability of performance?

I am not a statistics prof, but we could report:

Team A inpainted `n` out of `570` cases perfectly. For the remaining cases, we observe PSNR of `mean+-SD`.
Team B inpainted `j` out of `570` cases perfectly. For the remaining cases, we observe PSNR of `mean+-SD`.

It would be a bit hard to compare if n != j, but I don't consider it a major problem, as this is how PSNR in the absence of noise is defined. However, perfect reconstructions are very unlikely, so I don't expect we will run into this scenario unless we are dealing with cheating participants?

@sarthakpati
Collaborator

sarthakpati commented Jul 21, 2023

I agree that this is primarily an academic problem, and the entire premise rests on whether or not perfect reconstruction is possible. If it is, then it can be easily detected through MSE, and in that case, if epsilon is not present, the entire PSNR column will need to be "engineered" in one of 3 ways:

  1. Remove the column entirely
  2. Alter the inf/nan values to something that cannot be replicated anywhere in the column
  3. Add epsilon in denominator of PSNR calculation

I feel that 3 is more easily explained in a paper than the rest, and provides a more consistent mechanism for performing calculations.

What is the big benefit of avoiding inf values? Is it about maintaining comparability of performance?

Yup, large real numbers can be compared, but inf values can't. 😄

@neuronflow

neuronflow commented Jul 21, 2023

Yup, large real numbers can be compared, but inf values can't. 😄

# Define a really large number (you can adjust this as needed)
large_number = 10**1000

# Compare the large number to positive infinity
if large_number > float('inf'):
    print("The large number is greater than infinity.")
else:
    print("The large number is not greater than infinity.")

try this code :)

@sarthakpati
Collaborator

float('inf') is well-known to be an approximation [ref]. Regardless, even this check needs to be manual, which is a point of failure/breakage.

@neuronflow

Hmm, for me, both options sound fine. If we go for the epsilon solution, we should probably report median + median absolute deviation instead of mean + SD, as they should be robust to the potentially large PSNRs introduced by epsilon.
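For example (a small sketch with made-up numbers, not challenge data):

import numpy as np

psnr_values = np.array([28.5, 30.1, 31.2, 1.8e17])  # one epsilon-inflated "perfect" case

median = np.median(psnr_values)
mad = np.median(np.abs(psnr_values - median))  # median absolute deviation

print(f"mean +- SD:    {np.mean(psnr_values):.3g} +- {np.std(psnr_values):.3g}")  # dominated by the outlier
print(f"median +- MAD: {median:.3g} +- {mad:.3g}")                                # barely affected by it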

@neuronflow

Sarthak suggested outputting both PSNR with and without epsilon. I believe this is a good idea; then the users can choose and will be aware.

@sarthakpati
Collaborator

Sarthak suggested outputting both PSNR with and without epsilon. I believe this is a good idea; then the users can choose and will be aware.

Yup, let's add a new peak_signal_noise_ratio_eps function with the current implementation. The peak_signal_noise_ratio can have the implementation proposed by @FelixSteinbauer. Both of these can be called in this location in GANDLF.cli.generate_metrics.
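Roughly something like this, perhaps (a sketch with hypothetical variable names and an assumed import path, not the actual GANDLF.cli.generate_metrics code):

import torch
from GANDLF.metrics.synthesis import peak_signal_noise_ratio, peak_signal_noise_ratio_eps  # assumed location

gt_image = torch.rand(1, 1, 32, 32, 32)      # placeholder ground truth
output_image = torch.rand(1, 1, 32, 32, 32)  # placeholder reconstruction

overall_stats_dict = {"sub-001": {}}
overall_stats_dict["sub-001"]["psnr"] = peak_signal_noise_ratio(gt_image, output_image).item()
overall_stats_dict["sub-001"]["psnr_eps"] = peak_signal_noise_ratio_eps(gt_image, output_image).item()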

@FelixSteinbauer
Contributor Author

That sounds like a good solution. Thanks for this constructive and elaborate discussion.

Now, regarding this pull request; Should I just close it? Or can/should I do the modifications for peak_signal_noise_ratio_eps and peak_signal_noise_ratio? (I am not very experienced with large github projects and PRs)

@sarthakpati
Collaborator

That sounds like a good solution. Thanks for this constructive and elaborate discussion.

Sure thing!

Now, regarding this pull request; Should I just close it? Or can/should I do the modifications for peak_signal_noise_ratio_eps and peak_signal_noise_ratio? (I am not very experienced with large github projects and PRs)

I can put this PR as a draft. You can make the changes on this branch itself, and then once you push your changes, they will get automatically reflected on the PR. Sound okay?

@FelixSteinbauer FelixSteinbauer marked this pull request as draft July 21, 2023 18:28
peak_signal_noise_ratio_eps with the initial PSNR implementation (using range and epsilon)
In addition to the vanilla PSNR, the PSNR based on the value range and with epsilon in the denominator is now added to the overall_stats_dict as "psnr_range_eps"
@neuronflow

From my understanding, Sarthak's goal is to have one implementation to rule them all?

This is tricky as, from my understanding, correct computation of PSNR might require top-down knowledge about the images that cannot be derived from the images themselves. One example is our images not featuring background; here PSNR needs to be computed for a range from 0 to max.

To make it more complicated, some of the images in the BraTS dataset seem to feature negative values, which are probably artifacts from registration or skull-stripping. Such specifics will vary from dataset to dataset.

The only option I see to serve and please everyone is to provide arguments for defining min and max for the range computation. At this stage, you could also think about adding an option to define epsilon.

@sarthakpati
Collaborator

The current torchmetrics version getting installed with GaNDLF takes care of the range, and I think we should keep it like that:

>>> import torch
>>> import torchmetrics
>>> torchmetrics.__version__
'0.8.1'
>>> from torchmetrics.image import PeakSignalNoiseRatio
>>> psnr = PeakSignalNoiseRatio()
>>> prediction = torch.Tensor([4,3,2])
>>> target_0 = torch.Tensor([2,3,4])    
>>> target_1 = torch.Tensor([0,3,4]) 
>>> target_2 = torch.Tensor([-1,3,4]) 
>>> psnr(preds=prediction, target=target_0)  
tensor(7.7815)
>>> psnr(preds=prediction, target=target_1) 
tensor(3.8021)
>>> psnr(preds=prediction, target=target_2) 
tensor(4.1266)

@sarthakpati sarthakpati marked this pull request as ready for review July 21, 2023 20:13
@FelixSteinbauer
Contributor Author

I don't think we have this case in our dataset...
At least for the BraTS 2023 Training Dataset, the minima are strictly <= 0, and the values < 0 are very rare outliers (artefacts).

@sarthakpati
Collaborator

At least for the BraTS 2023 Training Dataset, the minima are strictly <= 0, and the values < 0 are very rare outliers (artefacts).

If that's the case, then is it not prudent for the artefacts themselves to be fixed? And the PSNR calculation with proper min and max will correctly point this issue out.

@FelixSteinbauer
Contributor Author

Fixing artefacts would be great. But I think the BraTS Umbrella organisation needs to deal with this topic.

The outliers are not that big. The 4 strongest outliers in the training set have the following ranges:
(-77.7, 1458.07)
(-131.9, 2704.7)
(-162.0, 4481.0)
(-192.2, 8025.7)
So the difference between the (0,max) range and the (min,max) range is just a few (2-5) percent.
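A quick check of those numbers:

# the four strongest outlier ranges listed above
outliers = [(-77.7, 1458.07), (-131.9, 2704.7), (-162.0, 4481.0), (-192.2, 8025.7)]
for lo, hi in outliers:
    print(f"(0, max): {hi:9.1f}   (min, max): {hi - lo:9.1f}   difference: {-lo / (hi - lo):.1%}")
# prints differences of roughly 5.1%, 4.6%, 3.5% and 2.3%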

@FelixSteinbauer
Contributor Author

So, I think I managed to incorporate the ideas/issues from the discussion above into a unified PSNR function by adding parameters for data_range and epsilon. The function now looks like this:

def peak_signal_noise_ratio(target, prediction, data_range=None, epsilon=None) -> torch.Tensor:
    """
    Computes the peak signal to noise ratio between the target and prediction.

    Args:
        target (torch.Tensor): The target tensor.
        prediction (torch.Tensor): The prediction tensor.
        data_range (float, optional): If not None, this data range is used (as the numerator) instead of computing it from the given data. Defaults to None.
        epsilon (float, optional): If not None, this epsilon is added to the denominator of the fraction to avoid infinity as output. Defaults to None.
    """

    if epsilon is None:
        psnr = PeakSignalNoiseRatio(data_range=data_range)
        return psnr(preds=prediction, target=target)
    else:  # reimplement the torchmetrics PSNR, but with epsilon in the denominator
        mse = mean_squared_error(target, prediction)
        if data_range is None:  # compute data_range like torchmetrics if not given
            min_v = 0 if torch.min(target) > 0 else torch.min(target)  # look at this line
            max_v = torch.max(target)
            data_range = max_v - min_v
        return 10.0 * torch.log10(data_range**2 / (mse + epsilon))

If you want:

  • The default torchmetrics PSNR version:
    • peak_signal_noise_ratio(target, prediction)
  • A more robust version that does not result in infinity:
    • peak_signal_noise_ratio(target, prediction, epsilon=sys.float_info.epsilon)
  • A different range than what torchmetrics would compute:
    • peak_signal_noise_ratio(target, prediction, data_range=torch.max(prediction)-torch.min(prediction))
  • To use your top-down knowledge about the actual data range (independent of the sample you are currently looking at):
    • peak_signal_noise_ratio(target, prediction, data_range=1.0) (this would be for normalised images, which will be relevant for our challenge)

Does this cover every use case we wanted? Does that make everyone happy? Suggestions for improvement?

@FelixSteinbauer FelixSteinbauer marked this pull request as ready for review July 29, 2023 11:09
Collaborator

@sarthakpati sarthakpati left a comment


Minor revision requested, but should be good to go otherwise.

GANDLF/metrics/synthesis.py (review thread resolved)
GANDLF/metrics/synthesis.py (outdated review thread, resolved)
@neuronflow

neuronflow commented Jul 31, 2023

I made an example that should clarify the behavior of torchmetrics:

import torch
import torchmetrics

print(torchmetrics.__version__)  # prints "1.0.1"
from torchmetrics import MeanSquaredError
from torchmetrics.image import PeakSignalNoiseRatio

mse = MeanSquaredError()
prediction = torch.Tensor([4, 3, 2])
target_0 = torch.Tensor([2, 3, 4])
target_1 = torch.Tensor([0, 3, 4])
target_2 = torch.Tensor([-1, 3, 4])


def psnr(preds, target):
    min_v = 0 if torch.min(target) > 0 else torch.min(target)  # look at this line
    max_v = torch.max(target)
    data_range = max_v - min_v
    result = 10.0 * torch.log10(data_range**2 / mse(preds=preds, target=target))
    print(result)
    return result


def torchmetrics_original_psnr(range, preds, target):
    psnr_computer = PeakSignalNoiseRatio(data_range=range)
    psnr = psnr_computer(
        preds=preds,
        target=target,
    )
    print(psnr)
    return psnr


psnr(preds=prediction, target=target_0)
torchmetrics_original_psnr(range=(0, 4), preds=prediction, target=target_0)
print("for positive values torchmetrics ignores the minimum in the data and chooses 0")
torchmetrics_original_psnr(range=(2, 4), preds=prediction, target=target_0)


psnr(preds=prediction, target=target_1)
torchmetrics_original_psnr(range=(0, 4), preds=prediction, target=target_1)

psnr(preds=prediction, target=target_2)
torchmetrics_original_psnr(range=(-1, 4), preds=prediction, target=target_2)

Collaborator

@sarthakpati sarthakpati left a comment


Minor revisions and we should be good to go.

GANDLF/metrics/synthesis.py (outdated review thread, resolved)
FelixSteinbauer and others added 6 commits July 31, 2023 17:06
I don't know why changing this comment broke the testing pipeline. Maybe it was not the comment. I changed the quotation marks hoping that would help (probably does not though...)
Now the code should be in the state when it last worked
I do not know why the pipeline fails. To me, it seems unrelated to the changes made since the last successful run.
@FelixSteinbauer
Contributor Author

@sarthakpati Sorry to bother you again, but I really do not understand why the pipeline is failing (OpenFL-Test & CI-PyTest).

I already tried to revert to the last state where the checks succeeded, but the checks still fail. I am not sure what to do now; I don't even understand what the reason for these errors is.
Could you please take a look at the error logs?

@sarthakpati
Collaborator

Apologies, but due to an unforeseen issue, OpenFL tests are failing [ref]. Please hang on tight while we are coordinating a fix, and we will re-initiate the failing run automatically.

Collaborator

@sarthakpati sarthakpati left a comment


LGTM!

@sarthakpati sarthakpati merged commit 44e6748 into mlcommons:master Aug 2, 2023
14 checks passed
@github-actions github-actions bot locked and limited conversation to collaborators Aug 2, 2023