statistical test to determine peak significance #124
Comments
Define a separate stats submodule whose functions take fooof output (slope, amp, etc.) and any other needed arguments (e.g. window_n). This leaves the fooof API untouched (see the sketch below the quoted post).
On Feb 5, 2019, at 11:37 AM, Richard Gao wrote:
Currently, peaks are identified based on a combination of a user-defined threshold and a comparison to the residual variance of the PSD after peak fitting.
One possible way to make this more theoretically grounded in stats is suggested here: https://atmos.washington.edu/~dennis/552_Notes_6b.pdf
pg 167: statistical significance of spectral peaks
Essentially, it compares the peak height to a theoretical null determined under a colored-noise model, where the power at each frequency follows an exponential distribution. Given the number of windows used to compute the PSD (which sets the degrees of freedom), one can compute a p-value against a pre-defined alpha to determine how likely it is that the detected "peak" occurred by chance.
Since fooof already fits an aperiodic component, that fit would serve as the power under the null model that we compare against.
Note that this will require additional inputs, namely the number of windows the user averaged over to compute the PSD (if using Welch's method).
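For concreteness, here is a minimal sketch of what one such stand-alone stats function could look like, assuming the chi-squared null from the linked notes (power estimates averaged over `n_windows` Welch segments have roughly 2 * n_windows degrees of freedom) and assuming peak heights are reported in log10 power units above the aperiodic fit; the function names are hypothetical and nothing here is part of the fooof API:

```python
# Hypothetical stand-alone stats helpers -- NOT part of the fooof API.
# Assumes the PSD was estimated by averaging `n_windows` roughly independent
# Welch segments, so each spectral estimate follows a scaled chi-squared
# distribution with ~2 * n_windows degrees of freedom under the null that the
# true power at that frequency is just the aperiodic component.
from scipy.stats import chi2


def peak_p_value(peak_height_log10, n_windows):
    """P-value of a detected peak against the colored-noise (aperiodic) null.

    peak_height_log10 : peak power above the aperiodic fit, in log10(power)
        units (assumed to match how fooof reports peak heights).
    n_windows : number of segments averaged to compute the PSD.
    """
    dof = 2 * n_windows
    # Ratio of observed power to the null (aperiodic) power, in linear units.
    power_ratio = 10 ** peak_height_log10
    # Under the null, power_ratio is distributed as chi2(dof) / dof.
    return chi2.sf(dof * power_ratio, dof)


def peak_is_significant(peak_height_log10, n_windows, alpha=0.05):
    """True if the peak is unlikely under the colored-noise null at level alpha."""
    return peak_p_value(peak_height_log10, n_windows) < alpha
```

With something like this, a call such as `peak_is_significant(peak_height, n_windows=20)` could gate which detected peaks get reported, without touching the fitting itself.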
I agree with this 90% of the way. The one integrated use case I can think of is to let the significance drive peak detection, i.e. iteratively toss out insignificant peaks and refit the slope, though I'm not sure how you would then constrain the fitting so that it doesn't find the same peak again.
If you want to toss or down-weight small peaks, isn't it better to use an explicit regularizer instead? Say: min L(x, y) + lambda * |n|, where |n| is the number of peaks?
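To make that suggestion concrete, here is a rough sketch of that kind of penalized selection: refit with an increasing cap on the number of peaks and keep the model that minimizes fit error plus lambda times the peak count. The helper name and the default lambda are made up for illustration, and using `fm.error_` as the loss term is an assumption rather than an established fooof workflow:

```python
# Sketch of an explicit penalty on the number of peaks: refit with an
# increasing cap on peaks and keep the model minimizing error + lam * n_peaks.
# `fit_with_peak_penalty`, the default `lam`, and the use of `fm.error_` as
# the loss are illustrative assumptions, not an established fooof workflow.
import numpy as np
from fooof import FOOOF


def fit_with_peak_penalty(freqs, powers, lam=0.01, max_peaks=8, freq_range=None):
    """Return the FOOOF model minimizing fit error + lam * (number of peaks fit)."""
    best_model, best_score = None, np.inf
    for n_allowed in range(max_peaks + 1):
        # n_allowed = 0 fits the aperiodic component only.
        fm = FOOOF(max_n_peaks=n_allowed, verbose=False)
        fm.fit(freqs, powers, freq_range)
        score = fm.error_ + lam * len(fm.peak_params_)
        if score < best_score:
            best_model, best_score = fm, score
    return best_model
```

The penalty plays a similar role to the significance test above: a small peak only survives if it reduces the fit error by more than lambda.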
Following #154, this has been moved to: