RatioScore score type cannot handle legitimate cases where observation and prediction are of different sign (+, -) #172
Comments
# Imports needed to make this snippet self-contained (module paths assumed from the sciunit package layout):
import numpy as np
import quantities as pq

from sciunit import errors, utils
from sciunit.scores import Score


class RatioScore(Score):
    """A ratio of two numbers.

    Usually the prediction divided by the observation.
    """

    _allowed_types = (float,)

    _description = 'The ratio between the prediction and the observation'

    _best = 1.0  # A RatioScore of 1.0 is best.

    _worst = np.inf

    def _check_score(self, score):
        if score < 0.0:
            raise errors.InvalidScoreError(("RatioScore was initialized with "
                                            "a score of %f, but a RatioScore "
                                            "must be non-negative.") % score)

    @classmethod
    def compute(cls, observation: dict, prediction: dict, key=None) -> 'RatioScore':
        """Compute a ratio from an observation and a prediction.

        Returns:
            RatioScore: the ratio of the prediction to the observation.
        """
        assert isinstance(observation, (dict, float, int, pq.Quantity))
        assert isinstance(prediction, (dict, float, int, pq.Quantity))

        obs, pred = cls.extract_means_or_values(observation, prediction,
                                                key=key)

        # Introduced sign-handling code (discussed below): if the signs differ,
        # fold the absolute value of the negative quantity into the positive one.
        if obs < 0 and pred > 0:
            obs = np.abs(obs)
            obs = obs + pred
        if obs > 0 and pred < 0:
            pred = np.abs(pred)
            pred = pred + obs

        value = pred / obs
        value = utils.assert_dimensionless(value)
        return RatioScore(value)
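To make the behavior concrete, here is a minimal, self-contained sketch of the arithmetic the introduced branches perform, using plain NumPy and no sciunit dependency (the helper name ratio_with_sign_fix is mine, for illustration only):

import numpy as np

def ratio_with_sign_fix(obs, pred):
    """Reproduce the arithmetic of the introduced sign-handling code."""
    if obs < 0 and pred > 0:
        obs = np.abs(obs) + pred
    if obs > 0 and pred < 0:
        pred = np.abs(pred) + obs
    return pred / obs

# Same sign: the plain ratio.
print(ratio_with_sign_fix(10.0, 8.0))   # 0.8
# Opposite signs: obs becomes |-10| + 8 = 18, so the score is 8 / 18 = 0.44...
print(ratio_with_sign_fix(-10.0, 8.0))
# Opposite signs the other way: pred becomes |-8| + 10 = 18, so the score is 18 / 10 = 1.8.
print(ratio_with_sign_fix(10.0, -8.0))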
The only introduced code is as follows:

if obs < 0 and pred > 0:
    obs = np.abs(obs)
    obs = obs + pred
if obs > 0 and pred < 0:
    pred = np.abs(pred)
    pred = pred + obs

This code as-is leads to poor optimization results. Proposed change:

if obs < 0 and pred > 0:
    obs = np.abs(obs)
    obs = obs + pred
    value = pred / obs
if obs > 0 and pred < 0:
    pred = np.abs(pred)
    pred = pred + obs
    value = obs / pred
value = utils.assert_dimensionless(value)
return RatioScore(value)
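For a side-by-side comparison, here is a self-contained sketch of both variants as plain functions (the names ratio_current and ratio_proposed are mine, and a default return for the same-sign case is added to the proposed variant, which the snippet above leaves implicit):

import numpy as np

def ratio_current(obs, pred):
    """The introduced logic: always returns pred / obs after the sign fix."""
    if obs < 0 and pred > 0:
        obs = np.abs(obs) + pred
    if obs > 0 and pred < 0:
        pred = np.abs(pred) + obs
    return pred / obs

def ratio_proposed(obs, pred):
    """The proposed change: flip the ratio in the second branch so both
    sign-mismatch cases yield a value below 1."""
    if obs < 0 and pred > 0:
        obs = np.abs(obs) + pred
        return pred / obs
    if obs > 0 and pred < 0:
        pred = np.abs(pred) + obs
        return obs / pred
    return pred / obs  # same-sign case (assumed default)

for obs, pred in [(10.0, 8.0), (-10.0, 8.0), (10.0, -8.0)]:
    print(obs, pred, ratio_current(obs, pred), ratio_proposed(obs, pred))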
Whatever change we make, ideally it would produce scores which (a) do not have discontinuities and (b) monotonically decrease as the predicted and observed values move apart from each other, for any observed value. I propose that we add a relative difference score.
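Those two properties can be checked numerically. The sketch below sweeps predictions around a fixed (negative) observation and tests a candidate relative-difference measure, |pred - obs| / |obs|, for jumps and for monotonicity in the distance |pred - obs|; the formula is only an illustrative candidate, not necessarily what was later committed:

import numpy as np

def relative_difference(obs, pred):
    """Illustrative candidate: |pred - obs| / |obs|.
    Zero when pred == obs, grows as the two move apart, and is defined
    for any combination of signs."""
    return np.abs(pred - obs) / np.abs(obs)

obs = -10.0  # a negative observation, the problematic case for RatioScore
preds = np.linspace(-50.0, 50.0, 2001)
scores = np.array([relative_difference(obs, p) for p in preds])

# (a) No discontinuities: neighboring predictions should give nearly equal scores.
print("largest step between adjacent predictions:", np.max(np.abs(np.diff(scores))))

# (b) Monotone in distance: sort by |pred - obs| and check the scores never decrease.
order = np.argsort(np.abs(preds - obs))
print("monotone in |pred - obs|:", bool(np.all(np.diff(scores[order]) >= -1e-12)))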
@russelljjarvis I just committed 4584a7d to the dev branch, which adds a relative difference score.
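To show how such a score can plug into the same pattern as RatioScore above, here is a rough sketch of a relative-difference score class; it is not the code from commit 4584a7d, and the class name and formula are assumptions made here for illustration:

import numpy as np
import quantities as pq

from sciunit import utils
from sciunit.scores import Score


class RelativeDifferenceScoreSketch(Score):
    """Sketch: |prediction - observation| / |observation|.

    0.0 is a perfect match; the score grows as the two values move apart,
    regardless of their signs, so the sign-mismatch case needs no special handling.
    """

    _allowed_types = (float,)
    _best = 0.0
    _worst = np.inf

    @classmethod
    def compute(cls, observation, prediction, key=None):
        assert isinstance(observation, (dict, float, int, pq.Quantity))
        assert isinstance(prediction, (dict, float, int, pq.Quantity))
        obs, pred = cls.extract_means_or_values(observation, prediction, key=key)
        value = np.abs(pred - obs) / np.abs(obs)
        value = utils.assert_dimensionless(value)
        return cls(value)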
The relative difference score does not cause any syntax problems, but it leads to less effective optimization than ZScore, for reasons that I don't understand. I can demonstrate the substandard optimization in a NeuronUnit (NU) unit test in the new optimization branch of NU that has a pending pull request, if you want.
@russelljjarvis
@rgerkin They are running over CI. I can try to get them to run on scidash Travis by doing a PR, since the optimization pull request to scidash passed unit testing on scidash Travis. My overall impression is that the Relative Difference Score can be lowered and works with optimization, but it is also less effective.
The notebook now shows that both effectively work, but the ZScore optimization is faster and can get better matches with fewer NGEN/MU.
Background:
The RatioScore score type cannot handle legitimate cases where the observation and prediction are of different sign (+, -).
In optimization, when fitting with sweep traces, RatioScore is a more appropriate score choice than ZScore, since the number of observations is n = 1.
@rgerkin: if the observation and prediction are of different sign, add the absolute value of the negative one to the positive one. Pseudo-code attempted solution: the introduced sign-handling code shown above.
Also, in optimization lower scores are better; in this context the sciunit score gets worse with greater distance from 1.0. I need to make sure that is reflected somehow in a derived sciunit score that I can use with optimization.
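One way to get a lower-is-better quantity out of a ratio (a sketch of one possible transform, not anything committed to sciunit) is the absolute log of the ratio, which is 0 when the ratio equals 1 and grows symmetrically as the ratio moves away from 1 in either direction:

import numpy as np

def ratio_badness(ratio):
    """Lower-is-better fitness derived from a ratio score.
    |log(ratio)| is 0 at ratio == 1 and increases as the ratio moves away
    from 1 in either direction, so 1/2 and 2 are penalized equally.
    Assumes a positive ratio (a zero ratio maps to infinity)."""
    return np.abs(np.log(ratio))

for r in (0.5, 0.9, 1.0, 1.1, 2.0):
    print(r, ratio_badness(r))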