WFA migration not run for all records #1524
Comments
Root cause for missing updates: As per the acceptance criteria mentioned here, we had only updated z-scores whose values fall outside the range [-3.0, 3.0] (i.e. < -3.0 or > 3.0). This information was lost due to the large number of cards and tickets present for this issue.
Fix to be done: As part of this card, we'll only address the second issue, i.e. update z-scores whose values are in range (>= -3.0 and <= 3.0) to have 2-decimal-place values.
Support in code for rounding to 2 decimal places: The code changes to handle rounding to 2 decimal places have already been done as part of this commit.
We only have to execute the commands in the file https://github.com/avniproject/data-fixes/blob/main/z-score-update/output_round_2.sql. Ignore the commands in output.sql, as they have already been run as part of release 6.1.
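The linked file isn't reproduced here, but as a rough sketch (not the actual contents of output_round_2.sql, and assuming the WFA z-score observation is keyed by the same concept UUID that appears in the verification query further down this thread), the rounding update is of this shape:

-- Sketch only: round the in-range WFA z-score observation to 2 decimal places.
UPDATE public.program_encounter
SET observations = jsonb_set(
      observations,
      '{c5f418f8-af4a-4a10-a70d-54f088c6a8c6}',
      to_jsonb(round((observations ->> 'c5f418f8-af4a-4a10-a70d-54f088c6a8c6')::numeric, 2)))
WHERE observations ? 'c5f418f8-af4a-4a10-a70d-54f088c6a8c6'
  AND (observations ->> 'c5f418f8-af4a-4a10-a70d-54f088c6a8c6')::numeric BETWEEN -3.0 AND 3.0;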
Note that the values for some encounters could still end up with single-decimal precision. This happens when a value such as '2.10' gets numerically transformed to 2.1 in code.
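For illustration, this trailing-zero behaviour can be seen directly in PostgreSQL: numeric preserves the scale of a literal, while float8 (like a JavaScript Number on the client) does not:

-- numeric keeps the trailing zero; float8 drops it
SELECT 2.10::numeric AS as_numeric,  -- prints 2.10
       2.10::float8  AS as_float;    -- prints 2.1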
@himeshr I think 2.10 has higher precision than 2.1. If it's possible to set it so, then we can.
@himeshr this can be run on prod, so I will move it to In Progress, just to keep track. Things to make sure:
Pending deploy to prod.
Applied the data-fix to prod:
openchs=> select program_encounter.name,
                 observations->>'c5f418f8-af4a-4a10-a70d-54f088c6a8c6' as WFAzscore,
                 manual_update_history, last_modified_date_time
          from public.program_encounter
          where individual_id=332143 and encounter_date_time is not null
          order by id desc;
name | wfazscore | manual_update_history | last_modified_date_time
-------------------------+-----------+--------------------------------------+----------------------------
Growth Monitoring Visit | -2.22 | 04/03/2024 11:03:23 - data-fixes#1.1 | 2024-03-04 11:27:30.771+00
Growth Monitoring Visit | -2.21 | 04/03/2024 11:03:23 - data-fixes#1.1 | 2024-03-04 11:27:23.949+00
Growth Monitoring Visit | -1.89 | 04/03/2024 11:03:23 - data-fixes#1.1 | 2024-03-04 11:27:27.347+00
Growth Monitoring Visit | -2.12 | 04/03/2024 11:03:23 - data-fixes#1.1 | 2024-03-04 11:27:29.102+00
Growth Monitoring Visit | -2.08 | 04/03/2024 11:03:23 - data-fixes#1.1 | 2024-03-04 11:27:32.48+00
Growth Monitoring Visit | -2.22 | 04/03/2024 11:03:23 - data-fixes#1.1 | 2024-03-04 11:27:27.076+00
Growth Monitoring Visit | -2.29 | 04/03/2024 11:03:23 - data-fixes#1.1 | 2024-03-04 11:27:24.594+00
Growth Monitoring Visit | -2.29 | 04/03/2024 11:03:23 - data-fixes#1.1 | 2024-03-04 11:27:33.312+00
Growth Monitoring Visit | -2.24 | 04/03/2024 11:03:23 - data-fixes#1.1 | 2024-03-04 11:27:32.431+00
Growth Monitoring Visit | -4.47 | 22/01/2024 08:01:50 - data-fixes#1 | 2024-01-22 08:28:51.215+00
Growth Monitoring Visit | -4.47 | 22/01/2024 08:01:50 - data-fixes#1 | 2024-01-22 08:28:51.215+00
(11 rows)
I see the following ETL logs for the JSS org, which seem to indicate the ETL completed successfully on Prod:
@himeshr yes, yesterday it had failed; now it has passed.
@himeshr for many ids I found that WFA didn't have two-decimal precision. To find out whether this is because of a trailing zero, I checked one randomly and found that was not the case. This is for the program encounter with id 913973. When calculated via a calculator the value comes to -2.69 (kindly find the attached image below for the calculation), but in the DB it is present as -2.7, with single-decimal precision.
The update file does have the SQL query to make this correction. Will re-trigger the psql update in a background process, with command logs captured using nohup, to validate that the entire SQL file was executed correctly on the prod DB.
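One way to validate the run (a sketch, assuming the same concept UUID as in the query above): count the in-range rows whose stored value still lacks two decimal places. Zero, or close to zero, rows are expected after a complete run; values like 2.1 that originated as '2.10' will remain as unavoidable false positives, per the earlier note.

-- Post-run sanity check (sketch): in-range WFA z-scores without 2 decimal places.
SELECT count(*)
FROM public.program_encounter
WHERE encounter_date_time IS NOT NULL
  AND observations ? 'c5f418f8-af4a-4a10-a70d-54f088c6a8c6'
  AND (observations ->> 'c5f418f8-af4a-4a10-a70d-54f088c6a8c6')::numeric BETWEEN -3.0 AND 3.0
  AND observations ->> 'c5f418f8-af4a-4a10-a70d-54f088c6a8c6' !~ '\.\d\d$';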
@vinayvenu @petmongrels In this comment, I feel the following:
For the 2nd and 3rd points, whoever does it can make it foolproof by writing the test as well. Let me know once you both review.
To do this, the following is probably required.
@petmongrels I see the above 2 points will address points 2 and 3 here, and the solution for both will be the same. So I am thinking both can be part of this card. Let me know your thoughts.
We will need two cards, I think.
This card is burdened by context. If I pick it up I don't know what to work on, so I will likely ignore everything mentioned here and just go by the automated test. The test should be written based on the acceptance criteria of the remaining work. Oh, and there is one more thing: what happens if we run the test and find out that the client is not sending the right numbers to the server? So we perhaps need a third card to verify sometime in between. Depending on what we see in the data, we will have to take a call on how to go about fixing them.
@petmongrels It's implicit that whoever fixes it needs to also make sure it's run for all rows. I am not convinced that "context" is a reason to move it to a separate card. For the 2nd point here, I feel the AC still remains the same; the comments here can help whoever picks up the card be aware of the issues they might face while fixing it. For the 3rd point, I will create a separate card, which can be picked up after the release. @vinayvenu if you think a separate card needs to be created for the 2nd point here and you have the bandwidth, you can create it. I've already spent too much time testing this card and don't want to spend additional time on this.
@vinayvenu @petmongrels For the 2nd point here, it is duplicate analysis work for me if I need to create a separate card, because development was not complete and I now need to understand what the developer has completed. If this continues, analyst time will not be spent efficiently.
Will we be able to specify the requirement in the form of a JS-based test? If so, then we can first write that test in a separate card and code-review it. After that we can play this card. Actually, there should be nothing to QA on this card after that.
@petmongrels I think it is up to the developer how he wants to verify it, whether via JS, SQL or any other method. Testing after development by the developer is part of all cards, so I can't understand why it should be a separate card. Also, the QA process should not be skipped just because the test will be written by the developer; the test itself can have issues. So from a QA perspective, whatever needs to be verified can be verified.
@petmongrels The 1st point is not an issue (my mistake - I checked last_modified_date_time instead of encounter_date_time for identifying the user's version). The 3rd point can be moved to a separate card; this card is just for the 2nd point. Have added this to the AC of this card: "---- As part of this verification, make sure to check the 2nd case mentioned here."
Dry run on Local DB was successful.
Review notes
Successfully executed the updates for the input.json generated on 26 March 2024 from the JSS prod organisation.
Pending task: After 15 days, perform the steps mentioned in the Readme to make z-score corrections to "over-written" entries and entries newly inserted from old versions of the app.
No additional QA required for now.
Ran the migration one last time today for all records that were out of sync as of 15 April 2024, 1:50 PM IST.
Issue:
The migration run here for this issue doesn't seem to have covered all rows.
E.g., on running the below SQL, we found that manual_update_history and last_modified_date_time were not updated for many rows.
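(The SQL itself was not captured in this extract. It would have been a query of the same shape as the verification query shown above, e.g. the following hedged reconstruction:)

-- Hedged reconstruction, not the original query: list encounters carrying the
-- WFA observation that show no sign of the manual data-fix.
select id, manual_update_history, last_modified_date_time
from public.program_encounter
where observations ? 'c5f418f8-af4a-4a10-a70d-54f088c6a8c6'
  and encounter_date_time is not null
  and manual_update_history is null;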
Analysis:
Previously this card was not tested; we thought code review was sufficient. Maybe we should have QAed it.
Acceptance criteria:
Make sure the SQL to update WFA for existing data is correct and updates all applicable records for the org in focus.
---- As part of this verification, make sure to check the 2nd case mentioned here, along with the other cases.