Removing the deprecated log_xxx_metadata calls #28
Conversation
Walkthrough
The pull request introduces changes to the workflow configuration and updates the logging method in two functions.
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (3)
- .github/workflows/ci.yml (1 hunks)
- template/steps/data_preprocessor.py (2 hunks)
- template/steps/model_evaluator.py (2 hunks)
🔇 Additional comments (4)
template/steps/data_preprocessor.py (2)
10-10: LGTM: Import statement updated correctly
The import statement has been properly updated to use the new log_metadata function from zenml.
70-74: Verify metadata retrieval in inference pipeline
The logging call has been correctly updated to use the new log_metadata API. However, since this metadata is explicitly used for inference (as noted in the comment "Log metadata so we can load it in the inference pipeline"), we should verify that the inference pipeline can still retrieve this metadata correctly with the new API.
Let's verify the metadata usage in inference-related code:
✅ Verification successful
Metadata retrieval is working correctly in the inference pipeline
The verification shows that the metadata is being correctly retrieved in the inference pipeline. In template/run.py, the metadata is accessed using the new API format:

```python
random_state = preprocess_pipeline_artifact.run_metadata["random_state"]
target = preprocess_pipeline_artifact.run_metadata["target"]
```

This matches how the metadata is logged in the preprocessor step using log_metadata with artifact_name="preprocess_pipeline". The infer_artifact=True parameter ensures the metadata is properly associated with the artifact, making it accessible via the run_metadata property.
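To make the log-then-retrieve contract above concrete, here is a self-contained sketch of the pattern. This is NOT zenml itself: the in-memory store, this `log_metadata` stub, and the `Artifact` class are hypothetical stand-ins, and the metadata values are invented; only the call shapes mirror the review.

```python
# Hypothetical in-memory stand-in for the metadata store.
_METADATA_STORE: dict = {}

def log_metadata(metadata, artifact_name, infer_artifact=False):
    # Producer side: attach metadata to the named artifact.
    _METADATA_STORE.setdefault(artifact_name, {}).update(metadata)

class Artifact:
    def __init__(self, name):
        self.name = name

    @property
    def run_metadata(self):
        # Consumer side: mirrors `artifact.run_metadata[...]` in template/run.py.
        return _METADATA_STORE.get(self.name, {})

# What the preprocessor step does (values invented for illustration):
log_metadata(
    metadata={"random_state": 42, "target": "species"},
    artifact_name="preprocess_pipeline",
    infer_artifact=True,
)

# What the inference pipeline does:
preprocess_pipeline_artifact = Artifact("preprocess_pipeline")
random_state = preprocess_pipeline_artifact.run_metadata["random_state"]
target = preprocess_pipeline_artifact.run_metadata["target"]
```

The point of the sketch is that logging keyed by artifact_name is exactly what makes the later run_metadata lookup succeed.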
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Search for metadata loading in inference-related files
rg -l "inference" | xargs rg -A 5 "(load_metadata|get_metadata)"
```

Length of output: 64
Script:

```bash
#!/bin/bash
# Let's try a broader search for metadata-related functions in Python files
rg -t py "(load_metadata|get_metadata|log_metadata)" -A 5
# Also search for files containing both "inference" and "metadata" to catch potential connections
rg -l "inference" | xargs rg "metadata"
```

Length of output: 6430
template/steps/model_evaluator.py (2)
7-7: LGTM: Import statement correctly updated
The import statement has been properly updated to reflect the new logging function.
82-85: Verify artifact inference behavior
The implementation looks correct, but let's verify the implications of infer_artifact=True:
- The metadata structure is preserved correctly
- The sklearn_classifier artifact name is still valid
Let's verify the consistency of this change across the codebase:
✅ Verification successful
Implementation is correct and consistent
The log_metadata function is imported from zenml and used consistently across the codebase:
- Both usages in model_evaluator.py and data_preprocessor.py follow the same pattern
- The function is properly imported from zenml in both files
- No legacy log_artifact_metadata calls remain in the codebase
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Check for consistent usage of log_metadata with infer_artifact
# across the codebase

# Check for any remaining uses of log_artifact_metadata
echo "Checking for any remaining log_artifact_metadata usage:"
rg "log_artifact_metadata"

# Check for consistent usage of log_metadata with infer_artifact
echo -e "\nChecking log_metadata usage patterns:"
rg -A 2 "log_metadata\("
```

Length of output: 756
Script:

```bash
#!/bin/bash
# Let's check the implementation of log_metadata function and its parameters
ast-grep --pattern 'def log_metadata($$$)'
# Also check for any imports or definitions related to this function
rg "log_metadata" -B 2
```

Length of output: 880
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (1)
template/steps/model_evaluator.py (1)
Line range hint 76-81: Fix warning messages logic
The warning messages are only displayed when test accuracy is below threshold due to incorrect else block placement. This means warnings about low training accuracy might be missed. Apply this diff to fix the warning logic:

```diff
     if tst_acc < min_test_accuracy:
         messages.append(
             f"Test accuracy {tst_acc*100:.2f}% is below {min_test_accuracy*100:.2f}% !"
         )
-    else:
-        for message in messages:
-            logger.warning(message)
+
+    for message in messages:
+        logger.warning(message)
```

🧰 Tools
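To see the effect of the fix, here is a minimal runnable sketch of the corrected control flow. The function name, threshold defaults, and accuracy values are hypothetical; only the message-building and warning structure follow the diff above.

```python
import logging

logger = logging.getLogger("model_evaluator_sketch")

def collect_warnings(trn_acc, tst_acc, min_train_accuracy=0.8, min_test_accuracy=0.8):
    messages = []
    if trn_acc < min_train_accuracy:
        messages.append(
            f"Train accuracy {trn_acc*100:.2f}% is below {min_train_accuracy*100:.2f}% !"
        )
    if tst_acc < min_test_accuracy:
        messages.append(
            f"Test accuracy {tst_acc*100:.2f}% is below {min_test_accuracy*100:.2f}% !"
        )
    # The fix: the warning loop is no longer inside an `else:` branch, so a
    # low training accuracy is reported even when the test check passes.
    for message in messages:
        logger.warning(message)
    return messages
```

With the buggy `else:` placement, a call like `collect_warnings(0.5, 0.9)` would log nothing because the test check passes; with the corrected flow, the low-training-accuracy warning is still emitted.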
🪛 Ruff (0.8.0)
84-84: Undefined name log_metadata (F821)
84-84: Undefined name metadata (F821)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
- template/steps/model_evaluator.py (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.0)
template/steps/model_evaluator.py
7-7: zenml.log_ imported but unused
Remove unused import: zenml.log_ (F401)
84-84: Undefined name log_metadata (F821)
84-84: Undefined name metadata (F821)
Summary by CodeRabbit

New Features
- Run tests step to enhance workflow efficiency.

Bug Fixes

Documentation