feat: add new hash expire cmd to pika #2883
base: unstable
Conversation
Actionable comments posted: 2
Outside diff range, codebase verification and nitpick comments (14)
src/storage/src/redis_pkhashes.cc (7)
23-63: Consider adding logging for better debugging and monitoring.
The function is correctly implemented, with proper error handling and validation checks. Adding logging statements at key points in the function would aid debugging and monitoring.

66-142: Consider adding logging for better debugging and monitoring.
The function is correctly implemented, with proper error handling and validation checks. Adding logging statements at key points in the function would aid debugging and monitoring.

144-205: Consider adding logging for better debugging and monitoring.
The function is correctly implemented, with proper error handling and validation checks. Adding logging statements at key points in the function would aid debugging and monitoring.

207-274: Consider adding logging for better debugging and monitoring.
The function is correctly implemented, with proper error handling and validation checks. Adding logging statements at key points in the function would aid debugging and monitoring.

276-333: Consider adding logging for better debugging and monitoring.
The function is correctly implemented, with proper error handling and validation checks. Adding logging statements at key points in the function would aid debugging and monitoring.

335-394: Consider adding logging for better debugging and monitoring.
The function is correctly implemented, with proper error handling and validation checks. Adding logging statements at key points in the function would aid debugging and monitoring.

397-452: Consider adding logging for better debugging and monitoring.
The function is correctly implemented, with proper error handling and validation checks. Adding logging statements at key points in the function would aid debugging and monitoring.
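The logging suggestion above can be sketched as follows. This is a minimal illustration only: the Status enum, the function body, and the std::cerr sink are stand-ins, not Pika's actual implementation, which would use its own logging facility and real storage calls.

```cpp
#include <cstdint>
#include <iostream>
#include <string>

// Illustrative stand-in for the storage Status type.
enum class Status { kOk, kInvalidArgument };

// Sketch: log on entry, on validation failure, and on the final outcome.
Status PKHExpireSketch(const std::string& key, int64_t ttl) {
  std::cerr << "PKHExpire enter: key=" << key << " ttl=" << ttl << '\n';
  if (ttl <= 0) {
    // Validation failure is the case most worth logging for later debugging.
    std::cerr << "PKHExpire invalid ttl=" << ttl << " key=" << key << '\n';
    return Status::kInvalidArgument;
  }
  // ... look up the meta value and update per-field TTLs (elided) ...
  std::cerr << "PKHExpire ok: key=" << key << '\n';
  return Status::kOk;
}
```

Logging the entry arguments plus every early-return path gives enough breadcrumbs to reconstruct a failing call without a debugger.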
src/storage/src/storage.cc (6)
480-484: Review error handling in GetDBInstance.
The PKHExpire function includes error handling for invalid TTL values, which is appropriate. However, GetDBInstance lacks explicit error handling, which could be a concern if the slot-index retrieval or instance access fails. Consider adding error handling to GetDBInstance to ensure robustness.
- GetDBInstance: no explicit error handling observed.
- PKHExpire: handles invalid TTL values correctly.

Analysis chain
LGTM! Verify error handling.
The function is correctly implemented. Ensure that GetDBInstance and PKHExpire handle errors correctly. The code changes are approved.
Run the following script to verify error handling:

Scripts executed
The following scripts were executed for the analysis:

Script:
#!/bin/bash
# Description: Verify error handling in `GetDBInstance` and `PKHExpire`.
# Test: Search for the function implementations. Expect: Proper error handling.
rg --type cpp -A 5 $'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 5 $'Status Redis::PKHExpire'

Length of output: 2096
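A defensive variant of the kind the comment asks for might look like the sketch below. The instance container, slot hashing, and throw-based error strategy are assumptions for illustration; Pika's real GetDBInstance returns std::unique_ptr<Redis>& and may prefer a status code over exceptions.

```cpp
#include <cstddef>
#include <functional>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

// Illustrative instance type standing in for Pika's Redis class.
struct Inst { int id; };

// Defensive GetDBInstance sketch: validate the container and the computed
// slot before dereferencing, rather than returning insts[idx] unchecked.
std::unique_ptr<Inst>& GetDBInstanceChecked(
    std::vector<std::unique_ptr<Inst>>& insts, const std::string& key) {
  if (insts.empty()) {
    throw std::runtime_error("GetDBInstance: no instances configured");
  }
  std::size_t idx = std::hash<std::string>{}(key) % insts.size();
  if (!insts[idx]) {
    throw std::runtime_error("GetDBInstance: slot not initialized");
  }
  return insts[idx];
}
```

The modulo already bounds the index, so the two remaining failure modes worth guarding are an empty instance vector and a slot whose instance was never constructed.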
486-490: Potential issue: lack of error handling in GetDBInstance.
The GetDBInstance function does not have explicit error handling, which could lead to issues if an invalid instance is returned. Consider adding error handling to ensure robustness. The PKHExpireat function in Redis correctly handles invalid timestamps.
- Review the GetDBInstance function for potential error-handling improvements.

Analysis chain
LGTM! Verify error handling.
The function is correctly implemented. Ensure that GetDBInstance and PKHExpireat handle errors correctly. The code changes are approved.
Run the following script to verify error handling:

Scripts executed
The following scripts were executed for the analysis:

Script:
#!/bin/bash
# Description: Verify error handling in `GetDBInstance` and `PKHExpireat`.
# Test: Search for the function implementations. Expect: Proper error handling.
rg --type cpp -A 5 $'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 5 $'Status Redis::PKHExpireat'

Length of output: 1159

Script:
#!/bin/bash
# Description: Check for error handling related to `GetDBInstance`.
rg --type cpp 'GetDBInstance' -A 3 | grep -i 'if\|try\|catch\|error\|exception'

Length of output: 1873
504-508: Improve error handling in GetDBInstance.
The GetDBInstance function lacks explicit error handling, which could lead to issues if an invalid index is accessed. Consider adding error checks to handle such cases. The PKHTTL function in Redis has partial error handling for specific conditions, which is a positive aspect.
- File: src/storage/src/storage.cc
- Function: GetDBInstance

Analysis chain
LGTM! Verify error handling.
The function is correctly implemented. Ensure that GetDBInstance and PKHTTL handle errors correctly. The code changes are approved.
Run the following script to verify error handling:

Scripts executed
The following scripts were executed for the analysis:

Script:
#!/bin/bash
# Description: Verify error handling in `GetDBInstance` and `PKHTTL`.
# Test: Search for the function implementations. Expect: Proper error handling.
rg --type cpp -A 5 $'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 5 $'Status Redis::PKHTTL'

Length of output: 1084

Script:
#!/bin/bash
# Description: Gather more information on error handling in `GetDBInstance` and `PKHTTL`.
rg --type cpp -A 10 'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 20 'Status Redis::PKHTTL'

Length of output: 2518
510-513: Ensure robust error handling in GetDBInstance.
The PKHGet function in Redis has appropriate error handling for type and existence checks. However, GetDBInstance lacks explicit error handling, which could lead to issues if instance retrieval fails. Consider adding error handling to ensure robustness.
- GetDBInstance in src/storage/src/storage.cc lacks error handling.
- PKHGet in src/storage/src/redis_pkhashes.cc has adequate error handling.

Analysis chain
LGTM! Verify error handling.
The function is correctly implemented. Ensure that GetDBInstance and PKHGet handle errors correctly. The code changes are approved.
Run the following script to verify error handling:

Scripts executed
The following scripts were executed for the analysis:

Script:
#!/bin/bash
# Description: Verify error handling in `GetDBInstance` and `PKHGet`.
# Test: Search for the function implementations. Expect: Proper error handling.
rg --type cpp -A 5 $'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 5 $'Status Redis::PKHGet'

Length of output: 1049

Script:
#!/bin/bash
# Description: Gather more context around `GetDBInstance` and `PKHGet` for error handling.
rg --type cpp -A 20 'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 20 'Status Redis::PKHGet'

Length of output: 3083
498-502: Improve error handling in GetDBInstance.
The GetDBInstance function lacks explicit error handling, which could lead to issues if an invalid index is accessed. Consider adding checks to ensure the index is valid before accessing the instance. The PKHPersist function has some error-handling mechanisms in place, which is good practice.
- File: src/storage/src/storage.cc
- Function: GetDBInstance

Analysis chain
LGTM! Verify error handling.
The function is correctly implemented. Ensure that GetDBInstance and PKHPersist handle errors correctly. The code changes are approved.
Run the following script to verify error handling:

Scripts executed
The following scripts were executed for the analysis:

Script:
#!/bin/bash
# Description: Verify error handling in `GetDBInstance` and `PKHPersist`.
# Test: Search for the function implementations. Expect: Proper error handling.
rg --type cpp -A 5 $'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 5 $'Status Redis::PKHPersist'

Length of output: 1096

Script:
#!/bin/bash
# Description: Retrieve complete implementations to verify error handling in `GetDBInstance` and `PKHPersist`.
rg --type cpp -A 20 $'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 20 $'Status Redis::PKHPersist'

Length of output: 3101
515-518: Add error handling to GetDBInstance and verify Redis::PKHSet.
- GetDBInstance should include error handling to manage invalid instance indices.
- Ensure comprehensive error handling in Redis::PKHSet to cover all possible error conditions.
Please review and address these points to improve the robustness of the code.

Analysis chain
LGTM! Verify error handling.
The function is correctly implemented. Ensure that GetDBInstance and PKHSet handle errors correctly. The code changes are approved.
Run the following script to verify error handling:

Scripts executed
The following scripts were executed for the analysis:

Script:
#!/bin/bash
# Description: Verify error handling in `GetDBInstance` and `PKHSet`.
# Test: Search for the function implementations. Expect: Proper error handling.
rg --type cpp -A 5 $'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 5 $'Status Redis::PKHSet'

Length of output: 1054

Script:
#!/bin/bash
# Description: Gather more lines to verify error handling in `GetDBInstance` and `Redis::PKHSet`.
rg --type cpp -A 10 $'std::unique_ptr<Redis>& Storage::GetDBInstance(const std::string& key)'
rg --type cpp -A 10 $'Status Redis::PKHSet(const Slice& key, const Slice& field, const Slice& value, int32_t* res)'

Length of output: 211

Script:
#!/bin/bash
# Description: Refine search to verify error handling in `GetDBInstance` and `Redis::PKHSet`.
rg --type cpp -A 15 'std::unique_ptr<Redis>& Storage::GetDBInstance'
rg --type cpp -A 15 'Status Redis::PKHSet'

Length of output: 2388
src/pika_command.cc (1)
19-19: Add an include guard to pika_pkhash.h.
Ensure that the header file pika_pkhash.h is included only once to prevent potential redefinition errors. Consider adding an include guard in pika_pkhash.h if it doesn't already exist.
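The suggested guard could take the usual form below. The macro spelling is hypothetical (the actual header may pick a different name or use #pragma once instead); the demo constant exists only so the fragment compiles on its own.

```cpp
// pika_pkhash.h — hypothetical guard name; #pragma once is a common alternative.
#ifndef INCLUDE_PIKA_PKHASH_H_
#define INCLUDE_PIKA_PKHASH_H_

// ... declarations of PKHExpireCmd, PKHGetCmd, etc. would live here ...
inline constexpr int kPikaPkhashGuardDemo = 1;  // placeholder for the header's contents

#endif  // INCLUDE_PIKA_PKHASH_H_
```

With the guard in place, a second inclusion of the header in one translation unit expands to nothing, so no redefinition errors occur.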
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (16)
- include/acl.h (1 hunks)
- include/pika_command.h (6 hunks)
- include/pika_pkhash.h (1 hunks)
- src/pika_client_conn.cc (1 hunks)
- src/pika_command.cc (10 hunks)
- src/pika_pkhash.cc (1 hunks)
- src/storage/include/storage/storage.h (18 hunks)
- src/storage/include/storage/storage_define.h (2 hunks)
- src/storage/src/base_filter.h (6 hunks)
- src/storage/src/base_value_format.h (5 hunks)
- src/storage/src/pkhash_data_value_format.h (1 hunks)
- src/storage/src/redis.cc (12 hunks)
- src/storage/src/redis.h (17 hunks)
- src/storage/src/redis_hashes.cc (37 hunks)
- src/storage/src/redis_pkhashes.cc (1 hunks)
- src/storage/src/storage.cc (36 hunks)
Files skipped from review due to trivial changes (2)
- src/pika_client_conn.cc
- src/storage/src/redis_hashes.cc
Additional comments not posted (147)
src/storage/include/storage/storage_define.h (2)
47-47: LGTM!
The addition of the new enumerator kPKHashDataCF is correct and consistent with the existing enumerators. The code changes are approved.

124-132: LGTM!
The formatting changes improve readability without altering the logic of the function. The code changes are approved.
src/storage/src/base_value_format.h (3)
21-31: LGTM!
The addition of the new enumerators kPKHashes and kNones is correct and consistent with the existing enumerators. The code changes are approved.

34-35: LGTM!
The update to the DataTypeStrings array ensures that all enumerators have corresponding string values. The code changes are approved.

52-53: LGTM!
The formatting changes improve readability without altering the logic of the code. The code changes are approved.
Also applies to: 100-101, 153-153, 163-167
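Keeping an enum and its string table in lockstep, as the DataTypeStrings comment above requires, can be enforced at compile time. The enumerator set and string spellings below are illustrative, not a copy of base_value_format.h:

```cpp
#include <array>
#include <cstddef>
#include <string_view>

// Illustrative mirror of the DataType / DataTypeStrings pairing.
enum class DataType : std::size_t {
  kStrings, kHashes, kSets, kLists, kZSets, kPKHashes, kNones,
  kCount  // sentinel, not a real data type
};

constexpr std::array<std::string_view,
                     static_cast<std::size_t>(DataType::kCount)>
    DataTypeStrings = {"string", "hash", "set", "list", "zset",
                       "pkhash", "none"};

// Fails to compile if an enumerator is added without a matching string.
static_assert(DataTypeStrings.size() ==
              static_cast<std::size_t>(DataType::kCount));

constexpr std::string_view DataTypeToString(DataType t) {
  return DataTypeStrings[static_cast<std::size_t>(t)];
}
```

The kCount sentinel plus the static_assert turns a silent runtime mismatch (a new enumerator indexing past the array) into a build error.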
include/pika_pkhash.h (7)
14-38: LGTM!
The class PKHExpireCmd is correctly implemented. The code changes are approved.

40-64: LGTM!
The class PKHExpireatCmd is correctly implemented. The code changes are approved.

65-89: LGTM!
The class PKHExpiretimeCmd is correctly implemented. The code changes are approved.

91-115: LGTM!
The class PKHPersistCmd is correctly implemented. The code changes are approved.

117-141: LGTM!
The class PKHTTLCmd is correctly implemented. The code changes are approved.

143-164: LGTM!
The class PKHGetCmd is correctly implemented. The code changes are approved.

166-187: LGTM!
The class PKHSetCmd is correctly implemented. The code changes are approved.
src/pika_pkhash.cc (14)
16-38: LGTM!
The method PKHExpireCmd::DoInitial is correctly implemented. The code changes are approved.

40-53: LGTM!
The method PKHExpireCmd::Do is correctly implemented. The code changes are approved.

55-80: LGTM!
The method PKHExpireatCmd::DoInitial is correctly implemented. The code changes are approved.

81-94: LGTM!
The method PKHExpireatCmd::Do is correctly implemented. The code changes are approved.

96-116: LGTM!
The method PKHExpiretimeCmd::DoInitial is correctly implemented. The code changes are approved.

117-130: LGTM!
The method PKHExpiretimeCmd::Do is correctly implemented. The code changes are approved.

132-150: LGTM!
The method PKHPersistCmd::DoInitial is correctly implemented. The code changes are approved.

151-164: LGTM!
The method PKHPersistCmd::Do is correctly implemented. The code changes are approved.

166-184: LGTM!
The method PKHTTLCmd::DoInitial is correctly implemented. The code changes are approved.

185-198: LGTM!
The method PKHTTLCmd::Do is correctly implemented. The code changes are approved.

200-207: LGTM!
The method PKHGetCmd::DoInitial is correctly implemented. The code changes are approved.

209-222: LGTM!
The method PKHGetCmd::Do is correctly implemented. The code changes are approved.

244-252: LGTM!
The method PKHSetCmd::DoInitial is correctly implemented. The code changes are approved.

254-265: LGTM!
The method PKHSetCmd::Do is correctly implemented. The code changes are approved.
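The DoInitial/Do split these comments approve of can be sketched as below: DoInitial validates arity and numeric arguments up front so Do can assume well-formed input. The argument layout (PKHEXPIRE key seconds FIELDS numfields field ...) and all names here are assumptions for illustration, not the command's verified syntax.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical parsed form of a PKHEXPIRE-style invocation.
struct ParsedPKHExpire {
  bool ok = false;
  std::string key;
  int64_t ttl = 0;
  std::vector<std::string> fields;
};

// DoInitial-style validation: reject bad arity or a non-numeric TTL early.
ParsedPKHExpire DoInitialSketch(const std::vector<std::string>& argv) {
  ParsedPKHExpire p;
  if (argv.size() < 6) return p;  // too few arguments
  p.key = argv[1];
  try {
    p.ttl = std::stoll(argv[2]);
  } catch (...) {
    return p;  // seconds was not an integer
  }
  p.fields.assign(argv.begin() + 5, argv.end());
  p.ok = true;
  return p;
}
```

Separating parsing from execution keeps every error reply in one place and lets Do focus purely on the storage call.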
src/storage/src/base_filter.h (5)
Line range hint 21-74: LGTM!
The class BaseMetaFilter is correctly implemented. The code changes are approved.

Line range hint 114-228: LGTM!
The class BaseDataFilter is correctly implemented. The constructor reformatting improves readability. The code changes are approved.

250-250: LGTM!
The type alias PKHashesMetaFilter is correctly implemented. The code changes are approved.

251-251: LGTM!
The type alias PKHashesMetaFilterFactory is correctly implemented. The code changes are approved.

252-252: LGTM!
The type alias PKHashesDataFilter is correctly implemented. The code changes are approved.
include/acl.h (1)
55-55: LGTM!
The addition of the PKHASH enumerator to the AclCategory enum is consistent with the PR objectives and summary. The code changes are approved.
include/pika_command.h (14)
141-141: LGTM!
The addition of the kCmdNamePKHSet constant is consistent with the PR objectives and summary. The code changes are approved.

142-142: LGTM!
The addition of the kCmdNamePKHSetex constant is consistent with the PR objectives and summary. The code changes are approved.

143-143: LGTM!
The addition of the kCmdNamePKHExpire constant is consistent with the PR objectives and summary. The code changes are approved.

144-144: LGTM!
The addition of the kCmdNamePKHExpireat constant is consistent with the PR objectives and summary. The code changes are approved.

145-145: LGTM!
The addition of the kCmdNamePKHExpiretime constant is consistent with the PR objectives and summary. The code changes are approved.

146-146: LGTM!
The addition of the kCmdNamePKHTTL constant is consistent with the PR objectives and summary. The code changes are approved.

147-147: LGTM!
The addition of the kCmdNamePKHPersist constant is consistent with the PR objectives and summary. The code changes are approved.

148-148: LGTM!
The addition of the kCmdNamePKHGet constant is consistent with the PR objectives and summary. The code changes are approved.

149-149: LGTM!
The addition of the kCmdNamePKHExists constant is consistent with the PR objectives and summary. The code changes are approved.

150-150: LGTM!
The addition of the kCmdNamePKHDel constant is consistent with the PR objectives and summary. The code changes are approved.

151-151: LGTM!
The addition of the kCmdNamePKHLen constant is consistent with the PR objectives and summary. The code changes are approved.

152-152: LGTM!
The addition of the kCmdNamePKHStrlen constant is consistent with the PR objectives and summary. The code changes are approved.

153-153: LGTM!
The addition of the kCmdNamePKHIncrby constant is consistent with the PR objectives and summary. The code changes are approved.

154-154: LGTM!
The addition of the kCmdNamePKHIncrbyfloat constant is consistent with the PR objectives and summary. The code changes are approved.
src/storage/src/redis.cc (3)
Line range hint 30-42: LGTM!
The constructor correctly initializes the new column family options for pika_hash_data_cf. The code changes are approved.

102-111: LGTM!
The function correctly sets up the new column family options for pika_hash_data_cf. The code changes are approved.

218-218: LGTM!
The function correctly includes kPKHashDataCF in the list of column families to compact. The code changes are approved.
src/storage/src/redis.h (8)
246-248: LGTM!
The function is correctly implemented to retrieve column family handles for PK Hashes. The code changes are approved.

253-254: LGTM!
The function is correctly implemented to set expiration time for PK Hash fields. The code changes are approved.

255-256: LGTM!
The function is correctly implemented to set expiration time for PK Hash fields based on a timestamp. The code changes are approved.

257-258: LGTM!
The function is correctly implemented to retrieve expiration times for PK Hash fields. The code changes are approved.

259-260: LGTM!
The function is correctly implemented to retrieve TTL for PK Hash fields. The code changes are approved.

261-262: LGTM!
The function is correctly implemented to make PK Hash fields persistent by removing their expiration. The code changes are approved.

263-263: LGTM!
The function is correctly implemented to retrieve the value of a PK Hash field. The code changes are approved.

264-264: LGTM!
The function is correctly implemented to set the value of a PK Hash field. The code changes are approved.
src/storage/include/storage/storage.h (16)
26-26: LGTM!
The reordering of include statements is acceptable. The code changes are approved.

28-28: LGTM!
The inclusion of the new header file is necessary for the new functionality. The code changes are approved.

Line range hint 98-104: LGTM!
The addition of the operator+ method for the KeyInfo struct is correct and useful for combining key information. The code changes are approved.

111-113: LGTM!
The addition of the ttl field to the ValueStatus struct is necessary for managing time-to-live information. The code changes are approved.

157-159: LGTM!
The addition of the Operation enum is useful for defining the various operations in the background task. The code changes are approved.

171-173: LGTM!
The addition of the constructor and destructor for the Storage class is necessary for proper initialization and cleanup. The code changes are approved.

256-257: LGTM!
The method signature update ensures consistent formatting and clarity. The code changes are approved.

262-263: LGTM!
The method signature update ensures consistent formatting and clarity. The code changes are approved.

272-273: LGTM!
The method signature update ensures consistent formatting and clarity. The code changes are approved.

482-483: LGTM!
The method signature update ensures consistent formatting and clarity. The code changes are approved.

506-507: LGTM!
The method signature update ensures consistent formatting and clarity. The code changes are approved.

516-517: LGTM!
The method signature update ensures consistent formatting and clarity. The code changes are approved.

590-591: LGTM!
The method signature update ensures consistent formatting and clarity. The code changes are approved.

753-754: LGTM!
The method signature update ensures consistent formatting and clarity. The code changes are approved.

1000-1001: LGTM!
The method signature update ensures consistent formatting and clarity. The code changes are approved.

1035-1036: LGTM!
The method signature update ensures consistent formatting and clarity. The code changes are approved.
src/storage/src/storage.cc (1)
492-496: LGTM! Verify error handling.
The function is correctly implemented. Ensure that GetDBInstance and PKHExpiretime handle errors correctly. The code changes are approved.
Run the following script to verify error handling:
src/pika_command.cc (73)
486-487: Correct initialization of PKHSetCmd.
The command PKHSetCmd is correctly initialized and inserted into the command table. The code changes are approved.

490-492: Correct initialization of PKHExpireCmd.
The command PKHExpireCmd is correctly initialized and inserted into the command table. The code changes are approved.

494-496: Correct initialization of PKHExpireatCmd.
The command PKHExpireatCmd is correctly initialized and inserted into the command table. The code changes are approved.

498-500: Correct initialization of PKHExpiretimeCmd.
The command PKHExpiretimeCmd is correctly initialized and inserted into the command table. The code changes are approved.

502-503: Correct initialization of PKHTTLCmd.
The command PKHTTLCmd is correctly initialized and inserted into the command table. The code changes are approved.

506-508: Correct initialization of PKHPersistCmd.
The command PKHPersistCmd is correctly initialized and inserted into the command table. The code changes are approved.

510-511: Correct initialization of PKHGetCmd.
The command PKHGetCmd is correctly initialized and inserted into the command table. The code changes are approved.
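The registration pattern these hunks approve of amounts to constructing each command object and inserting it under its name. The sketch below uses minimal stand-ins for Pika's Cmd hierarchy and command table; the name string, arity, and struct shapes are illustrative only.

```cpp
#include <map>
#include <memory>
#include <string>

// Minimal stand-ins for Pika's Cmd base class and a concrete PKH command.
struct Cmd {
  std::string name;
  int arity = 0;
  virtual ~Cmd() = default;
};
struct PKHGetCmdDemo : Cmd {};

using CmdTable = std::map<std::string, std::unique_ptr<Cmd>>;

// Construct the command and insert it into the table under its name key,
// mirroring the InsertCurrentTable-style calls reviewed above.
void RegisterPKHGet(CmdTable& table) {
  auto cmd = std::make_unique<PKHGetCmdDemo>();
  cmd->name = "pkhget";
  cmd->arity = 3;  // e.g. PKHGET key field (assumed syntax)
  table.emplace(cmd->name, std::move(cmd));
}
```

Dispatch then reduces to a single table lookup on the lowercased command name, which is why each new PKH* command needs exactly one such insertion.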
56-57: Consistent formatting for CompactCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

60-61: Consistent formatting for CompactRangeCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

80-81: Consistent formatting for FlushallCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

85-86: Consistent formatting for FlushdbCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

159-160: Consistent formatting for ClearCacheCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

162-163: Consistent formatting for LastsaveCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

247-248: Consistent formatting for SetCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

251-253: Consistent formatting for GetCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

256-258: Consistent formatting for DelCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

264-265: Consistent formatting for IncrCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

269-269: Consistent formatting for IncrbyCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

273-274: Consistent formatting for IncrbyfloatCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

277-278: Consistent formatting for DecrCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

282-282: Consistent formatting for DecrbyCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

286-286: Consistent formatting for GetsetCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

290-290: Consistent formatting for AppendCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

293-295: Consistent formatting for MgetCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

303-303: Consistent formatting for SetnxCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

306-307: Consistent formatting for SetexCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

310-311: Consistent formatting for PsetexCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

315-315: Consistent formatting for DelvxCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

318-319: Consistent formatting for MsetCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

322-323: Consistent formatting for MsetnxCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

327-328: Consistent formatting for GetrangeCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

332-332: Consistent formatting for SetrangeCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

335-337: Consistent formatting for StrlenCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

340-342: Consistent formatting for ExistsCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

346-347: Consistent formatting for ExpireCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

351-352: Consistent formatting for PexpireCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

355-357: Consistent formatting for ExpireatCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

360-362: Consistent formatting for PexpireatCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

365-366: Consistent formatting for TtlCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

369-370: Consistent formatting for PttlCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

373-375: Consistent formatting for PersistCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

378-379: Consistent formatting for TypeCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

391-391: Consistent formatting for PKSetexAtCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

394-395: Consistent formatting for PKScanRangeCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

398-399: Consistent formatting for PKRScanRangeCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

404-405: Consistent formatting for HDelCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

408-409: Consistent formatting for HSetCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

412-414: Consistent formatting for HGetCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

417-419: Consistent formatting for HGetallCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

422-424: Consistent formatting for HExistsCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

427-428: Consistent formatting for HIncrbyCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

431-433: Consistent formatting for HIncrbyfloatCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

436-438: Consistent formatting for HKeysCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

441-443: Consistent formatting for HLenCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

446-448: Consistent formatting for HMgetCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

451-452: Consistent formatting for HMsetCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

455-456: Consistent formatting for HSetnxCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

459-461: Consistent formatting for HStrlenCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

464-466: Consistent formatting for HValsCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

469-470: Consistent formatting for HScanCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

473-474: Consistent formatting for HScanxCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

477-478: Consistent formatting for PKHScanRangeCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

481-482: Consistent formatting for PKHRScanRangeCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

514-516: Consistent formatting for LIndexCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

518-519: Consistent formatting for LInsertCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

522-524: Consistent formatting for LLenCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

526-527: Consistent formatting for BLPopCmd.
The formatting change improves readability by aligning parameters across multiple lines. The code changes are approved.

530-531: Consistent formatting for LPopCmd.
The formatting change improves readability by aligning parameters across multiple lines.
The code changes are approved.
535-535
: Consistent formatting forLPushCmd
.The formatting change improves readability by aligning parameters across multiple lines.
The code changes are approved.
538-539
: Consistent formatting forLPushxCmd
.The formatting change improves readability by aligning parameters across multiple lines.
The code changes are approved.
543-544
: Consistent formatting forLRangeCmd
.The formatting change improves readability by aligning parameters across multiple lines.
The code changes are approved.
546-547
: Consistent formatting forLRemCmd
.The formatting change improves readability by aligning parameters across multiple lines.
The code changes are approved.
549-550
: **Consistent formatting for
class PKHashDataValue : public InternalValue {
 public:
  /*
   * The header of the Value field is initially initialized to knulltype
   */
  explicit PKHashDataValue(const rocksdb::Slice& user_value) : InternalValue(DataType::kNones, user_value) {}
  virtual ~PKHashDataValue() {}

  virtual rocksdb::Slice Encode() {
    size_t usize = user_value_.size();
    size_t needed = usize + kSuffixReserveLength + kTimestampLength * 2;
    char* dst = ReAllocIfNeeded(needed);
    char* start_pos = dst;

    memcpy(dst, user_value_.data(), user_value_.size());
    dst += user_value_.size();
    memcpy(dst, reserve_, kSuffixReserveLength);
    dst += kSuffixReserveLength;
    EncodeFixed64(dst, ctime_);
    dst += kTimestampLength;
    EncodeFixed64(dst, etime_);
    dst += kTimestampLength;  // TODO(DDD): to be confirmed whether this final advance is needed.

    return rocksdb::Slice(start_pos, needed);
  }

 private:
  const size_t kDefaultValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
};
LGTM! But address the TODO comment.
The `PKHashDataValue` class is correctly implemented. However, the TODO comment in the `Encode` method needs to be addressed. The code changes are approved.
Do you want me to help address the TODO comment or open a GitHub issue to track this task?
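For readers tracing the suffix math above: the encoded value is just `user_value | reserve | ctime | etime`, with both timestamps appended as fixed-width little-endian integers. A minimal self-contained sketch of that framing (the constants and helper names below are assumptions mirroring the class, not Pika's actual headers):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <string>

// Assumed layout constants, mirroring the class above.
constexpr size_t kSuffixReserveLength = 16;
constexpr size_t kTimestampLength = 8;

// Little-endian fixed-width encode/decode, as rocksdb's EncodeFixed64 does.
void EncodeFixed64(char* dst, uint64_t v) {
  for (int i = 0; i < 8; ++i) dst[i] = static_cast<char>((v >> (8 * i)) & 0xff);
}
uint64_t DecodeFixed64(const char* src) {
  uint64_t v = 0;
  for (int i = 7; i >= 0; --i) v = (v << 8) | static_cast<unsigned char>(src[i]);
  return v;
}

// Append the suffix: reserve bytes (left zeroed here), then ctime, then etime.
std::string EncodeValue(const std::string& user_value, uint64_t ctime, uint64_t etime) {
  std::string out(user_value.size() + kSuffixReserveLength + 2 * kTimestampLength, '\0');
  char* dst = out.data();
  memcpy(dst, user_value.data(), user_value.size());
  dst += user_value.size() + kSuffixReserveLength;  // skip the reserve area
  EncodeFixed64(dst, ctime);
  EncodeFixed64(dst + kTimestampLength, etime);
  return out;
}

// Recover the pieces, mirroring what ParsedPKHashDataValue does.
std::string UserValue(const std::string& v) {
  return v.substr(0, v.size() - kSuffixReserveLength - 2 * kTimestampLength);
}
uint64_t Etime(const std::string& v) {
  return DecodeFixed64(v.data() + v.size() - kTimestampLength);
}
```

The round trip makes the fixed-offset layout easy to verify in isolation before reading the real class.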
class ParsedPKHashDataValue : public ParsedInternalValue {
 public:
  // Use this constructor after rocksdb::DB::Get(), since we use this in
  // the implementation of user interfaces and may need to modify the
  // original value suffix, so value_ must point to the string
  explicit ParsedPKHashDataValue(std::string* value) : ParsedInternalValue(value) {
    if (value_->size() >= kPKHashDataValueSuffixLength) {
      user_value_ = rocksdb::Slice(value_->data(), value_->size() - kPKHashDataValueSuffixLength);
      memcpy(reserve_, value_->data() + user_value_.size(), kSuffixReserveLength);
      ctime_ = DecodeFixed64(value_->data() + user_value_.size() + kSuffixReserveLength);
      etime_ = DecodeFixed64(value_->data() + user_value_.size() + kSuffixReserveLength + kTimestampLength);
    }
  }

  // Use this constructor in rocksdb::CompactionFilter::Filter(),
  // since we use this in the compaction process; all we need to do is parse
  // the rocksdb::Slice, so we don't need to modify the original value, and value_ can be
  // set to nullptr
  explicit ParsedPKHashDataValue(const rocksdb::Slice& value) : ParsedInternalValue(value) {
    if (value.size() >= kPKHashDataValueSuffixLength) {
      user_value_ = rocksdb::Slice(value.data(), value.size() - kPKHashDataValueSuffixLength);
      memcpy(reserve_, value.data() + user_value_.size(), kSuffixReserveLength);
      ctime_ = DecodeFixed64(value.data() + user_value_.size() + kSuffixReserveLength);
      etime_ = DecodeFixed64(value.data() + user_value_.size() + kSuffixReserveLength + kTimestampLength);
    }
  }

  virtual ~ParsedPKHashDataValue() = default;

  void SetEtimeToValue() override {
    if (value_) {
      char* dst = const_cast<char*>(value_->data()) + value_->size() - kTimestampLength;
      EncodeFixed64(dst, etime_);
    }
  }

  void SetCtimeToValue() override {
    if (value_) {
      char* dst = const_cast<char*>(value_->data()) + value_->size() - kTimestampLength - kTimestampLength;
      EncodeFixed64(dst, ctime_);
    }
  }

  void SetReserveToValue() {
    if (value_) {
      char* dst = const_cast<char*>(value_->data()) + value_->size() - kPKHashDataValueSuffixLength;
      memcpy(dst, reserve_, kSuffixReserveLength);
    }
  }

  virtual void StripSuffix() override {
    if (value_) {
      value_->erase(value_->size() - kPKHashDataValueSuffixLength, kPKHashDataValueSuffixLength);
    }
  }

  void SetTimestamp(int64_t timestamp) {
    etime_ = timestamp;
    SetEtimeToValue();
  }

 protected:
  virtual void SetVersionToValue() override {};

 private:
  const size_t kPKHashDataValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
};
LGTM! But address the empty `SetVersionToValue` method.
The `ParsedPKHashDataValue` class is correctly implemented. However, the `SetVersionToValue` method is empty and might need to be implemented. The code changes are approved.
Do you want me to help implement the `SetVersionToValue` method or open a GitHub issue to track this task?
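The in-place setters above (`SetEtimeToValue`, `SetCtimeToValue`) depend on the suffix sitting at fixed offsets from the end of the buffer, so a timestamp can be patched without re-encoding the whole value. A toy illustration of that patch (the helper names and 8-byte little-endian encoding are assumptions modeled on rocksdb's `EncodeFixed64`, not the real implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <string>

constexpr size_t kTimestampLength = 8;

// Little-endian fixed-width helpers, modeled on EncodeFixed64/DecodeFixed64.
void PutFixed64(char* dst, uint64_t v) {
  for (int i = 0; i < 8; ++i) dst[i] = static_cast<char>((v >> (8 * i)) & 0xff);
}
uint64_t GetFixed64(const char* src) {
  uint64_t v = 0;
  for (int i = 7; i >= 0; --i) v = (v << 8) | static_cast<unsigned char>(src[i]);
  return v;
}

// Mirrors SetEtimeToValue: overwrite only the trailing 8 bytes in place,
// leaving the user payload and the rest of the suffix untouched.
void SetEtime(std::string* value, uint64_t etime) {
  PutFixed64(value->data() + value->size() - kTimestampLength, etime);
}
uint64_t GetEtime(const std::string& value) {
  return GetFixed64(value.data() + value.size() - kTimestampLength);
}
```

This is why the class only needs `value_` to be mutable for the `std::string*` constructor path; the compaction-filter path never writes.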
Actionable comments posted: 4
Outside diff range and nitpick comments (1)
src/storage/tests/pkhashes_test.cc (1)
20-20: Avoid using 'using namespace' in the global scope
Using `using namespace storage;` in the global scope can lead to potential name collisions and reduces code maintainability. It's generally better to avoid importing entire namespaces into the global scope. Instead, consider prefixing with `storage::` or using the namespace within a limited scope.
src/storage/tests/pkhashes_test.cc
Outdated
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("B_HKEYS_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
Incorrect error checking using uninitialized `type_status` in `HKeys` test
In the `HKeys` test, the `type_status` map is declared but not used. The assertion `ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());` is invalid because `type_status` is empty and uninitialized.
Apply this diff to correct the error checking:
-std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("B_HKEYS_KEY", 1);
-ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
+ASSERT_TRUE(s.ok());
If `db.Expire` returns a status, modify the code to capture and assert it:
-std::map<storage::DataType, rocksdb::Status> type_status;
-storage::Status s = db.Expire("B_HKEYS_KEY", 1);
+storage::Status s = db.Expire("B_HKEYS_KEY", 1);
ASSERT_TRUE(s.ok());
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
storage::Status s = db.Expire("B_HKEYS_KEY", 1);
ASSERT_TRUE(s.ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
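It is worth spelling out why the original assertion could never fail: `std::map::operator[]` default-constructs a value for a missing key, and a default-constructed `rocksdb::Status` reports OK. A stand-in sketch (the `Status` struct below is a hypothetical stand-in for illustration, not the real rocksdb class):

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical stand-in: default-constructed means "ok", like rocksdb::Status.
struct Status {
  std::string msg;  // empty => ok
  bool ok() const { return msg.empty(); }
};

enum class DataType { kHashes };

// Returns what the flawed assertion checks: operator[] inserts a
// default (OK) Status for the missing key, so this is always true.
bool vacuous_ok_check() {
  std::map<DataType, Status> type_status;  // declared, never written to
  return type_status[DataType::kHashes].ok();
}
```

Because the lookup itself inserts an OK entry, the assertion passes whether or not `Expire` succeeded, which is exactly the bug the review points out.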
void SetUp() override {
  std::string path = "./db/pkhashes";
  pstd::DeleteDirIfExist(path);
  mkdir(path.c_str(), 0755);
  storage_options.options.create_if_missing = true;
  s = db.Open(storage_options, path);
}
Check the return status of `db.Open` in `SetUp()`
The return status of `db.Open(storage_options, path);` is assigned to `s`, but there is no check to verify if the operation was successful. Ignoring the status may lead to unexpected behavior if the database fails to open.
Apply this diff to add an assertion to ensure the database opens successfully:
s = db.Open(storage_options, path);
+ASSERT_TRUE(s.ok());
Committable suggestion
void SetUp() override {
  std::string path = "./db/pkhashes";
  pstd::DeleteDirIfExist(path);
  mkdir(path.c_str(), 0755);
  storage_options.options.create_if_missing = true;
  s = db.Open(storage_options, path);
  ASSERT_TRUE(s.ok());
}
std::map<storage::DataType, rocksdb::Status> type_status;
int ret = db->Expire(key, 1);
if ((ret == 0) || !type_status[storage::DataType::kHashes].ok()) {
Unused variable `type_status` and incorrect error checking in `make_expired`
The variable `type_status` is declared but not used effectively. The condition `!type_status[storage::DataType::kHashes].ok()` is invalid because `type_status` is empty and uninitialized. This leads to improper error handling in the `make_expired` function.
Apply this diff to remove the unused variable and correct the error checking:
-static std::map<storage::DataType, rocksdb::Status> type_status;
int ret = db->Expire(key, 1);
-if ((ret == 0) || !type_status[storage::DataType::kHashes].ok()) {
+if (ret == 0) {
return false;
}
Alternatively, if you need to check the status returned by `Expire`, modify the code as follows:
-static std::map<storage::DataType, rocksdb::Status> type_status;
-int ret = db->Expire(key, 1);
+storage::Status s = db->Expire(key, 1);
-if ((ret == 0) || !type_status[storage::DataType::kHashes].ok()) {
+if (!s.ok()) {
return false;
}
Committable suggestion
int ret = db->Expire(key, 1);
if (ret == 0) {
This suggestion removes the unused `type_status` variable and simplifies the condition to check only the return value of `db->Expire(key, 1)`.
Note: The second diff snippet provided in the review comment is also valid and could be used as an alternative solution. If you prefer that approach, the suggestion would be:
storage::Status s = db->Expire(key, 1);
if (!s.ok()) {
src/storage/tests/pkhashes_test.cc
Outdated
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("HDEL_TIMEOUT_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
Incorrect error checking using uninitialized `type_status` in `HDel` test
In the `HDel` test, the `type_status` map is declared but not populated. The assertion `ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());` will not function correctly since `type_status` is empty, leading to invalid error checking.
Apply this diff to correct the error checking:
-std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("HDEL_TIMEOUT_KEY", 1);
-ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
+ASSERT_TRUE(s.ok());
If `db.Expire` returns a status, capture and check it:
-std::map<storage::DataType, rocksdb::Status> type_status;
-storage::Status s = db.Expire("HDEL_TIMEOUT_KEY", 1);
+storage::Status s = db.Expire("HDEL_TIMEOUT_KEY", 1);
ASSERT_TRUE(s.ok());
Committable suggestion
storage::Status s = db.Expire("HDEL_TIMEOUT_KEY", 1);
ASSERT_TRUE(s.ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
Force-pushed from f6d3d47 to 1c6ef46 (Compare)
Actionable comments posted: 36
🧹 Outside diff range and nitpick comments (15)
include/pika_command.h (1)
316-316
: New CmdFlags enum value added for PKHash. The addition of
kCmdFlagsPKHash
to theCmdFlags
enum is consistent with the introduction of new PKHash commands. This flag will likely be used to identify and handle PKHash-specific operations.Consider removing the TODO comment
// TODO(DDD)
as it doesn't provide any meaningful information. If there's a specific task related to this flag, it would be better to create a separate issue for tracking.include/pika_pkhash.h (3)
183-183
: Consider translating code comments to English for consistencyThe comment at line 183 is in Chinese:
// 每个命令的参数组成不同。
To maintain consistency and readability across the codebase, it's recommended to use English for code comments.
323-323
: Reminder: Address the TODO comment and consider translating it to EnglishThe comment
// TODO(DDD):接受 ttl 参数。
indicates a pending implementation to accept the TTL parameter. Please implement this functionality and consider translating the comment to English for consistency.Would you like assistance in implementing the TTL parameter handling or opening a GitHub issue to track this task?
483-487
: Inconsistent method declaration forClear()
methodIn
PKHScanCmd
, theClear()
method is declared asvirtual void Clear() {}
, whereas in other classes it is declared asvoid Clear() override {}
. For consistency and to ensure correct overriding of the base class method, consider using theoverride
keyword.src/pika_pkhash.cc (1)
505-505
: Unresolved TODO comment inPKHMSetexCmd::Do
A TODO comment is present:
// TODO(DDD) 这个是干啥的?
indicating uncertainty about the purpose ofAddSlotKey("h", key_, db_);
. This should be addressed to ensure code clarity.Would you like assistance in determining the purpose of
AddSlotKey
here or help in resolving this TODO?src/storage/src/redis.h (3)
86-87
: Align member initializer list with declaration order.In the constructor of
KeyStatisticsDurationGuard
, the initializer list is:KeyStatisticsDurationGuard(Redis* that, const DataType type, const std::string& key) : ctx(that), key(key), start_us(pstd::NowMicros()), dtype(type) {}Consider reordering the initializer list to match the order of member declarations to improve readability and prevent potential compiler warnings.
148-149
: Consistent parameter passing inAppend
method.The
Append
method signature is:Status Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_sec, std::string& out_new_value);For consistency and clarity, consider passing
out_new_value
as a pointer (std::string*
) instead of a reference, aligning with howexpired_timestamp_sec
is passed.
532-533
: Clean up commented code or provide clarification.The code contains commented lines:
// TODO(wangshaoyi): seperate env for each rocksdb instance // rocksdb::Env* env_ = nullptr;Consider:
- Removing the commented code if it's obsolete.
- Providing a clearer TODO comment or tracking it in an issue if it's a planned future enhancement.
Unresolved TODOs can clutter the codebase and may lead to confusion.
src/storage/include/storage/storage.h (2)
Line range hint
98-102
: Fix the typo ininvaild_keys
toinvalid_keys
The member variable
invaild_keys
is misspelled. Please correct it toinvalid_keys
throughout the code to prevent confusion and maintain code quality.Apply the following diff:
struct KeyInfo { uint64_t keys = 0; uint64_t expires = 0; uint64_t avg_ttl = 0; - uint64_t invaild_keys = 0; + uint64_t invalid_keys = 0; KeyInfo() : keys(0), expires(0), avg_ttl(0), invaild_keys(0) {} KeyInfo(uint64_t k, uint64_t e, uint64_t a, uint64_t i) - : keys(k), expires(e), avg_ttl(a), invaild_keys(i) {} + : keys(k), expires(e), avg_ttl(a), invalid_keys(i) {} KeyInfo operator+(const KeyInfo& info) { KeyInfo res; res.keys = keys + info.keys; res.expires = expires + info.expires; res.avg_ttl = avg_ttl + info.avg_ttl; - res.invaild_keys = invaild_keys + info.invaild_keys; + res.invalid_keys = invalid_keys + info.invalid_keys; return res; } };
Line range hint
98-102
: Correct the calculation ofavg_ttl
when combiningKeyInfo
instancesAdding
avg_ttl
directly may not compute the accurate average when combiningKeyInfo
objects. Sinceavg_ttl
represents an average, you should calculate the combined average based on the total accumulated TTL and the total number of keys.Consider modifying the
KeyInfo
structure to keep track of the total TTL and total key count, then computeavg_ttl
accordingly.src/storage/src/redis_pkhashes.cc (1)
75-77
: Initializemeta_value_buf
appropriatelyThe buffer
meta_value_buf[4]
is initialized with{0}
, but ensure that it's properly set before being used inEncodeFixed32
. Uninitialized or partially initialized buffers can lead to undefined behavior.src/storage/src/storage.cc (1)
540-543
: Remove commented-out code for clarity.The code block for
PKHLenForce
is commented out. If this code is no longer needed, consider removing it to improve readability and maintainability.src/pika_command.cc (3)
555-556
: Translate comments to English and address the questionsThe comments on lines 555-556 are in Chinese and include TODOs:
// TODO(DDD) 为啥vals是慢的命令。 // TODO(DDD) 这些标志位都是啥意思。
Please translate these comments to English and address the concerns raised about why
Vals
is marked as a slow command and clarify the meanings of the command flags.Do you need assistance in determining the command performance classifications and understanding the flag definitions?
558-559
: Clarify command flags forPKHValsCmd
Additionally, there is uncertainty about the meaning of the command flags:
// TODO(DDD) 这些标志位都是啥意思。
Please translate this comment to English and ensure that the flags used for
PKHValsCmd
accurately reflect its behavior and performance characteristics.Would you like assistance in reviewing the command flags and ensuring they align with the intended functionality?
962-963
: Improve formatting of the constructor initialization listThe initialization list in the
Cmd
constructor spans a single line, which can reduce readability:: name_(std::move(name)), arity_(arity), flag_(flag), aclCategory_(aclCategory), cache_missed_in_rtc_(false) {}
Consider formatting each member initialization on a separate line for better readability.
Apply this diff to enhance readability:
- : name_(std::move(name)), arity_(arity), flag_(flag), aclCategory_(aclCategory), cache_missed_in_rtc_(false) {} + : name_(std::move(name)), + arity_(arity), + flag_(flag), + aclCategory_(aclCategory), + cache_missed_in_rtc_(false) {}
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (9)
- include/pika_command.h (5 hunks)
- include/pika_pkhash.h (1 hunks)
- src/pika_command.cc (10 hunks)
- src/pika_pkhash.cc (1 hunks)
- src/storage/include/storage/storage.h (18 hunks)
- src/storage/src/redis.h (16 hunks)
- src/storage/src/redis_pkhashes.cc (1 hunks)
- src/storage/src/storage.cc (34 hunks)
- src/storage/tests/pkhashes_test.cc (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- src/storage/tests/pkhashes_test.cc
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc
[performance] 554-554: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 763-763: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 847-847: Searching before insertion is not necessary.
(stlFindInsert)
🔇 Additional comments (21)
include/pika_command.h (2)
141-160
: New PKHash command constants addedThe addition of these new command constants for PKHash operations enhances the functionality of the Pika database. These constants align with the PR objectives, which mentioned introducing new commands for hash operations.
315-315
: Minor formatting changesThese changes improve code readability by adding a comma at the end of the
kCmdFlagsSlow
enum value and adjusting the indentation of a member variable in theUnblockTaskArgs
struct.Also applies to: 506-506
src/pika_pkhash.cc (9)
63-65
: Potential error in parsing timestamp value inPKHExpireatCmd::DoInitial
As with previous parsing issues, ensure that the condition correctly checks the return value of
pstd::string2int
when parsingtimestamp_
.
68-73
: Unnecessary iterator increments inPKHExpireatCmd::DoInitial
Similar to
PKHExpireCmd
, remove unnecessaryiter++;
statements to improve code clarity.
74-76
: Potential error in parsingnumfields_
inPKHExpireatCmd::DoInitial
Ensure that the parsing condition for
numfields_
correctly checks for successful parsing.
110-113
: Potential error in parsingnumfields_
inPKHExpiretimeCmd::DoInitial
Check that the return value of
pstd::string2int
is correctly evaluated when parsingnumfields_
.
145-147
: Potential error in parsingnumfields_
inPKHPersistCmd::DoInitial
Ensure that you are correctly checking the return value of
pstd::string2int
when parsingnumfields_
.
178-181
: Potential error in parsingnumfields_
inPKHTTLCmd::DoInitial
As in previous comments, verify that the parsing of
numfields_
uses the correct condition for error checking.
182-183
: Assignment tofields_
may be incorrect inPKHTTLCmd::DoInitial
Ensure that
fields_
is assigned correctly after adjusting for any changes toiter
.
216-217
:⚠️ Potential issueResponse for not found key should be
$-1
inPKHGetCmd::Do
When the key or field is not found, the response is currently set to
"$-1"
. According to the Redis protocol, the correct response should be an empty bulk string.Apply this diff to correct the response:
- res_.AppendContent("$-1"); + res_.AppendContent("$-1\r\n");Also, ensure that the response formatting complies with the Redis RESP protocol.
Likely invalid or redundant comment.
619-656
:⚠️ Potential issuePotential response size exceeding limit in
PKHGetAllCmd::Do
When accumulating the
raw
response data, there is a check againstraw_limit
, but if the limit is exceeded, the function returns an error without clearing or resetting the accumulated data. This may lead to inconsistent states.Ensure that when the response size exceeds the limit, appropriate action is taken to handle or truncate the response safely.
Run the following script to verify the handling of large responses:
src/storage/src/redis.h (3)
17-18
: Verify the necessity of the new include statements.The added include statements are:
#include "pstd/include/env.h" #include "pstd/include/pika_codis_slot.h"Please ensure these headers are required for the new functionality. Unnecessary includes can increase compilation time and introduce unintended dependencies.
252-295
: Confirm implementation consistency of PK Hash commands.New PK Hash methods have been added, such as:
Status PKHExpire(...)
Status PKHGet(...)
Status PKHSet(...)
- ...
Please verify that:
- Method signatures are consistent with existing patterns in the class.
- Parameter types and names are appropriate.
- Documentation/comments are provided for new methods.
- Implementation of these methods in the corresponding source files is complete and correct.
471-472
: Review handling ofDataType::kPKHashes
inExpectedStale
method.The
ExpectedStale
method has been updated:case DataType::kZSets: case DataType::kSets: case DataType::kHashes: case DataType::kPKHashes: { ParsedBaseMetaValue parsed_meta_value(meta_value); return (parsed_meta_value.IsStale() || parsed_meta_value.Count() == 0); }Ensure that:
- The logic for
DataType::kPKHashes
correctly assesses staleness.- All other methods that handle data types are updated accordingly to include
kPKHashes
.src/storage/src/redis_pkhashes.cc (1)
128-136
:⚠️ Potential issueEnsure consistent use of
base_meta_key
In the
s.IsNotFound()
block of thePKHSet
method, you're usingbase_meta_key.Encode()
when putting the meta value into the batch. Earlier in the code, you usedkey
directly for the same purpose. For consistency and to prevent potential bugs, consider usingbase_meta_key.Encode()
consistently.Apply this diff to maintain consistency:
- batch.Put(handles_[kMetaCF], key, hashes_meta_value.Encode()); + batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());Likely invalid or redundant comment.
src/storage/src/storage.cc (1)
7-21
: Include statements and namespace adjustments look good.The additions of header files and namespace usages are appropriate and aid in code clarity.
src/pika_command.cc (5)
19-19
: Approved inclusion of the new header fileThe inclusion of
"include/pika_pkhash.h"
is necessary for the newPKHash
commands and is appropriately added.
486-488
: Ensure consistent command flags and arity forPKHSetCmd
The
PKHSetCmd
is correctly initialized. Please confirm that the arity-4
accurately reflects the command's expected arguments and is consistent with similar commands.
Line range hint
1019-1058
: Check the logic inDoCommand
for potential issuesIn the
Cmd::DoCommand
method, ensure that the conditions and flow for cache reading, updating, and database operations are consistent and handle all necessary cases. Specifically, verify that:
- The conditions for reading from cache (
IsNeedReadCache()
) and updating the cache (IsNeedUpdateCache()
) are correctly applied.- The handling of cache misses (
res().CacheMiss()
) and thecache_missed_in_rtc_
flag are appropriately integrated into the logic.To validate the logic flow, you can review the method and test with various command scenarios.
544-546
: Confirm command parameters and arity forPKHMSetexCmd
The comment indicates that
PKHMSetexCmd
may require TTL and the arity might be-6
. Currently, it's set to-4
. Please verify the required parameters and update the arity and code accordingly.To check if
PKHMSetexCmd
requires TTL and confirm its usage, you can use the following script:#!/bin/bash # Description: Search for definitions and usages of `PKHMSetexCmd` to confirm parameters. # Test: Find all instances where `PKHMSetexCmd` is used. Expect: Correct handling of TTL and arity. rg --type cpp --type h --word-regexp 'PKHMSetexCmd'
516-518
: Clarify arity and parameters forPKHSetexCmd
There is uncertainty about the arity of
PKHSetexCmd
. Currently, it is set to-4
. Please verify if this command requires additional parameters (such as TTL) that might affect the arity, and update it accordingly.To verify the usage of
PKHSetexCmd
, you can search for its usage patterns:
include/pika_pkhash.h
Outdated
private:
  std::string key_;
  bool is_force_;  // whether to force reading from the DB
Uninitialized member variable `is_force_`
The member variable `is_force_` is declared but not initialized. This may lead to undefined behavior if it is used before being assigned a value. Consider initializing it in the constructor or where appropriate.
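A minimal sketch of the suggested fix, using an in-class default member initializer so the flag is defined under every constructor (the class and member names below are hypothetical reductions of the reviewed code, not Pika's actual declaration):

```cpp
#include <string>

// Hypothetical reduction of the reviewed class.
class ExampleCmd {
 public:
  ExampleCmd() = default;                               // is_force_ is still well-defined
  explicit ExampleCmd(bool force) : is_force_(force) {} // explicit override of the default
  bool is_force() const { return is_force_; }

 private:
  std::string key_;
  bool is_force_ = false;  // in-class initializer: no uninitialized read is possible
};
```

With the in-class initializer, even constructors that forget to mention `is_force_` start it at `false` instead of leaving indeterminate storage.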
int32_t ret = 0;
s_ = db_->storage()->PKHSet(key_, field_, value_, &ret);
if (s_.ok()) {
  res_.AppendContent(":" + std::to_string(ret));
  AddSlotKey("h", key_, db_);
} else if (s_.IsInvalidArgument()) {
  res_.SetRes(CmdRes::kMultiKey);
} else {
  res_.SetRes(CmdRes::kErrOther, s_.ToString());
}
Missing error handling in `PKHSetCmd::Do`
The function lacks error handling for potential database operation failures not covered by existing conditions. Ensure that all possible `Status` cases are properly handled.
s_ = db_->storage()->HDel(key_, fields_, &deleted_);

if (s_.ok() || s_.IsNotFound()) {
  res_.AppendInteger(deleted_);
} else if (s_.IsInvalidArgument()) {
  res_.SetRes(CmdRes::kMultiKey);
} else {
  res_.SetRes(CmdRes::kErrOther, s_.ToString());
}
}
Incorrect use of `HDel` in `PKHDelCmd::Do`
The command `PKHDelCmd` uses `db_->storage()->HDel`, which may not be appropriate for `PKHDelCmd`. Verify that the correct database method is used for PK hash deletion. If a specific method for `PKHDel` exists, replace it accordingly.
s_.ToString().substr(0, std::char_traits<char>::length(ErrTypeMessage)) == ErrTypeMessage) {
  res_.SetRes(CmdRes::kMultiKey);
} else if (s_.IsCorruption() && s_.ToString() == "Corruption: hash value is not an integer") {
🛠️ Refactor suggestion
Inefficient string comparison in PKHIncrbyCmd::Do
The condition uses `substr` and `std::char_traits<char>::length` to compare error messages, which can be simplified. Consider using `s_.ToString().starts_with(ErrTypeMessage)` for clarity and efficiency if available in your C++ version (C++20 and later). Otherwise, use `s_.ToString().compare(0, strlen(ErrTypeMessage), ErrTypeMessage) == 0`.
  iter++;
  iter++;
  iter++;
  iter++;
  iter++;
  if (pstd::string2int(argv_[4].data(), argv_[4].size(), &numfields_) == 0) {
🛠️ Refactor suggestion
Unnecessary iterator increments
Multiple `iter++;` statements are present, but `iter` is not used afterwards. These increments do not affect the parsing logic and can be removed to improve code clarity.
Apply this diff to remove unnecessary increments:
- iter++;
- iter++;
- iter++;
- iter++;
- iter++;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
  if (pstd::string2int(argv_[4].data(), argv_[4].size(), &numfields_) == 0) {
src/storage/src/storage.cc
Outdated
Status Storage::Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_sec,
                       std::string& out_new_value) {
🛠️ Refactor suggestion
Ensure consistent parameter passing style for output parameters.
In the `Append` method, `out_new_value` is passed as a non-const reference (`std::string&`). For consistency with other methods that use pointers for output parameters (e.g., `std::string* value`), consider changing `out_new_value` to a pointer (`std::string* out_new_value`).
Apply this diff to update the parameter:
-Status Storage::Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_sec,
- std::string& out_new_value) {
+Status Storage::Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_sec,
+ std::string* out_new_value) {
src/storage/src/storage.cc
Outdated
Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl) {
  auto& inst = GetDBInstance(key);
  return inst->PKHSetex(key, field, value, ttl);
}
🛠️ Refactor suggestion
Use `int64_t` for the TTL parameter to accommodate larger expiration times.
In the `PKHSetex` method, the TTL parameter `ttl` is currently an `int32_t`. For consistency and to support larger TTL values, consider changing it to `int64_t`.
Apply this diff:
-Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl) {
+Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int64_t ttl) {
Status Storage::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int32_t ttl) {
  auto& inst = GetDBInstance(key);
  return inst->PKHIncrby(key, field, value, ret, ttl);
}
🛠️ Refactor suggestion
Use `int64_t` for the TTL parameter to support larger values.
In the `PKHIncrby` method, the TTL parameter `ttl` is of type `int32_t`. To handle larger TTL values and maintain consistency with other methods like `Setex`, consider changing `ttl` to `int64_t`.
Apply this diff:
-Status Storage::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int32_t ttl) {
+Status Storage::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int64_t ttl) {
Status Storage::PKHExpire(const Slice& key, int32_t ttl, int32_t numfields, const std::vector<std::string>& fields,
                          std::vector<int32_t>* rets) {
  auto& inst = GetDBInstance(key);
  return inst->PKHExpire(key, ttl, numfields, fields, rets);
}
🛠️ Refactor suggestion
Consider using `int64_t` for the TTL parameter in `PKHExpire`.
The `PKHExpire` method uses `int32_t ttl` as the TTL parameter. To support larger TTL values and align with other methods, consider changing `ttl` to `int64_t`.
Apply this diff:
-Status Storage::PKHExpire(const Slice& key, int32_t ttl, int32_t numfields, const std::vector<std::string>& fields,
+Status Storage::PKHExpire(const Slice& key, int64_t ttl, int32_t numfields, const std::vector<std::string>& fields,
std::vector<int32_t>* rets) {
  std::unique_ptr<Cmd> ehvalsptr = std::make_unique<PKHValsCmd>(
      kCmdNamePKHVals, 2, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsSlow);
  cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHVals, std::move(ehvalsptr)));
Address the TODO and clarify command performance for PKHValsCmd
The comment questions why `Vals` is marked as a slow command:
// TODO(DDD) 为啥vals是慢的命令。 ("Why is vals a slow command?")
Please translate the comment to English and verify whether `PKHValsCmd` should be marked with `kCmdFlagsSlow`. If the `PKHVals` operation is time-consuming due to data size or processing complexity, marking it as slow may be appropriate.
Actionable comments posted: 16
🧹 Outside diff range and nitpick comments (17)
CMakeLists.txt (3)
39-39
: Consider making the build type configurable. Setting `CMAKE_BUILD_TYPE` to "Debug" is useful for development but may not be suitable for all build scenarios. This setting enables debug symbols and disables optimizations, which can impact performance in production environments. Consider making this configurable, for example:

if(NOT CMAKE_BUILD_TYPE)
  set(CMAKE_BUILD_TYPE "Debug")
endif()

This allows users to override the build type when running CMake, while defaulting to "Debug" if not specified.
40-40
: Consider conditionally setting compiler flags based on build type. The current setup applies debug-oriented compiler flags globally, which may not be suitable for all build scenarios, especially for release builds. Consider setting these flags conditionally based on the build type. For example:

if(CMAKE_BUILD_TYPE STREQUAL "Debug")
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O0 -Wall -g3 -ggdb -fno-inline -fno-builtin-memcmp")
else()
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O2 -Wall")
endif()

This approach allows for different compiler flags for debug and release builds, providing better performance in release mode while maintaining debugging capabilities in debug mode.
if(CMAKE_BUILD_TYPE STREQUAL "Debug") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O0 -Wall -g3 -ggdb -fno-inline -fno-builtin-memcmp") else() set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O2 -Wall") endif()This approach allows for different compiler flags for debug and release builds, providing better performance in release mode while maintaining debugging capabilities in debug mode.
38-40
: Review the impact of global debug settings on build process.The changes made to
CMAKE_BUILD_TYPE
andCMAKE_CXX_FLAGS
are geared towards debugging. While this is beneficial for development, it may have unintended consequences:
- Performance impact in non-debug scenarios.
- Potential issues when building for different environments (e.g., production).
Consider implementing a more flexible configuration system that allows for easy switching between debug and release builds. This could involve:
- Using CMake options to control build type.
- Implementing separate flag sets for debug and release builds.
- Allowing for easy overrides of these settings from the command line or a configuration file.
These changes will make the build system more robust and adaptable to different use cases.
src/storage/src/redis.h (4)
116-120
: LGTM: Consistent addition of optional `prefetch_meta` parameter. The addition of the `std::string&& prefetch_meta = {}` parameter to multiple methods is a good optimization, allowing pre-fetched metadata to be passed in. This change is consistent across various methods and maintains backwards compatibility. Consider adding documentation comments for the `prefetch_meta` parameter to explain its purpose and usage.
Also applies to: 386-387, 396-397
255-297
: LGTM: New PK Hash Commands added consistently.The addition of PK Hash Commands (PKHExpire, PKHGet, PKHSet, etc.) extends the existing Hash functionality. The method signatures are consistent with the existing codebase style and follow a clear naming convention.
Consider adding documentation comments for these new PK Hash methods to explain their purpose, how they differ from regular Hash commands, and any specific usage guidelines.
278-278
: Clarify the status of the commented-outPKHLenForce
method.The
PKHLenForce
method is currently commented out. If this method is planned for future implementation, consider adding a TODO comment explaining its purpose and when it will be implemented. If it's no longer needed, it may be better to remove it entirely to avoid confusion.
299-300
: Clarify the status of the commented-outPKHScanx
method.The
PKHScanx
method is currently commented out. If this method is planned for future implementation, consider adding a TODO comment explaining its purpose and when it will be implemented. If it's no longer needed, it may be better to remove it entirely to avoid confusion.src/storage/src/redis_pkhashes.cc (5)
146-207
: Remove unused variable and commented-out codeThe
PKHExpire
function is well-implemented, correctly handling the expiration setting for specific fields in a hash. However, there are a few minor issues to address:
- The
is_stale
variable (line 156) is declared but never used.- There are commented-out lines (160-161) that seem to be leftover from development.
Consider applying the following changes:
- Remove the unused
is_stale
variable:- bool is_stale = false;
- Remove the commented-out code:
- // const rocksdb::Snapshot* snapshot; - // ScopeSnapshot ss(db_, &snapshot);These changes will improve code cleanliness and remove potential confusion for future developers.
209-276
: Remove unused variableThe
PKHExpireat
function is well-implemented, correctly handling the expiration setting for specific fields in a hash using an absolute timestamp. However, there's an unused variable that should be removed:The
is_stale
variable (line 226) is declared but never used.Consider removing the unused variable:
- bool is_stale = false;
This change will improve code cleanliness and remove potential confusion for future developers.
278-335
: Remove unused variable and unnecessary WriteBatchThe
PKHExpiretime
function is well-implemented, correctly retrieving expiration times for specific fields in a hash. However, there are two minor issues to address:
- The
is_stale
variable (line 283) is declared but never used.- A
WriteBatch batch
(line 280) is declared but never used in this read-only operation.Consider applying the following changes:
- Remove the unused
is_stale
variable:- bool is_stale = false;
- Remove the unnecessary
WriteBatch
declaration:- rocksdb::WriteBatch batch;
These changes will improve code cleanliness and remove potential confusion for future developers.
337-397
: Remove unused variable and unnecessary WriteBatchThe
PKHTTL
function is well-implemented, correctly retrieving TTL (Time To Live) for specific fields in a hash. However, there are two minor issues to address:
- The
is_stale
variable (line 342) is declared but never used.- A
WriteBatch batch
(line 339) is declared but never used in this read-only operation.Consider applying the following changes:
- Remove the unused
is_stale
variable:- bool is_stale = false;
- Remove the unnecessary
WriteBatch
declaration:- rocksdb::WriteBatch batch;
These changes will improve code cleanliness and remove potential confusion for future developers.
399-454
: Remove unused variableThe
PKHPersist
function is well-implemented, correctly removing expiration for specific fields in a hash. However, there's one minor issue to address:The
is_stale
variable (line 404) is declared but never used.Consider removing the unused variable:
- bool is_stale = false;
This change will improve code cleanliness and remove potential confusion for future developers.
src/storage/src/redis_strings.cc (1)
Line range hint
1-1746
: Overall code improvements with room for further enhancementsThe changes in this file primarily focus on improving code readability through better formatting of function signatures and code blocks. These changes are positive and make the code easier to understand and maintain.
However, there's a consistent pattern of using string concatenation for error messages throughout the file. While this works, it's recommended to use a formatting library like
fmt
orstd::format
(C++20) for better performance and maintainability of error messages.Consider applying the formatting library suggestion consistently across all error messages in this file and potentially throughout the entire codebase for uniformity.
src/storage/src/pkhash_data_value_format.h (2)
47-47
: Mixed language in code comment.The comment on line 47 contains Chinese characters:
dst += kTimestampLength; // todo(DDD) 待确认,看这个是否需要。To maintain consistency and ensure readability for all contributors, please provide comments in English.
Update the comment to English or address the TODO if it's no longer necessary:
- dst += kTimestampLength; // todo(DDD) 待确认,看这个是否需要。 + dst += kTimestampLength; // TODO(DDD): Confirm if this increment is necessary.
53-53
: Remove unused private member variablekDefaultValueSuffixLength
.The private member
kDefaultValueSuffixLength
declared on line 53 is not used within thePKHashDataValue
class. Unused code can lead to confusion and should be removed to improve code clarity.Apply this diff to remove the unused variable:
- const size_t kDefaultValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
src/storage/tests/pkhashes_test.cc (2)
104-106
: Translate comments to English for consistency. There are several comments written in Chinese. For better maintainability and to ensure all team members can understand the code, please translate these comments to English.
Example lines:
- Lines 104-106: Multiline comment in Chinese.
- Line 328:
// 从这里开始有问题。 ("There is a problem starting from here.")
- Line 344:
// 实际上是 key value 过期了。 ("Actually, the key's value has expired.")
- Line 375:
// TODO(DDD:expire)这里可以仿着写 expire的相关测试。 ("Expire-related tests can be written following this pattern.")
Also applies to: 328-328, 344-344, 375-375
375-375
: Implement expire-related tests as indicated by the TODO. There is a TODO comment on line 375 suggesting that expire-related tests should be written. Implementing these tests will help ensure the correctness of the expiration functionality.
Would you like me to assist in generating these test cases or open a new GitHub issue to track this task?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (6)
- CMakeLists.txt (1 hunks)
- src/storage/src/pkhash_data_value_format.h (1 hunks)
- src/storage/src/redis.h (17 hunks)
- src/storage/src/redis_pkhashes.cc (1 hunks)
- src/storage/src/redis_strings.cc (40 hunks)
- src/storage/tests/pkhashes_test.cc (1 hunks)
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc
[performance] 554-554: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 763-763: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 847-847: Searching before insertion is not necessary.
(stlFindInsert)
🔇 Additional comments (17)
src/storage/src/redis.h (2)
17-19
: LGTM: Include statements updated appropriately.The new include statements for
pstd/include/env.h
,pstd/include/pika_codis_slot.h
, andsrc/custom_comparator.h
have been added, which are likely necessary for the new PK Hash functionality. Thesrc/redis_streams.h
include has been moved, which is a minor organizational change.Also applies to: 24-24
474-475
: LGTM:ExpectedStale
method updated to includeDataType::kPKHashes
.The
ExpectedStale
method has been correctly updated to include the newDataType::kPKHashes
in its switch statement. This ensures consistent handling of the new PK Hash type alongside existing types.src/storage/src/redis_pkhashes.cc (6)
23-64
: LGTM: Well-structured implementation of PKHGetThe
PKHGet
function is implemented correctly, with proper error handling, type checking, and use of snapshots for consistency. It efficiently retrieves the hash field value while handling various edge cases such as stale data and non-existent keys.
542-545
: LGTM: Efficient implementation of PKHExistsThe
PKHExists
function is implemented efficiently by reusing thePKHGet
function. This approach reduces code duplication and maintains consistency in behavior between the two operations.
620-654
: LGTM: Efficient implementation of PKHLenThe
PKHLen
function is well-implemented, efficiently handling prefetched metadata and various error cases. It correctly returns the number of fields in a hash while properly managing different scenarios such as stale or non-existent keys.
658-667
: LGTM: Efficient implementation of PKHStrlenThe
PKHStrlen
function is implemented efficiently by reusing thePKHGet
function to retrieve the value and then returning its length. This approach reduces code duplication and maintains consistency in behavior between the two operations.
942-1274
: LGTM: Well-implemented hash operationsThe remaining functions (PKHMGet, PKHKeys, PKHVals, PKHGetall, PKHScan, PKHashesExpire) are well-implemented, correctly handling various operations on hash data structures. They follow consistent patterns, handle error cases appropriately, and use suitable data structures for their operations.
Minor suggestions:
- Consider adding more detailed comments for complex logic, especially in the PKHScan function.
- In the PKHashesExpire function, the TODO comment on line 1263 should be addressed in a future update.
Overall, these functions provide a robust implementation of the required hash operations.
66-144
:⚠️ Potential issueConsider updating TTL when values are equal
The
PKHSet
function is well-implemented overall, using a WriteBatch for atomic operations and handling various scenarios correctly. However, there's a potential issue with TTL handling when updating an existing field.In the case where the new value is equal to the existing value (lines 105-107), the function returns early without updating the TTL or other metadata. If updating the TTL is desired even when the value hasn't changed, you should adjust the logic to handle this case.
Consider modifying the code as follows:
if (data_value == value.ToString()) { - return Status::OK(); + PKHashDataValue internal_value(value); + batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode()); + statistic++; } else { PKHashDataValue internal_value(value); batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode()); statistic++; }This change ensures that the TTL is updated even when the value remains the same.
Likely invalid or redundant comment.
src/storage/src/redis_strings.cc (9)
8-8
: Approved: Improved include statement ordering.Moving the include to the top of the list improves consistency and follows common C++ coding standards.
66-67
: Approved: Improved function signature formatting.The multi-line format for the function signature enhances readability, especially for functions with many parameters.
126-126
: Approved: Improved function signature formatting.The multi-line format for the function signature enhances readability, especially for functions with many parameters.
222-223
: Approved: Improved function signature formatting.The multi-line format for the function signature enhances readability, especially for functions with many parameters.
234-234
: Approved: Use of range-based for loop.The change to a range-based for loop is a good modernization of the code. It improves readability and reduces the potential for errors associated with manual index management.
526-527
: Approved: Improved function signature formatting.The multi-line format for the function signature enhances readability, especially for functions with many parameters.
562-574
: Approved: Improved code formatting.The changes in indentation and line breaks improve the overall readability of the code.
1671-1673
: Approved: Improved function signature formatting.The multi-line format for the function signature enhances readability, especially for functions with many parameters.
1695-1696
: Approved: Improved code formatting.The changes in line breaks improve the overall readability of the code, especially for long conditions in if statements.
Also applies to: 1713-1714
src/storage/src/redis_pkhashes.cc
Outdated
Status Redis::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int32_t ttl) {
  *ret = 0;
  rocksdb::WriteBatch batch;
  ScopeRecordLock l(lock_mgr_, key);

  uint64_t version = 0;
  uint32_t statistic = 0;
  std::string old_value;
  std::string meta_value;

  BaseMetaKey base_meta_key(key);
  Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
  char value_buf[32] = {0};
  char meta_value_buf[4] = {0};
  if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
    if (ExpectedStale(meta_value)) {
      s = Status::NotFound();
    } else {
      return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
                                     ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
                                     ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
    }
  }
  if (s.ok()) {
    ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
    if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.Count() == 0) {
      version = parsed_hashes_meta_value.UpdateVersion();
      parsed_hashes_meta_value.SetCount(1);
      parsed_hashes_meta_value.SetEtime(0);
      batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
      HashesDataKey hashes_data_key(key, version, field);
      Int64ToStr(value_buf, 32, value);
      PKHashDataValue internal_value(value_buf);
      batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode());
      *ret = value;
    } else {
      version = parsed_hashes_meta_value.Version();
      HashesDataKey hashes_data_key(key, version, field);
      s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &old_value);
      if (s.ok()) {
        ParsedPKHashDataValue parsed_internal_value(&old_value);
        parsed_internal_value.StripSuffix();
        int64_t ival = 0;
        if (StrToInt64(old_value.data(), old_value.size(), &ival) == 0) {
          return Status::Corruption("hash value is not an integer");
        }
        if ((value >= 0 && LLONG_MAX - value < ival) || (value < 0 && LLONG_MIN - value > ival)) {
          return Status::InvalidArgument("Overflow");
        }
        *ret = ival + value;
        Int64ToStr(value_buf, 32, *ret);
        PKHashDataValue internal_value(value_buf);
        batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode());
        statistic++;
      } else if (s.IsNotFound()) {
        Int64ToStr(value_buf, 32, value);
        if (!parsed_hashes_meta_value.CheckModifyCount(1)) {
          return Status::InvalidArgument("hash size overflow");
        }
        PKHashDataValue internal_value(value_buf);
        parsed_hashes_meta_value.ModifyCount(1);
        batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
        batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode());
        *ret = value;
      } else {
        return s;
      }
    }
  } else if (s.IsNotFound()) {
    EncodeFixed32(meta_value_buf, 1);
    HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
    version = hashes_meta_value.UpdateVersion();
    batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
    HashesDataKey hashes_data_key(key, version, field);

    Int64ToStr(value_buf, 32, value);
    PKHashDataValue internal_value(value_buf);
    batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode());
    *ret = value;
  } else {
    return s;
  }
  s = db_->Write(default_write_options_, &batch);
  UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
  return s;
}
Address unused TTL parameter
The `PKHIncrby` function is well-implemented, correctly handling the increment operation for hash fields, including various scenarios and overflow cases. However, there is one issue to address: the `ttl` parameter (line 669) is not used anywhere in the function. This might be an oversight, as the function does not apply any TTL to the incremented field.
Consider one of the following options:
- If TTL should be applied, modify the function to use the `ttl` parameter:
 PKHashDataValue internal_value(value_buf);
+internal_value.SetRelativeTimestamp(ttl);
 batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode());
Apply this change in all places where `PKHashDataValue` is created and stored.
- If TTL is not needed for this operation, remove the parameter:
-Status Redis::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int32_t ttl) {
+Status Redis::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret) {
Make sure to update any calls to this function accordingly.
Implementing one of these changes will ensure that the function behaves as expected with regard to TTL handling.
Committable suggestion was skipped due to low confidence.
Status Redis::PKHDel(const Slice& key, const std::vector<std::string>& fields, int32_t* ret) {
  uint32_t statistic = 0;
  std::vector<std::string> filtered_fields;
  std::unordered_set<std::string> field_set;
  for (const auto& iter : fields) {
    const std::string& field = iter;
    if (field_set.find(field) == field_set.end()) {
      field_set.insert(field);
      filtered_fields.push_back(iter);
    }
  }

  rocksdb::WriteBatch batch;
  rocksdb::ReadOptions read_options;
  const rocksdb::Snapshot* snapshot;

  std::string meta_value;
  int32_t del_cnt = 0;
  uint64_t version = 0;
  ScopeRecordLock l(lock_mgr_, key);
  ScopeSnapshot ss(db_, &snapshot);
  read_options.snapshot = snapshot;

  BaseMetaKey base_meta_key(key);
  Status s = db_->Get(read_options, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
  if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
    if (ExpectedStale(meta_value)) {
      s = Status::NotFound();
    } else {
      return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
                                     ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
                                     ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
    }
  }
  if (s.ok()) {
    ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
    if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.Count() == 0) {
      *ret = 0;
      return Status::OK();
    } else {
      std::string data_value;
      version = parsed_hashes_meta_value.Version();
      for (const auto& field : filtered_fields) {
        HashesDataKey hashes_data_key(key, version, field);
        s = db_->Get(read_options, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
        if (s.ok()) {
          del_cnt++;
          statistic++;
          batch.Delete(handles_[kPKHashDataCF], hashes_data_key.Encode());
        } else if (s.IsNotFound()) {
          continue;
        } else {
          return s;
        }
      }
      *ret = del_cnt;
      if (!parsed_hashes_meta_value.CheckModifyCount(-del_cnt)) {
        return Status::InvalidArgument("hash size overflow");
      }
      parsed_hashes_meta_value.ModifyCount(-del_cnt);
      batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
    }
  } else if (s.IsNotFound()) {
    *ret = 0;
    return Status::OK();
  } else {
    return s;
  }
  s = db_->Write(default_write_options_, &batch);
  UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
  return s;
}
🛠️ Refactor suggestion
Optimize field filtering process
The PKHDel function is well-implemented, correctly handling the deletion of specified fields from a hash. However, there's an opportunity to optimize the field filtering process:
Consider using std::unordered_set::insert directly instead of searching before insertion. This can be more efficient, especially for larger input sizes. Here's a suggested optimization:
std::vector<std::string> filtered_fields;
std::unordered_set<std::string> field_set;
-for (const auto& iter : fields) {
- const std::string& field = iter;
- if (field_set.find(field) == field_set.end()) {
- field_set.insert(field);
- filtered_fields.push_back(iter);
- }
+for (const auto& field : fields) {
+ auto result = field_set.insert(field);
+ if (result.second) {
+ filtered_fields.push_back(field);
+ }
}
This change eliminates the need for an explicit search before insertion, potentially improving performance.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Status Redis::PKHDel(const Slice& key, const std::vector<std::string>& fields, int32_t* ret) {
  uint32_t statistic = 0;
  std::vector<std::string> filtered_fields;
  std::unordered_set<std::string> field_set;
  for (const auto& field : fields) {
    auto result = field_set.insert(field);
    if (result.second) {
      filtered_fields.push_back(field);
    }
  }
  rocksdb::WriteBatch batch;
  rocksdb::ReadOptions read_options;
  const rocksdb::Snapshot* snapshot;
  std::string meta_value;
  int32_t del_cnt = 0;
  uint64_t version = 0;
  ScopeRecordLock l(lock_mgr_, key);
  ScopeSnapshot ss(db_, &snapshot);
  read_options.snapshot = snapshot;
  BaseMetaKey base_meta_key(key);
  Status s = db_->Get(read_options, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
  if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
    if (ExpectedStale(meta_value)) {
      s = Status::NotFound();
    } else {
      return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
                                     ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
                                     ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
    }
  }
  if (s.ok()) {
    ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
    if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.Count() == 0) {
      *ret = 0;
      return Status::OK();
    } else {
      std::string data_value;
      version = parsed_hashes_meta_value.Version();
      for (const auto& field : filtered_fields) {
        HashesDataKey hashes_data_key(key, version, field);
        s = db_->Get(read_options, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
        if (s.ok()) {
          del_cnt++;
          statistic++;
          batch.Delete(handles_[kPKHashDataCF], hashes_data_key.Encode());
        } else if (s.IsNotFound()) {
          continue;
        } else {
          return s;
        }
      }
      *ret = del_cnt;
      if (!parsed_hashes_meta_value.CheckModifyCount(-del_cnt)) {
        return Status::InvalidArgument("hash size overflow");
      }
      parsed_hashes_meta_value.ModifyCount(-del_cnt);
      batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
    }
  } else if (s.IsNotFound()) {
    *ret = 0;
    return Status::OK();
  } else {
    return s;
  }
  s = db_->Write(default_write_options_, &batch);
  UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
  return s;
}
🧰 Tools
🪛 cppcheck
[performance] 554-554: Searching before insertion is not necessary.
(stlFindInsert)
Status Redis::PKHMSet(const Slice& key, const std::vector<FieldValue>& fvs) {
  uint32_t statistic = 0;
  std::unordered_set<std::string> fields;
  std::vector<FieldValue> filtered_fvs;
  for (auto iter = fvs.rbegin(); iter != fvs.rend(); ++iter) {
    std::string field = iter->field;
    if (fields.find(field) == fields.end()) {
      fields.insert(field);
      filtered_fvs.push_back(*iter);
    }
  }

  rocksdb::WriteBatch batch;
  ScopeRecordLock l(lock_mgr_, key);

  uint64_t version = 0;
  std::string meta_value;

  BaseMetaKey base_meta_key(key);
  Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
  char meta_value_buf[4] = {0};
  if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
    if (ExpectedStale(meta_value)) {
      s = Status::NotFound();
    } else {
      return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
                                     ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
                                     ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
    }
  }
  if (s.ok()) {
    ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
    if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.Count() == 0) {
      version = parsed_hashes_meta_value.InitialMetaValue();
      if (!parsed_hashes_meta_value.check_set_count(static_cast<int32_t>(filtered_fvs.size()))) {
        return Status::InvalidArgument("hash size overflow");
      }
      parsed_hashes_meta_value.SetCount(static_cast<int32_t>(filtered_fvs.size()));
      batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
      for (const auto& fv : filtered_fvs) {
        HashesDataKey hashes_data_key(key, version, fv.field);
        PKHashDataValue inter_value(fv.value);
        batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
      }
    } else {
      int32_t count = 0;
      std::string data_value;
      version = parsed_hashes_meta_value.Version();
      for (const auto& fv : filtered_fvs) {
        HashesDataKey hashes_data_key(key, version, fv.field);
        PKHashDataValue inter_value(fv.value);
        s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
        if (s.ok()) {
          statistic++;
          batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
        } else if (s.IsNotFound()) {
          count++;
          batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
        } else {
          return s;
        }
      }
      if (!parsed_hashes_meta_value.CheckModifyCount(count)) {
        return Status::InvalidArgument("hash size overflow");
      }
      parsed_hashes_meta_value.ModifyCount(count);
      batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
    }
  } else if (s.IsNotFound()) {
    EncodeFixed32(meta_value_buf, filtered_fvs.size());
    HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
    version = hashes_meta_value.UpdateVersion();
    batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
    for (const auto& fv : filtered_fvs) {
      HashesDataKey hashes_data_key(key, version, fv.field);
      PKHashDataValue inter_value(fv.value);
      batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
    }
  }
  s = db_->Write(default_write_options_, &batch);
  UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
  return s;
}
🛠️ Refactor suggestion
Optimize field filtering process
The PKHMSet function is well-implemented, correctly handling the setting of multiple fields in a hash. However, there's an opportunity to optimize the field filtering process:
Consider using std::unordered_set::insert directly instead of searching before insertion. This can be more efficient, especially for larger input sizes. Here's a suggested optimization:
std::vector<FieldValue> filtered_fvs;
std::unordered_set<std::string> fields;
-for (auto iter = fvs.rbegin(); iter != fvs.rend(); ++iter) {
- std::string field = iter->field;
- if (fields.find(field) == fields.end()) {
- fields.insert(field);
- filtered_fvs.push_back(*iter);
- }
+for (auto iter = fvs.rbegin(); iter != fvs.rend(); ++iter) {
+ auto result = fields.insert(iter->field);
+ if (result.second) {
+ filtered_fvs.push_back(*iter);
+ }
}
This change eliminates the need for an explicit search before insertion, potentially improving performance.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Status Redis::PKHMSet(const Slice& key, const std::vector<FieldValue>& fvs) {
  uint32_t statistic = 0;
  std::unordered_set<std::string> fields;
  std::vector<FieldValue> filtered_fvs;
  for (auto iter = fvs.rbegin(); iter != fvs.rend(); ++iter) {
    auto result = fields.insert(iter->field);
    if (result.second) {
      filtered_fvs.push_back(*iter);
    }
  }
  rocksdb::WriteBatch batch;
  ScopeRecordLock l(lock_mgr_, key);
  uint64_t version = 0;
  std::string meta_value;
  BaseMetaKey base_meta_key(key);
  Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
  char meta_value_buf[4] = {0};
  if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
    if (ExpectedStale(meta_value)) {
      s = Status::NotFound();
    } else {
      return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
                                     ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
                                     ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
    }
  }
  if (s.ok()) {
    ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
    if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.Count() == 0) {
      version = parsed_hashes_meta_value.InitialMetaValue();
      if (!parsed_hashes_meta_value.check_set_count(static_cast<int32_t>(filtered_fvs.size()))) {
        return Status::InvalidArgument("hash size overflow");
      }
      parsed_hashes_meta_value.SetCount(static_cast<int32_t>(filtered_fvs.size()));
      batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
      for (const auto& fv : filtered_fvs) {
        HashesDataKey hashes_data_key(key, version, fv.field);
        PKHashDataValue inter_value(fv.value);
        batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
      }
    } else {
      int32_t count = 0;
      std::string data_value;
      version = parsed_hashes_meta_value.Version();
      for (const auto& fv : filtered_fvs) {
        HashesDataKey hashes_data_key(key, version, fv.field);
        PKHashDataValue inter_value(fv.value);
        s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
        if (s.ok()) {
          statistic++;
          batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
        } else if (s.IsNotFound()) {
          count++;
          batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
        } else {
          return s;
        }
      }
      if (!parsed_hashes_meta_value.CheckModifyCount(count)) {
        return Status::InvalidArgument("hash size overflow");
      }
      parsed_hashes_meta_value.ModifyCount(count);
      batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
    }
  } else if (s.IsNotFound()) {
    EncodeFixed32(meta_value_buf, filtered_fvs.size());
    HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
    version = hashes_meta_value.UpdateVersion();
    batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
    for (const auto& fv : filtered_fvs) {
      HashesDataKey hashes_data_key(key, version, fv.field);
      PKHashDataValue inter_value(fv.value);
      batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), inter_value.Encode());
    }
  }
  s = db_->Write(default_write_options_, &batch);
  UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
  return s;
}
🧰 Tools
🪛 cppcheck
[performance] 763-763: Searching before insertion is not necessary.
(stlFindInsert)
src/storage/src/redis_pkhashes.cc
Outdated
Status Redis::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl) {
  if (ttl <= 0) {
    return Status::InvalidArgument("invalid expire time");
  }

  rocksdb::WriteBatch batch;
  ScopeRecordLock l(lock_mgr_, key);

  int32_t version = 0;
  std::string meta_value;
  uint32_t statistic = 0;

  BaseMetaKey base_meta_key(key);
  Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
  char meta_value_buf[4] = {0};
  // 1. Check whether the type matches and whether the key has expired.
  if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
    if (ExpectedStale(meta_value)) {
      s = Status::NotFound();
    } else {
      return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
                                     ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
                                     ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
    }
  }

  if (s.ok()) {
    ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
    if (parsed_hashes_meta_value.Count() == 0 || parsed_hashes_meta_value.IsStale()) {
      version = parsed_hashes_meta_value.InitialMetaValue();
      parsed_hashes_meta_value.SetCount(1);
      batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
      HashesDataKey data_key(key, version, field);
      PKHashDataValue ehashes_value(value);
      ehashes_value.SetRelativeTimestamp(ttl);
      batch.Put(handles_[kPKHashDataCF], data_key.Encode(), ehashes_value.Encode());
      // *res = 1; // TODO: add return value
    } else {
      version = parsed_hashes_meta_value.Version();
      std::string data_value;
      HashesDataKey hashes_data_key(key, version, field);
      s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
      if (s.ok()) {
        // *res = 0; // TODO: add return value
        // Check whether data_value and value are equal.
        if (data_value == value.ToString()) {
          // TODO(DDD): update the TTL
          return Status::OK();
        } else {
          PKHashDataValue ehashes_value(value);
          ehashes_value.SetRelativeTimestamp(ttl);
          batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
          statistic++;
        }

      } else if (s.IsNotFound()) {
        if (!parsed_hashes_meta_value.CheckModifyCount(1)) {
          return Status::InvalidArgument("hash size overflow");
        }
        parsed_hashes_meta_value.ModifyCount(1);
        batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
        PKHashDataValue ehashes_value(value);
        ehashes_value.SetRelativeTimestamp(ttl);
        batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
        // *res = 1; // TODO: add return value
      } else {
        return s;
      }
    }
  } else if (s.IsNotFound()) {
    // char str[4];
    EncodeFixed32(meta_value_buf, 1);
    HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
    version = hashes_meta_value.UpdateVersion();
    batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
    HashesDataKey data_key(key, version, field);
    PKHashDataValue ehashes_value(value);
    ehashes_value.SetRelativeTimestamp(ttl);
    batch.Put(handles_[kPKHashDataCF], data_key.Encode(), ehashes_value.Encode());
  } else {
    return s;
  }

  return db_->Write(default_write_options_, &batch);
}
Address TODO comments and potential TTL issue
The PKHSetex function is generally well-implemented, handling various scenarios correctly. However, there are a few issues to address:
1. There are TODO comments that need to be resolved:
   - Line 492: // *res = 1; // TODO: add return value
   - Line 499: // *res = 0; // TODO: add return value
   - Line 502: // TODO(DDD): update the TTL
2. There's a potential issue with TTL handling when updating an existing field. When the new value is equal to the existing value (lines 501-503), the function returns early without updating the TTL.
Consider addressing these issues:
1. Implement the TODO items, adding a return value mechanism if needed.
2. Update the TTL even when the value remains the same:
if (data_value == value.ToString()) {
- // TODO(DDD): update the TTL
- return Status::OK();
+ PKHashDataValue ehashes_value(value);
+ ehashes_value.SetRelativeTimestamp(ttl);
+ batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
+ statistic++;
} else {
PKHashDataValue ehashes_value(value);
ehashes_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
statistic++;
}
These changes will ensure that the TTL is always updated and improve the overall functionality of the method.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Status Redis::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl) {
  if (ttl <= 0) {
    return Status::InvalidArgument("invalid expire time");
  }
  rocksdb::WriteBatch batch;
  ScopeRecordLock l(lock_mgr_, key);
  int32_t version = 0;
  std::string meta_value;
  uint32_t statistic = 0;
  BaseMetaKey base_meta_key(key);
  Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
  char meta_value_buf[4] = {0};
  // 1. Check whether the type matches and whether the key has expired.
  if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
    if (ExpectedStale(meta_value)) {
      s = Status::NotFound();
    } else {
      return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
                                     ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
                                     ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
    }
  }
  if (s.ok()) {
    ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
    if (parsed_hashes_meta_value.Count() == 0 || parsed_hashes_meta_value.IsStale()) {
      version = parsed_hashes_meta_value.InitialMetaValue();
      parsed_hashes_meta_value.SetCount(1);
      batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
      HashesDataKey data_key(key, version, field);
      PKHashDataValue ehashes_value(value);
      ehashes_value.SetRelativeTimestamp(ttl);
      batch.Put(handles_[kPKHashDataCF], data_key.Encode(), ehashes_value.Encode());
      // *res = 1; // TODO: add return value
    } else {
      version = parsed_hashes_meta_value.Version();
      std::string data_value;
      HashesDataKey hashes_data_key(key, version, field);
      s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
      if (s.ok()) {
        // *res = 0; // TODO: add return value
        // Check whether data_value and value are equal.
        if (data_value == value.ToString()) {
          PKHashDataValue ehashes_value(value);
          ehashes_value.SetRelativeTimestamp(ttl);
          batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
          statistic++;
        } else {
          PKHashDataValue ehashes_value(value);
          ehashes_value.SetRelativeTimestamp(ttl);
          batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
          statistic++;
        }
      } else if (s.IsNotFound()) {
        if (!parsed_hashes_meta_value.CheckModifyCount(1)) {
          return Status::InvalidArgument("hash size overflow");
        }
        parsed_hashes_meta_value.ModifyCount(1);
        batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
        PKHashDataValue ehashes_value(value);
        ehashes_value.SetRelativeTimestamp(ttl);
        batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
        // *res = 1; // TODO: add return value
      } else {
        return s;
      }
    }
  } else if (s.IsNotFound()) {
    // char str[4];
    EncodeFixed32(meta_value_buf, 1);
    HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
    version = hashes_meta_value.UpdateVersion();
    batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
    HashesDataKey data_key(key, version, field);
    PKHashDataValue ehashes_value(value);
    ehashes_value.SetRelativeTimestamp(ttl);
    batch.Put(handles_[kPKHashDataCF], data_key.Encode(), ehashes_value.Encode());
  } else {
    return s;
  }
  return db_->Write(default_write_options_, &batch);
}
src/storage/src/redis_pkhashes.cc
Outdated
  uint32_t statistic = 0;
  std::unordered_set<std::string> fields;
  std::vector<FieldValueTTL> filtered_fvs;
  for (auto iter = fvts.rbegin(); iter != fvts.rend(); ++iter) {
    std::string field = iter->field;
    if (fields.find(field) == fields.end()) {
      fields.insert(field);
      filtered_fvs.push_back(*iter);
    }
  }

  rocksdb::WriteBatch batch;
  ScopeRecordLock l(lock_mgr_, key);

  int32_t version = 0;
  std::string meta_value;

  BaseMetaKey base_meta_key(key);
  Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
  char meta_value_buf[4] = {0};
  if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
    if (ExpectedStale(meta_value)) {
      s = Status::NotFound();
    } else {
      return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
                                     ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
                                     ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
    }
  }

  if (s.ok()) {
    ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
    if (parsed_hashes_meta_value.Count() == 0 || parsed_hashes_meta_value.IsStale()) {
      version = parsed_hashes_meta_value.InitialMetaValue();
      if (!parsed_hashes_meta_value.check_set_count(static_cast<int32_t>(filtered_fvs.size()))) {
        return Status::InvalidArgument("hash size overflow");
      }
      parsed_hashes_meta_value.SetCount(static_cast<int32_t>(filtered_fvs.size()));
      // parsed_hashes_meta_value.set_timestamp(0);
      batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
      for (const auto& fv : filtered_fvs) {
        HashesDataKey hashes_data_key(key, version, fv.field);
        PKHashDataValue ehashes_value(fv.value);
        if (fv.ttl > 0) {
          ehashes_value.SetRelativeTimestamp(fv.ttl);
        }
        batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
      }
    } else {
      int32_t count = 0;
      std::string data_value;
      version = parsed_hashes_meta_value.Version();
      for (const auto& fv : filtered_fvs) {
        HashesDataKey hashes_data_key(key, version, fv.field);
        s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
        if (s.ok()) {
          statistic++;
          PKHashDataValue ehashes_value(fv.value);
          if (fv.ttl > 0) {
            ehashes_value.SetRelativeTimestamp(fv.ttl);
          }
          batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
        } else if (s.IsNotFound()) {
          count++;
          PKHashDataValue ehashes_value(fv.value);
          if (fv.ttl > 0) {
            ehashes_value.SetRelativeTimestamp(fv.ttl);
          }
          batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
        } else {
          return s;
        }
      }

      if (!parsed_hashes_meta_value.CheckModifyCount(count)) {
        return Status::InvalidArgument("hash size overflow");
      }

      parsed_hashes_meta_value.ModifyCount(count);
      batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
    }
  } else if (s.IsNotFound()) {
    // char str[4];
    EncodeFixed32(meta_value_buf, filtered_fvs.size());
    HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
    version = hashes_meta_value.UpdateVersion();
    batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
    for (const auto& fv : filtered_fvs) {
      HashesDataKey hashes_data_key(key, version, fv.field);
      PKHashDataValue ehashes_value(fv.value);
      if (fv.ttl > 0) {
        ehashes_value.SetRelativeTimestamp(fv.ttl);
      }
      batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
    }
  }
  s = db_->Write(default_write_options_, &batch);
  UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
  return s;
}
🛠️ Refactor suggestion
Optimize field filtering process
The `PKHMSetex` function is well-implemented, correctly handling the setting of multiple fields with TTLs in a hash. However, there's an opportunity to optimize the field filtering process:
Consider using `std::unordered_set::insert` directly instead of searching before insertion. This can be more efficient, especially for larger input sizes. Here's a suggested optimization:
std::vector<FieldValueTTL> filtered_fvs;
std::unordered_set<std::string> fields;
-for (auto iter = fvts.rbegin(); iter != fvts.rend(); ++iter) {
- std::string field = iter->field;
- if (fields.find(field) == fields.end()) {
- fields.insert(field);
- filtered_fvs.push_back(*iter);
- }
+for (auto iter = fvts.rbegin(); iter != fvts.rend(); ++iter) {
+ auto result = fields.insert(iter->field);
+ if (result.second) {
+ filtered_fvs.push_back(*iter);
+ }
}
This change eliminates the need for an explicit search before insertion, potentially improving performance.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Status Redis::PKHMSetex(const Slice& key, const std::vector<FieldValueTTL>& fvts) {
  uint32_t statistic = 0;
  std::unordered_set<std::string> fields;
  std::vector<FieldValueTTL> filtered_fvs;
  for (auto iter = fvts.rbegin(); iter != fvts.rend(); ++iter) {
    auto result = fields.insert(iter->field);
    if (result.second) {
      filtered_fvs.push_back(*iter);
    }
  }
  rocksdb::WriteBatch batch;
  ScopeRecordLock l(lock_mgr_, key);
  int32_t version = 0;
  std::string meta_value;
  BaseMetaKey base_meta_key(key);
  Status s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
  char meta_value_buf[4] = {0};
  if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
    if (ExpectedStale(meta_value)) {
      s = Status::NotFound();
    } else {
      return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
                                     ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
                                     ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
    }
  }
  if (s.ok()) {
    ParsedHashesMetaValue parsed_hashes_meta_value(&meta_value);
    if (parsed_hashes_meta_value.Count() == 0 || parsed_hashes_meta_value.IsStale()) {
      version = parsed_hashes_meta_value.InitialMetaValue();
      if (!parsed_hashes_meta_value.check_set_count(static_cast<int32_t>(filtered_fvs.size()))) {
        return Status::InvalidArgument("hash size overflow");
      }
      parsed_hashes_meta_value.SetCount(static_cast<int32_t>(filtered_fvs.size()));
      // parsed_hashes_meta_value.set_timestamp(0);
      batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
      for (const auto& fv : filtered_fvs) {
        HashesDataKey hashes_data_key(key, version, fv.field);
        PKHashDataValue ehashes_value(fv.value);
        if (fv.ttl > 0) {
          ehashes_value.SetRelativeTimestamp(fv.ttl);
        }
        batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
      }
    } else {
      int32_t count = 0;
      std::string data_value;
      version = parsed_hashes_meta_value.Version();
      for (const auto& fv : filtered_fvs) {
        HashesDataKey hashes_data_key(key, version, fv.field);
        s = db_->Get(default_read_options_, handles_[kPKHashDataCF], hashes_data_key.Encode(), &data_value);
        if (s.ok()) {
          statistic++;
          PKHashDataValue ehashes_value(fv.value);
          if (fv.ttl > 0) {
            ehashes_value.SetRelativeTimestamp(fv.ttl);
          }
          batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
        } else if (s.IsNotFound()) {
          count++;
          PKHashDataValue ehashes_value(fv.value);
          if (fv.ttl > 0) {
            ehashes_value.SetRelativeTimestamp(fv.ttl);
          }
          batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
        } else {
          return s;
        }
      }
      if (!parsed_hashes_meta_value.CheckModifyCount(count)) {
        return Status::InvalidArgument("hash size overflow");
      }
      parsed_hashes_meta_value.ModifyCount(count);
      batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
    }
  } else if (s.IsNotFound()) {
    // char str[4];
    EncodeFixed32(meta_value_buf, filtered_fvs.size());
    HashesMetaValue hashes_meta_value(DataType::kPKHashes, Slice(meta_value_buf, 4));
    version = hashes_meta_value.UpdateVersion();
    batch.Put(handles_[kMetaCF], base_meta_key.Encode(), hashes_meta_value.Encode());
    for (const auto& fv : filtered_fvs) {
      HashesDataKey hashes_data_key(key, version, fv.field);
      PKHashDataValue ehashes_value(fv.value);
      if (fv.ttl > 0) {
        ehashes_value.SetRelativeTimestamp(fv.ttl);
      }
      batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), ehashes_value.Encode());
    }
  }
  s = db_->Write(default_write_options_, &batch);
  UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
  return s;
}
🧰 Tools
🪛 cppcheck
[performance] 847-847: Searching before insertion is not necessary.
(stlFindInsert)
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() + ", expect type: " +
                               DataTypeStrings[static_cast<int>(DataType::kStrings)] + ", get type: " +
                               DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
🛠️ Refactor suggestion
Consider using a formatting library for error messages.
As suggested earlier, using a formatting library like `fmt` or `std::format` (C++20) would improve performance and maintainability of error messages throughout the codebase.
Example using fmt:
return Status::InvalidArgument(fmt::format("WRONGTYPE, key: {}, expect type: {}, get type: {}",
key.ToString(),
DataTypeStrings[static_cast<int>(DataType::kStrings)],
DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]));
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
                               ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kStrings)] +
                               ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(value))]);
🛠️ Refactor suggestion
Consider using a formatting library for error messages.
As suggested earlier, using a formatting library like `fmt` or `std::format` (C++20) would improve performance and maintainability of error messages throughout the codebase.
Example using fmt:
return Status::InvalidArgument(fmt::format("WRONGTYPE, key: {}, expect type: {}, get type: {}",
key.ToString(),
DataTypeStrings[static_cast<int>(DataType::kStrings)],
DataTypeStrings[static_cast<int>(GetMetaValueType(value))]));
return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
                               ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kStrings)] +
                               ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
🛠️ Refactor suggestion
Consider using a formatting library for error messages.
As suggested earlier, using a formatting library like `fmt` or `std::format` (C++20) would improve performance and maintainability of error messages throughout the codebase.
Example using fmt:
return Status::InvalidArgument(fmt::format("WRONGTYPE, key: {}, expect type: {}, get type: {}",
key.ToString(),
DataTypeStrings[static_cast<int>(DataType::kStrings)],
DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]));
};

}  // namespace storage
#endif  // SRC_BASE_VALUE_FORMAT_H_
Inconsistent header guard ending comment.
The `#endif` comment at the end of the file does not match the header guard defined at the beginning. The mismatch can lead to confusion and potential issues with conditional compilation.
- Header guard starts with:
#ifndef SRC_PKHASH_DATA_VALUE_FORMAT_H_
#define SRC_PKHASH_DATA_VALUE_FORMAT_H_
- Header guard ends with:
#endif // SRC_BASE_VALUE_FORMAT_H_
Apply this diff to correct the header guard ending comment:
-#endif // SRC_BASE_VALUE_FORMAT_H_
+#endif // SRC_PKHASH_DATA_VALUE_FORMAT_H_
src/storage/tests/pkhashes_test.cc
Outdated
// fields.clear();
// std::map<storage::DataType, rocksdb::Status> type_status;
// db.Expire("B_HKEYS_KEY", 1);
// ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
// std::this_thread::sleep_for(std::chrono::milliseconds(2000));
// s = db.PKHKeys("B_HKEYS_KEY", &fields);
// ASSERT_TRUE(s.IsNotFound());
// ASSERT_EQ(fields.size(), 0);

// // PKHKeys not exist hash table
// fields.clear();
// s = db.PKHKeys("HKEYS_NOT_EXIST_KEY", &fields);
// ASSERT_TRUE(s.IsNotFound());
// ASSERT_EQ(fields.size(), 0);
// }

// // PKHVals
// TEST_F(PKHashesTest, HVals) {
// int32_t ret = 0;
// std::vector<storage::FieldValue> mid_fvs_in;
// mid_fvs_in.push_back({"MID_TEST_FIELD1", "MID_TEST_VALUE1"});
// mid_fvs_in.push_back({"MID_TEST_FIELD2", "MID_TEST_VALUE2"});
// mid_fvs_in.push_back({"MID_TEST_FIELD3", "MID_TEST_VALUE3"});
// s = db.PKHMSet("B_HVALS_KEY", mid_fvs_in);
// ASSERT_TRUE(s.ok());

// std::vector<std::string> values;
// s = db.PKHVals("B_HVALS_KEY", &values);
// ASSERT_TRUE(s.ok());
// ASSERT_EQ(values.size(), 3);
// ASSERT_EQ(values[0], "MID_TEST_VALUE1");
// ASSERT_EQ(values[1], "MID_TEST_VALUE2");
// ASSERT_EQ(values[2], "MID_TEST_VALUE3");

// // Insert some kv who's position above "mid kv"
// std::vector<storage::FieldValue> pre_fvs_in;
// pre_fvs_in.push_back({"PRE_TEST_FIELD1", "PRE_TEST_VALUE1"});
// pre_fvs_in.push_back({"PRE_TEST_FIELD2", "PRE_TEST_VALUE2"});
// pre_fvs_in.push_back({"PRE_TEST_FIELD3", "PRE_TEST_VALUE3"});
// s = db.PKHMSet("A_HVALS_KEY", pre_fvs_in);
// ASSERT_TRUE(s.ok());
// values.clear();
// s = db.PKHVals("B_HVALS_KEY", &values);
// ASSERT_TRUE(s.ok());
// ASSERT_EQ(values.size(), 3);
// ASSERT_EQ(values[0], "MID_TEST_VALUE1");
// ASSERT_EQ(values[1], "MID_TEST_VALUE2");
// ASSERT_EQ(values[2], "MID_TEST_VALUE3");

// // Insert some kv who's position below "mid kv"
// std::vector<storage::FieldValue> suf_fvs_in;
// suf_fvs_in.push_back({"SUF_TEST_FIELD1", "SUF_TEST_VALUE1"});
// suf_fvs_in.push_back({"SUF_TEST_FIELD2", "SUF_TEST_VALUE2"});
// suf_fvs_in.push_back({"SUF_TEST_FIELD3", "SUF_TEST_VALUE3"});
// s = db.PKHMSet("C_HVALS_KEY", suf_fvs_in);
// ASSERT_TRUE(s.ok());
// values.clear();
// s = db.PKHVals("B_HVALS_KEY", &values);
// ASSERT_TRUE(s.ok());
// ASSERT_EQ(values.size(), 3);
// ASSERT_EQ(values[0], "MID_TEST_VALUE1");
// ASSERT_EQ(values[1], "MID_TEST_VALUE2");
// ASSERT_EQ(values[2], "MID_TEST_VALUE3");

// // PKHVals timeout hash table
// values.clear();
// std::map<storage::DataType, rocksdb::Status> type_status;
// db.Expire("B_HVALS_KEY", 1);
// ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
// std::this_thread::sleep_for(std::chrono::milliseconds(2000));
// s = db.PKHVals("B_HVALS_KEY", &values);
// ASSERT_TRUE(s.IsNotFound());
// ASSERT_EQ(values.size(), 0);

// // PKHVals not exist hash table
// values.clear();
// s = db.PKHVals("HVALS_NOT_EXIST_KEY", &values);
// ASSERT_TRUE(s.IsNotFound());
// ASSERT_EQ(values.size(), 0);
// }

// // PKHStrlen
// TEST_F(PKHashesTest, HStrlenTest) {
// int32_t ret = 0;
// int32_t len = 0;
// s = db.PKHSet("HSTRLEN_KEY", "HSTRLEN_TEST_FIELD", "HSTRLEN_TEST_VALUE", &ret);
// ASSERT_TRUE(s.ok());
// ASSERT_EQ(ret, 1);

// s = db.PKHStrlen("HSTRLEN_KEY", "HSTRLEN_TEST_FIELD", &len);
// ASSERT_TRUE(s.ok());
// ASSERT_EQ(len, 18);

// // If the key or the field do not exist, 0 is returned
// s = db.PKHStrlen("HSTRLEN_KEY", "HSTRLEN_NOT_EXIST_FIELD", &len);
// ASSERT_TRUE(s.IsNotFound());
// ASSERT_EQ(len, 0);
// }

// PKHScan
🛠️ Refactor suggestion
Uncomment or remove inactive test cases
A large block of test cases from lines 109 to 836 is commented out. If these tests are intended to be part of the test suite, consider uncommenting them. If they are obsolete or not ready for inclusion, consider removing them or adding explanations for future reference.
Compare 87eee4b to 378f5a3
Caution
Inline review comments failed to post
Actionable comments posted: 30
🧹 Outside diff range and nitpick comments (20)
src/pika_pkhash.cc (1)
182-184: Address TODO comment and adjust fields assignment
There's a TODO comment that needs to be addressed. Additionally, the `fields_` assignment needs to be adjusted due to the removal of unnecessary iterator increments.
Consider implementing the TODO and modifying the assignment:
- // TODO(DDD) fields_.assign(argv_.begin() + 4, argv_.end());
- fields_.assign(iter, argv_.end());
+ fields_.assign(argv_.begin() + 4, argv_.end());
Would you like me to create a GitHub issue to track this TODO?
src/pika_command.cc (2)
Line range hint 1110-1137: Suggest performance testing and documentation for cache control changes
The `IsNeedCacheDo` method has been updated with more granular control over caching for different data types. While this is likely an improvement in cache management, please ensure the following:
- Perform performance testing to verify that these additional checks don't introduce significant overhead, especially in scenarios with mixed data types.
- Update the documentation to reflect these new caching controls, including how to configure caching for each data type and the impact of the `IsCacheDisabledTemporarily` flag.
- Consider adding a comment in the code explaining the overall caching strategy and how these checks fit into it.
Consider refactoring this method to use a lookup table or switch statement if the number of data types continues to grow, to improve readability and maintainability.
Line range hint 1-1210: Overall review summary and suggestions
The changes to `src/pika_command.cc` primarily focus on adding PKHash commands and enhancing caching functionality. While the implementation generally looks good, here are some final suggestions:
- Ensure consistency in command flags across all new and modified commands.
- Resolve all TODO comments and translate any remaining non-English comments to English.
- Add or update documentation for the new PKHash commands and caching behavior.
- Consider adding more inline comments to explain complex logic, especially in caching-related methods.
- Perform thorough testing of the new PKHash commands and caching functionality, including edge cases and performance testing.
- Review error handling and edge cases for the new functionality to ensure robustness.
- Update any relevant external documentation or README files to reflect these new features and changes.
As the command table continues to grow, consider if there's an opportunity to refactor the `InitCmdTable` function to make it more maintainable, possibly by grouping commands by type or using a data-driven approach to initialize commands.
src/storage/src/redis_pkhashes.cc (1)
853-853: Remove outdated TODO comment in `PKHMSetex`
The TODO comment indicates adding expiration times to fields, but the `PKHMSetex` function already implements this functionality. Removing the comment will keep the code clean and up-to-date.
Apply this diff to remove the unnecessary comment (the Chinese reads, roughly, "field carries an expiration time"):
-// TODO(DDD) field 带有过期时间。
src/storage/tests/pkhashes_test.cc (1)
575-575: Translate comments to English for consistency
The comment `// 从这里开始有问题。` (roughly, "problems start from here") is in Chinese. To maintain code readability and inclusivity for all team members, please translate comments to English.
Line range hint
191-203
: Refactor the loop to improve readability and maintainability.The loop body contains repetitive code for handling different cases based on the status. Consider extracting the common logic into a separate function or using a more expressive construct like a switch statement.
Here's a suggested refactoring:
for (const auto& key : keys) { auto& inst = GetDBInstance(key); std::string value; Status s = inst->MGet(key, &value); if (s.ok()) { vss->push_back({value, Status::OK()}); } else { vss->push_back({std::string(), s}); if (!s.IsNotFound()) { vss->clear(); return s; } } }
Line range hint 210-225: Refactor the loop to improve readability and maintainability.
The loop body contains repetitive code for handling different cases based on the status. Consider extracting the common logic into a separate function or using a more expressive construct like a switch statement.
Here's a suggested refactoring:
for (const auto& key : keys) {
  auto& inst = GetDBInstance(key);
  std::string value;
  int64_t ttl;
  Status s = inst->MGetWithTTL(key, &value, &ttl);
  if (s.ok()) {
    vss->push_back({value, Status::OK(), ttl});
  } else {
    vss->push_back({std::string(), s, ttl});
    if (!s.IsNotFound()) {
      vss->clear();
      return s;
    }
  }
}
278-282: Ensure consistent parameter passing style for output parameters.
The `GetrangeWithValue` method uses a mix of pointers and references for output parameters. For consistency with other methods, consider using pointers for all output parameters.
Apply this diff to update the parameter types:
-Status Storage::GetrangeWithValue(const Slice& key, int64_t start_offset, int64_t end_offset, std::string* ret,
-                                  std::string& out_new_value, int64_t* ttl) {
+Status Storage::GetrangeWithValue(const Slice& key, int64_t start_offset, int64_t end_offset, std::string* ret,
+                                  std::string* out_new_value, int64_t* ttl) {
492-496: Return `std::vector<int64_t>` instead of using an output parameter.
To improve code clarity and align with modern C++ practices, consider returning the `timestamps` vector directly instead of using an output parameter.
Apply this diff:
-Status Storage::PKHExpiretime(const Slice& key, int32_t numfields, const std::vector<std::string>& fields,
-                              std::vector<int64_t>* timestamps) {
+std::vector<int64_t> Storage::PKHExpiretime(const Slice& key, int32_t numfields, const std::vector<std::string>& fields) {
+  std::vector<int64_t> timestamps;
   auto& inst = GetDBInstance(key);
-  return inst->PKHExpiretime(key, numfields, fields, timestamps);
+  inst->PKHExpiretime(key, numfields, fields, &timestamps);
+  return timestamps;
498-502: Return `std::vector<int32_t>` instead of using an output parameter.
To improve code clarity and align with modern C++ practices, consider returning the `rets` vector directly instead of using an output parameter.
Apply this diff:
-Status Storage::PKHPersist(const Slice& key, int32_t numfields, const std::vector<std::string>& fields,
-                           std::vector<int32_t>* rets) {
+std::vector<int32_t> Storage::PKHPersist(const Slice& key, int32_t numfields, const std::vector<std::string>& fields) {
+  std::vector<int32_t> rets;
   auto& inst = GetDBInstance(key);
-  return inst->PKHPersist(key, numfields, fields, rets);
+  inst->PKHPersist(key, numfields, fields, &rets);
+  return rets;
504-508: Return `std::vector<int64_t>` instead of using an output parameter.
To improve code clarity and align with modern C++ practices, consider returning the `ttls` vector directly instead of using an output parameter.
Apply this diff:
-Status Storage::PKHTTL(const Slice& key, int32_t numfields, const std::vector<std::string>& fields,
-                       std::vector<int64_t>* ttls) {
+std::vector<int64_t> Storage::PKHTTL(const Slice& key, int32_t numfields, const std::vector<std::string>& fields) {
+  std::vector<int64_t> ttls;
   auto& inst = GetDBInstance(key);
-  return inst->PKHTTL(key, numfields, fields, ttls);
+  inst->PKHTTL(key, numfields, fields, &ttls);
+  return ttls;
510-513: Return `std::string` instead of using an output parameter.
To improve code clarity and align with modern C++ practices, consider returning the `value` string directly instead of using an output parameter.
Apply this diff:
-Status Storage::PKHGet(const Slice& key, const Slice& field, std::string* value) {
+std::string Storage::PKHGet(const Slice& key, const Slice& field) {
+  std::string value;
   auto& inst = GetDBInstance(key);
-  return inst->PKHGet(key, field, value);
+  inst->PKHGet(key, field, &value);
+  return value;
515-518: Consider returning the `res` value directly.
Since the `res` parameter is an output parameter, consider returning its value directly instead of returning a `Status` object. This would align with the typical convention of returning the result value for methods that perform a simple operation.
Apply this diff:
-Status Storage::PKHSet(const Slice& key, const Slice& field, const Slice& value, int32_t* res) {
+int32_t Storage::PKHSet(const Slice& key, const Slice& field, const Slice& value) {
+  int32_t res;
   auto& inst = GetDBInstance(key);
-  return inst->PKHSet(key, field, value, res);
+  inst->PKHSet(key, field, value, &res);
+  return res;
525-528: Consider returning a bool value instead of a Status.
Since the `PKHExists` method checks for the existence of a field in a hash key, it would be more intuitive to return a boolean value indicating the existence rather than a `Status` object. This would align with the typical convention of returning a boolean for existence checks.
Apply this diff:
-Status Storage::PKHExists(const Slice& key, const Slice& field) {
+bool Storage::PKHExists(const Slice& key, const Slice& field) {
   auto& inst = GetDBInstance(key);
-  return inst->PKHExists(key, field);
+  return inst->PKHExists(key, field).ok();
530-533: Consider returning the `ret` value directly.
Since the `ret` parameter is an output parameter, consider returning its value directly instead of returning a `Status` object. This would align with the typical convention of returning the result value for methods that perform a simple operation.
Apply this diff:
-Status Storage::PKHDel(const Slice& key, const std::vector<std::string>& fields, int32_t* ret) {
+int32_t Storage::PKHDel(const Slice& key, const std::vector<std::string>& fields) {
+  int32_t ret;
   auto& inst = GetDBInstance(key);
-  return inst->PKHDel(key, fields, ret);
+  inst->PKHDel(key, fields, &ret);
+  return ret;
535-538: Consider returning the `ret` value directly.
Since the `ret` parameter is an output parameter, consider returning its value directly instead of returning a `Status` object. This would align with the typical convention of returning the result value for methods that perform a simple operation.
Apply this diff:
-Status Storage::PKHLen(const Slice& key, int32_t* ret) {
+int32_t Storage::PKHLen(const Slice& key) {
+  int32_t ret;
   auto& inst = GetDBInstance(key);
-  return inst->PKHLen(key, ret);
+  inst->PKHLen(key, &ret);
+  return ret;
545-548: Consider returning the `len` value directly.
Since the `len` parameter is an output parameter, consider returning its value directly instead of returning a `Status` object. This would align with the typical convention of returning the result value for methods that perform a simple operation.
Apply this diff:
-Status Storage::PKHStrlen(const Slice& key, const Slice& field, int32_t* len) {
+int32_t Storage::PKHStrlen(const Slice& key, const Slice& field) {
+  int32_t len;
   auto& inst = GetDBInstance(key);
-  return inst->PKHStrlen(key, field, len);
+  inst->PKHStrlen(key, field, &len);
+  return len;
570-573: Return `std::vector<std::string>` instead of using an output parameter.
To improve code clarity and align with modern C++ practices, consider returning the `fields` vector directly instead of using an output parameter.
Apply this diff:
-Status Storage::PKHKeys(const Slice& key, std::vector<std::string>* fields) {
+std::vector<std::string> Storage::PKHKeys(const Slice& key) {
+  std::vector<std::string> fields;
   auto& inst = GetDBInstance(key);
-  return inst->PKHKeys(key, fields);
+  inst->PKHKeys(key, &fields);
+  return fields;
575-578: Return `std::vector<std::string>` instead of using an output parameter.
To improve code clarity and align with modern C++ practices, consider returning the `values` vector directly instead of using an output parameter.
Apply this diff:
-Status Storage::PKHVals(const Slice& key, std::vector<std::string>* values) {
+std::vector<std::string> Storage::PKHVals(const Slice& key) {
+  std::vector<std::string> values;
   auto& inst = GetDBInstance(key);
-  return inst->PKHVals(key, values);
+  inst->PKHVals(key, &values);
+  return values;
580-583: Return `std::vector<FieldValueTTL>` instead of using an output parameter.

To improve code clarity and align with modern C++ practices, consider returning the `fvts` vector directly instead of using an output parameter. Apply this diff:
-Status Storage::PKHGetall(const Slice& key, std::vector
</blockquote></details> </blockquote></details> <details> <summary>🛑 Comments failed to post (30)</summary><blockquote> <details> <summary>src/pika_pkhash.cc (15)</summary><blockquote>

174-178: 🛠️ Refactor suggestion: Remove unnecessary iterator increments

The iterator `iter` is incremented multiple times without being used, which doesn't affect the logic but reduces code clarity. Consider removing these unnecessary increments:

```diff
-iter++;
-iter++;
-iter++;
-iter++;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {
140-144: 🛠️ Refactor suggestion
Remove unnecessary iterator increments
The iterator `iter` is incremented multiple times without being used, which doesn't affect the logic but reduces code clarity. Consider removing these unnecessary increments:
-iter++; -iter++; -iter++; -iter++;📝 Committable suggestion
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {
105-109: 🛠️ Refactor suggestion
Remove unnecessary iterator increments
The iterator `iter` is incremented multiple times without being used, which doesn't affect the logic but reduces code clarity. Consider removing these unnecessary increments:
-iter++; -iter++; -iter++; -iter++;📝 Committable suggestion
68-73: 🛠️ Refactor suggestion
Remove unnecessary iterator increments
The iterator `iter` is incremented multiple times without being used, which doesn't affect the logic but reduces code clarity. Consider removing these unnecessary increments:
-iter++; -iter++; -iter++; -iter++; -iter++;📝 Committable suggestion
28-32: 🛠️ Refactor suggestion
Remove unnecessary iterator increments
The iterator `iter` is incremented multiple times without being used, which doesn't affect the logic but reduces code clarity. Consider removing these unnecessary increments:
-iter++; -iter++; -iter++; -iter++; -iter++;📝 Committable suggestion
79-80:
⚠️ Potential issue: Adjust fields assignment

Due to the removal of unnecessary iterator increments, the `fields_` assignment needs to be adjusted to ensure correct parsing. Consider modifying the assignment to:
-fields_.assign(iter, argv_.end()); +fields_.assign(argv_.begin() + 5, argv_.end());📝 Committable suggestion
fields_.assign(argv_.begin() + 5, argv_.end()); }
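As a general illustration of the offset-based style these assignment fixes move toward, here is a minimal, self-contained sketch. The function and index names below are illustrative, not Pika's actual members; the point is that a fixed offset into `argv` is easier to audit than a manually advanced iterator.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Given argv = {"PKHEXPIRE", key, ttl, "FIELDS", numfields, f1, f2, ...},
// take the trailing field names starting at a fixed, named offset instead
// of advancing an iterator step by step.
std::vector<std::string> TailFields(const std::vector<std::string>& argv,
                                    std::size_t first_field_index) {
  if (argv.size() <= first_field_index) {
    return {};  // no fields present
  }
  return {argv.begin() + static_cast<std::ptrdiff_t>(first_field_index),
          argv.end()};
}
```

With this shape, the offset (`5` for commands carrying a TTL, `4` otherwise) appears once, next to the comment describing the argument layout.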
115-116:
⚠️ Potential issue: Adjust fields assignment

Due to the removal of unnecessary iterator increments, the `fields_` assignment needs to be adjusted to ensure correct parsing. Consider modifying the assignment to:
-fields_.assign(iter, argv_.end()); +fields_.assign(argv_.begin() + 4, argv_.end());📝 Committable suggestion
fields_.assign(argv_.begin() + 4, argv_.end()); }
149-150:
⚠️ Potential issue: Adjust fields assignment

Due to the removal of unnecessary iterator increments, the `fields_` assignment needs to be adjusted to ensure correct parsing. Consider modifying the assignment to:
-fields_.assign(iter, argv_.end()); +fields_.assign(argv_.begin() + 4, argv_.end());📝 Committable suggestion
fields_.assign(argv_.begin() + 4, argv_.end()); }
63-66:
⚠️ Potential issue: Improve error handling in timestamp parsing
The condition for parsing the timestamp should be adjusted to correctly detect parsing errors.
Consider modifying the condition to:
-if (pstd::string2int(argv_[2].data(), argv_[2].size(), &timestamp_) == 0) { +if (pstd::string2int(argv_[2].data(), argv_[2].size(), &timestamp_) != 1) {

📝 Committable suggestion
if (pstd::string2int(argv_[2].data(), argv_[2].size(), &timestamp_) != 1) { res_.SetRes(CmdRes::kInvalidInt); return; }
74-77:
⚠️ Potential issue: Improve error handling in numfields parsing

Similar to the timestamp parsing, the condition for parsing `numfields_` should be adjusted to correctly detect parsing errors. Consider modifying the condition to:
-if (pstd::string2int(argv_[4].data(), argv_[4].size(), &numfields_) == 0) { +if (pstd::string2int(argv_[4].data(), argv_[4].size(), &numfields_) != 1) {📝 Committable suggestion
if (pstd::string2int(argv_[4].data(), argv_[4].size(), &numfields_) != 1) { res_.SetRes(CmdRes::kInvalidInt); return; }
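`pstd::string2int` is Pika's own helper, so its exact return convention lives in that codebase; as a general illustration of why a parse helper should make failure unambiguous, here is a sketch using standard `std::from_chars` as a stand-in. A strict parser must not let "parsed the value zero" and "failed to parse" look the same to the caller.

```cpp
#include <charconv>
#include <cstdint>
#include <string_view>

// Returns true only when the ENTIRE string is a valid signed integer.
// Trailing junk ("4x") and empty input are reported as failures, so the
// caller's error branch (e.g. CmdRes::kInvalidInt) fires reliably.
bool ParseInt64Strict(std::string_view s, int64_t* out) {
  auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), *out);
  return ec == std::errc() && ptr == s.data() + s.size();
}
```

The same principle applies to the `== 0` vs `!= 1` checks discussed above: whichever convention the helper uses, the call site must test the value that unambiguously means "success".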
144-147:
⚠️ Potential issue: Improve error handling in numfields parsing

The condition for parsing `numfields_` should be adjusted to correctly detect parsing errors. Consider modifying the condition to:
-if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) { +if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) != 1) {📝 Committable suggestion
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) != 1) { res_.SetRes(CmdRes::kInvalidInt); return; }
178-181:
⚠️ Potential issue: Improve error handling in numfields parsing

The condition for parsing `numfields_` should be adjusted to correctly detect parsing errors. Consider modifying the condition to:
-if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) { +if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) != 1) {📝 Committable suggestion
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) != 1) { res_.SetRes(CmdRes::kInvalidInt); return; }
110-113:
⚠️ Potential issue: Improve error handling in numfields parsing

The condition for parsing `numfields_` should be adjusted to correctly detect parsing errors. Consider modifying the condition to:
-if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) { +if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) != 1) {📝 Committable suggestion
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) != 1) { res_.SetRes(CmdRes::kInvalidInt); return; }
271-279:
⚠️ Potential issue: Add expiration time parsing to PKHSetexCmd::DoInitial

The `PKHSetexCmd::DoInitial` function is missing the parsing of the expiration time, which is a crucial part of the Setex command. Consider adding the expiration time parsing:
void PKHSetexCmd::DoInitial() { if (!CheckArg(argv_.size())) { res_.SetRes(CmdRes::kWrongNum, kCmdNamePKHSetex); return; } key_ = argv_[1]; - field_ = argv_[2]; - value_ = argv_[3]; + if (pstd::string2int(argv_[2].data(), argv_[2].size(), &ttl_) != 1) { + res_.SetRes(CmdRes::kInvalidInt); + return; + } + field_ = argv_[3]; + value_ = argv_[4]; }📝 Committable suggestion
void PKHSetexCmd::DoInitial() { if (!CheckArg(argv_.size())) { res_.SetRes(CmdRes::kWrongNum, kCmdNamePKHSetex); return; } key_ = argv_[1]; if (pstd::string2int(argv_[2].data(), argv_[2].size(), &ttl_) != 1) { res_.SetRes(CmdRes::kInvalidInt); return; } field_ = argv_[3]; value_ = argv_[4]; }
281-292:
⚠️ Potential issue: Implement proper PKHSetex functionality

The `PKHSetexCmd::Do` function is not implementing the Setex functionality correctly. It's using `PKHSet` instead of a `PKHSetex` method, and it's not using the expiration time. Consider implementing the correct Setex functionality:

- Implement a `PKHSetex` method in the storage engine if it doesn't exist.
- Use the `PKHSetex` method instead of `PKHSet`.
- Pass the expiration time to the method.

Example implementation:
void PKHSetexCmd::Do() { int32_t ret = 0; - s_ = db_->storage()->PKHSet(key_, field_, value_, &ret); + s_ = db_->storage()->PKHSetex(key_, ttl_, field_, value_, &ret); if (s_.ok()) { res_.AppendContent(":" + std::to_string(ret)); AddSlotKey("h", key_, db_); } else if (s_.IsInvalidArgument()) { res_.SetRes(CmdRes::kMultiKey); + } else if (s_.IsNotFound()) { + res_.SetRes(CmdRes::kNotFound); + } else if (s_.IsCorruption()) { + res_.SetRes(CmdRes::kErrOther, "Corruption error: " + s_.ToString()); } else { res_.SetRes(CmdRes::kErrOther, s_.ToString()); } }Also, ensure that the
`ttl_` member is added to the `PKHSetexCmd` class and properly initialized in the `DoInitial` method.

Committable suggestion was skipped due to low confidence.
src/storage/include/storage/storage.h (3)
415-463: 🛠️ Refactor suggestion
New PKH methods look good, but consider removing redundant parameter
The new Pika Hash (PKH) methods are a comprehensive addition to the storage system, providing a wide range of operations for hash data structures. The naming convention is consistent, and the functionality appears to cover all necessary operations.
However, there's a potential optimization:
Consider removing the `numfields` parameter from methods like `PKHExpire`, `PKHExpireat`, etc. This parameter is redundant as the number of fields can be obtained from `fields.size()`. This change would simplify the method signatures and prevent potential inconsistencies. For example, change:

Status PKHExpire(const Slice& key, int32_t ttl, int32_t numfields, const std::vector<std::string>& fields, std::vector<int32_t>* rets);

to:

Status PKHExpire(const Slice& key, int32_t ttl, const std::vector<std::string>& fields, std::vector<int32_t>* rets);

Apply this change to all similar method signatures.
269-270: 🛠️ Refactor suggestion
Consider using pointers for output parameters consistently
The `Append` method has been modified to use a reference for the `out_new_value` parameter. For consistency across the codebase, consider using pointers for output parameters instead of references. Change the signature of `Append` to use a pointer:

Status Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_sec, std::string* out_new_value);

This change would make it consistent with other methods in the class that use pointers for output parameters.
279-280: 🛠️ Refactor suggestion
Consider using pointers for output parameters consistently
Similar to the `Append` method, the `BitOp` method uses a reference for the `value_to_dest` parameter. For consistency, consider changing this to a pointer. Modify the `BitOp` method signature:

Status BitOp(BitOpType op, const std::string& dest_key, const std::vector<std::string>& src_keys, std::string* value_to_dest, int64_t* ret);

This change would improve consistency with other methods in the class.
src/pika_command.cc (4)
515-515:
⚠️ Potential issue: Translate comment to English and address the TODO

Please translate the comment to English and resolve the TODO by confirming the correct arity for the `PKHSetexCmd`. Replace the current comment with:
// TODO: Verify if the arity should be -6
After verifying the correct arity, update the command initialization accordingly and remove the TODO comment.
543-543:
⚠️ Potential issue: Translate comment to English and clarify command parameters

Please translate the comment to English and verify the correct parameters for the `PKHMSetexCmd`. Replace the current comment with:
// TODO: Verify if the arity should be -6 and if TTL is required
After verifying the correct arity and parameters, update the command initialization accordingly and remove the TODO comment.
555-559:
⚠️ Potential issue: Address TODOs and clarify command performance for `PKHValsCmd`

Please address the TODO comments and clarify why the `PKHVals` command is marked as slow.

Translate and address the first TODO: replace `// TODO(DDD) 为啥vals是慢的命令。` ("why is vals a slow command?") with an explanation in English about why this command is marked as slow.

Translate and address the second TODO: replace `// TODO(DDD) 这些标志位都是啥意思。` ("what do these flag bits mean?") with a comment explaining the meaning of the flags used.
⚠️ Potential issueCorrect command flags for
PKHExpiretimeCmd
The
PKHExpiretime
command is currently flagged withkCmdFlagsWrite
, but it should be a read operation. Please update the flags to usekCmdFlagsRead
instead.Apply this diff to correct the flags:
std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>( - kCmdNamePKHExpiretime, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast); + kCmdNamePKHExpiretime, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>( kCmdNamePKHExpiretime, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast); cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHExpiretime, std::move(ehexpiretimeptr)));
src/storage/src/redis_pkhashes.cc (4)
220-223:
⚠️ Potential issue: Clarify the error message in `PKHExpireat` for invalid timestamps

The condition `if (timestamp <= 0)` checks for timestamps less than or equal to zero, but the error message states "must be >= 0", which is inconsistent. It should state "must be > 0" to match the condition. Apply this diff to correct the error message:
- if (timestamp <= 0) { - rets->assign(numfields, 2); - return Status::InvalidArgument("invalid expire time, must be >= 0"); - } + if (timestamp <= 0) { + rets->assign(numfields, 2); + return Status::InvalidArgument("invalid expire time, must be > 0"); + }📝 Committable suggestion
if (timestamp <= 0) { rets->assign(numfields, 2); return Status::InvalidArgument("invalid expire time, must be > 0"); }
158-160:
⚠️ Potential issue: Correct the TTL validation error message in `PKHExpire`

The condition `if (ttl <= 0)` invalidates `ttl` values less than or equal to zero, but the error message says "must be >= 0", which is misleading. Since `ttl = 0` is considered invalid, the error message should state "must be > 0" to accurately reflect the condition. Apply this diff to fix the error message:
- if (ttl <= 0) { - return Status::InvalidArgument("invalid expire time, must be >= 0"); - } + if (ttl <= 0) { + return Status::InvalidArgument("invalid expire time, must be > 0"); + }📝 Committable suggestion
if (ttl <= 0) { return Status::InvalidArgument("invalid expire time, must be > 0"); }
466-468:
⚠️ Potential issue: Enhance the error message in `PKHSetex` for TTL validation

When `ttl` is less than or equal to zero, the function returns an `InvalidArgument` status with a generic message "invalid expire time". To aid users, the error message should specify the valid range for `ttl`. Apply this diff to improve the error message:
- if (ttl <= 0) { - return Status::InvalidArgument("invalid expire time"); - } + if (ttl <= 0) { + return Status::InvalidArgument("invalid expire time, ttl must be > 0"); + }📝 Committable suggestion
if (ttl <= 0) { return Status::InvalidArgument("invalid expire time, ttl must be > 0"); }
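The three error-message fixes above share one idea: the message must state the same range the check enforces. A minimal sketch of that pairing (the function name is illustrative, not Pika's API):

```cpp
#include <cstdint>
#include <string>

// Validate a TTL the way the suggested diffs do: reject ttl <= 0 and make
// the error text state the accepted range explicitly, so the message can
// never contradict the condition it describes.
std::string TtlError(int64_t ttl) {
  if (ttl <= 0) {
    return "invalid expire time, ttl must be > 0";
  }
  return "";  // empty string means the ttl is acceptable
}
```

Keeping the message and the comparison adjacent in one helper also gives a single place to update if the accepted range ever changes.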
672-681: 🛠️ Refactor suggestion
Handle non-existent fields consistently in `PKHStrlen`.

When the field does not exist, `PKHStrlen` sets `*len` to `0` but returns the error status from `PKHGet`, which may be `NotFound`. Consider returning `Status::OK()` in this case to indicate that the operation was successful, and the length is `0`. Apply this diff to adjust the return status:
Status Redis::PKHStrlen(const Slice& key, const Slice& field, int32_t* len) { std::string value; Status s = PKHGet(key, field, &value); if (s.ok()) { *len = static_cast<int32_t>(value.size()); + return Status::OK(); } else { *len = 0; - return s; + return Status::OK(); } }📝 Committable suggestion
Status Redis::PKHStrlen(const Slice& key, const Slice& field, int32_t* len) { std::string value; Status s = PKHGet(key, field, &value); if (s.ok()) { *len = static_cast<int32_t>(value.size()); return Status::OK(); } else { *len = 0; return Status::OK(); } }
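The semantics being suggested, "a missing field is not an error for a length query, it simply has length 0", can be sketched independently of the storage engine. Here `std::map` stands in for the real RocksDB-backed lookup; names are illustrative.

```cpp
#include <map>
#include <string>

// Strlen-style query: a field that does not exist is reported as length 0
// rather than as a failure, matching Redis's HSTRLEN behavior of returning
// 0 for missing fields or keys.
int FieldStrlen(const std::map<std::string, std::string>& hash,
                const std::string& field) {
  auto it = hash.find(field);
  return it == hash.end() ? 0 : static_cast<int>(it->second.size());
}
```

The caller then never needs a special branch for `NotFound`, which is exactly what the diff above achieves by returning `Status::OK()` in both paths.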
src/storage/tests/pkhashes_test.cc (3)
95-103:
⚠️ Potential issue: Avoid using hardcoded sleep durations in tests

Using `std::this_thread::sleep_for` with fixed durations can lead to flaky tests on systems with varying performance characteristics. Tests may intermittently fail if the sleep duration is insufficient due to system load.
115-126:
⚠️ Potential issueEnsure return statuses are checked after function calls
In multiple test cases, the return status
s
from function calls such asdb.PKHExpire
,db.PKHExpireat
,db.PKHPersist
,db.PKHTTL
, and others is not consistently checked before proceeding. Ignoring the return status may lead to undetected errors and misleading test results.Please ensure that you verify
s.ok()
after each function call to handle potential errors appropriately.Also applies to: 152-176, 190-231, 242-272, 285-314, 339-383, 386-403, 406-469, 473-487, 491-526, 530-627, 630-721, 725-762, 765-836, 839-926, 930-991, 994-1056, 1059-1074, 1077-1478
877-878: 🛠️ Refactor suggestion
Use safer string-to-integer conversion methods
The use of
atoll(str_value.data())
for string-to-integer conversion is considered unsafe and less idiomatic in modern C++. It does not provide proper error handling and may lead to undefined behavior if the string is not a valid representation of an integer.Consider using
std::stoll(str_value)
instead, which throws an exception on invalid input and is more in line with C++ best practices.Apply this diff to update the conversions:
- ASSERT_EQ(atoll(str_value.data()), 1000); + ASSERT_EQ(std::stoll(str_value), 1000); - ASSERT_EQ(atoll(str_value.data()), 100); + ASSERT_EQ(std::stoll(str_value), 100); - ASSERT_EQ(atoll(str_value.data()), 200); + ASSERT_EQ(std::stoll(str_value), 200);Also applies to: 892-893, 904-905
src/storage/src/storage.cc (1)
128-128:
⚠️ Potential issueEnsure the method is thread-safe.
Since this method returns a reference to a shared resource (
insts_
), it should be made thread-safe to avoid potential data races or undefined behavior in concurrent scenarios.Consider adding proper synchronization mechanisms, such as locks or atomic operations, to ensure thread safety when accessing and modifying the
insts_
vector.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 11
🧹 Outside diff range and nitpick comments (16)
src/storage/include/storage/storage_define.h (1)
124-132
: Improved formatting and retained TODO comment.The reformatting of the
SeekUserkeyDelim
function enhances readability without altering its functionality. This is a positive change.However, the retained TODO comment suggests an area for potential improvement:
// TODO: handle invalid format
Would you like assistance in addressing this TODO comment? I can help draft a solution for handling invalid formats or create a GitHub issue to track this task.
src/storage/src/base_value_format.h (2)
21-31
: LGTM! Consider adding a comment forkPKHashes
.The changes to the
DataType
enum are consistent with the PR objectives. The addition ofkPKHashes
and the updates tokNones
andkAll
maintain the enum's integrity.Consider adding a brief comment explaining the purpose of
kPKHashes
for better code documentation:enum class DataType : uint8_t { kStrings = 0, kHashes = 1, kSets = 2, kLists = 3, kZSets = 4, kStreams = 5, + // Persistent Key-Hash type for new hash commands kPKHashes = 6, kNones = 7, kAll = 8, };
Line range hint
100-167
: LGTM! Consider consistent brace style for single-line methods.The formatting changes improve code consistency and readability. The condensed
IsValid
method is more concise while maintaining the same functionality.For consistency with other single-line methods in the class, consider using braces for the
IsValid
method:- virtual bool IsValid() { return !IsStale(); } + virtual bool IsValid() { return !IsStale(); }src/storage/src/redis.h (2)
254-294
: LGTM: New PK Hash Commands added.The new PK Hash Commands are well-structured and consistent with the existing codebase. They provide a comprehensive set of operations for PK Hash data structures.
Consider adding inline documentation for the new parameters in these methods, especially for
numfields
andfields
in methods likePKHExpire
, to clarify their purpose and usage.
275-275
: LGTM: Consistent addition ofprefetch_meta
parameter.The addition of the
prefetch_meta
parameter to multiple methods is consistent and maintains backward compatibility with its default value.Consider adding documentation to explain the purpose and usage of the
prefetch_meta
parameter. This will help developers understand when and how to use this new feature.Also applies to: 380-380, 391-391
src/storage/include/storage/storage.h (2)
415-463
: New Pika Hash (PKH) methods addedSeveral new methods for Pika Hash operations have been added, including:
- PKHExpire, PKHExpireat, PKHExpiretime, PKHPersist, PKHTTL
- PKHSet, PKHGet, PKHSetex, PKHExists, PKHDel, PKHLen, PKHStrlen
- PKHIncrby, PKHMSet, PKHMSetex, PKHMGet, PKHKeys, PKHVals, PKHGetall, PKHScan
These methods provide a comprehensive set of operations for working with hash data structures, including support for field-level expiration.
However, there are a few commented-out method declarations (PKHLenForce and PKHScanx). Consider removing these if they are not needed, or uncomment and implement them if they are required.
Line range hint
1-1154
: General improvements and considerations
File organization: Consider splitting this large header file into smaller, more focused files (e.g., one for each data type like strings, hashes, sets, etc.). This would improve maintainability and readability.
Documentation: Many methods lack comments explaining their purpose, parameters, and return values. Consider adding consistent documentation for all public methods to improve code understandability.
Consistent naming: Ensure consistent naming conventions across all methods. For example, some methods use camelCase (e.g.,
ZRangebyscore
), while others use snake_case (e.g.,PKH_expire
). Stick to one convention throughout the file.Error handling: Consider using a consistent approach for error handling across all methods. Some methods return Status, while others return int32_t or int64_t for error conditions.
Use of modern C++ features: Consider using more modern C++ features like
nullptr
instead ofNULL
,override
for virtual functions, and[[nodiscard]]
attribute for methods returning important values.Const correctness: Ensure that methods that don't modify the object are marked as
const
.Parameter passing: For large objects, consider passing by const reference instead of by value to improve performance.
src/storage/src/redis_hashes.cc (3)
Line range hint
66-71
: Improved field filtering logic.The changes improve the efficiency of filtering unique fields. However, we can further optimize it by reserving space for the vectors.
Consider reserving space for
filtered_fields
to avoid potential reallocations:std::vector<std::string> filtered_fields; std::unordered_set<std::string> field_set; +filtered_fields.reserve(fields.size()); +field_set.reserve(fields.size()); for (const auto& iter : fields) { const std::string& field = iter; if (field_set.find(field) == field_set.end()) {🧰 Tools
🪛 cppcheck
[performance] 69-69: Searching before insertion is not necessary.
(stlFindInsert)
91-93
: Improved error message formatting.The error message has been reformatted for better readability. However, consider using a formatting library like
fmt
for improved performance and maintainability.Consider using the
fmt
library for string formatting:- return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() + - ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kHashes)] + - ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]); + return Status::InvalidArgument(fmt::format("WRONGTYPE, key: {}, expect type: {}, get type: {}", + key.ToString(), + DataTypeStrings[static_cast<int>(DataType::kHashes)], + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]));
Line range hint
1-1354
: Overall assessment of changes in redis_hashes.ccThe modifications in this file primarily focus on two areas:
- Improving error message formatting for consistency and readability.
- Adding checks for hash size overflow to prevent potential issues.
These changes enhance the overall quality and robustness of the code. However, there are opportunities for further improvement:
- Consider using a formatting library like
fmt
for string formatting throughout the file. This would improve performance and maintainability.- In some functions, such as
HDel
, consider reserving space for vectors to optimize memory allocation.The consistent application of these changes across multiple functions demonstrates a systematic approach to code improvement. Overall, these changes are approved and contribute positively to the codebase.
To further improve the code:
- Implement a centralized error message formatting function using the
fmt
library.- Consider creating a utility function for checking hash size overflow, as this check is repeated in multiple places.
- Review the entire codebase for similar patterns that could benefit from these improvements.
src/pika_command.cc (2)
56-57
: Improve code formatting for consistencyThere are some inconsistencies in the formatting of command initializations. Consider aligning the parameters for better readability.
Apply this formatting to all command initializations for consistency:
std::unique_ptr<Cmd> cmdptr = std::make_unique<CmdType>( kCmdNameCmd, arity, flags);
Also applies to: 60-61, 80-81, 85-86
Line range hint
1019-1058
: Simplify cache-related logic in DoCommand methodThe cache-related logic in the DoCommand method is a bit complex and could be simplified for better readability and maintainability.
Consider refactoring this section to reduce nesting and improve clarity. Here's a suggested structure:
void Cmd::DoCommand(const HintKeys& hint_keys) { if (!IsNeedCacheDo() || PIKA_CACHE_NONE == g_pika_conf->cache_mode() || db_->cache()->CacheStatus() != PIKA_CACHE_STATUS_OK) { Do(); return; } if (!cache_missed_in_rtc_ && IsNeedReadCache()) { ReadCache(); } if (is_read() && (res().CacheMiss() || cache_missed_in_rtc_)) { pstd::lock::MultiScopeRecordLock record_lock(db_->LockMgr(), current_key()); DoThroughDB(); if (IsNeedUpdateCache()) { DoUpdateCache(); } } else if (is_write()) { DoThroughDB(); if (IsNeedUpdateCache()) { DoUpdateCache(); } } }src/storage/src/storage.cc (4)
Line range hint
1100-1109
: Potential misuse of auto in lambda parameterIn the usage of
std::for_each
, the lambda function capturesauto kv
. Depending on the C++ standard your project adheres to, usingauto
in lambda parameters requires C++14 or later. If your codebase targets an earlier standard like C++11, this may cause compatibility issues.Consider specifying the type explicitly:
-std::for_each(value_to_dest.begin(), value_to_dest.end(), - [&score_members](auto kv) { score_members.emplace_back(kv.second, kv.first); }); +std::for_each(value_to_dest.begin(), value_to_dest.end(), + [&score_members](const std::pair<std::string, double>& kv) { score_members.emplace_back(kv.second, kv.first); });
Line range hint
1457-1462
: Possible incorrect comparison in loop conditionIn the
PKScanRange
method, the loop condition comparesmiter.Key()
withkey_end.ToString()
using<=
, which may not function as intended ifkey_end
is empty (indicating no limit). Also, the use ofToString()
may create unnecessary copies.Ensure that the loop condition correctly handles empty
key_end
and optimizes string comparisons:-while (miter.Valid() && limit > 0 && (end_no_limit || miter.Key().compare(key_end.ToString()) <= 0)) { +while (miter.Valid() && limit > 0 && (end_no_limit || miter.Key() <= key_end)) {This change assumes that
key_end
is astd::string
and that comparison operators are overloaded appropriately.
Line range hint
1519-1529
: Inefficient passing of integer by const referenceIn the
PKPatternMatchDelWithRemoveKeys
method, the parameterconst int64_t& max_count
passes a primitive type by const reference. Sinceint64_t
is a simple data type, it's more efficient to pass it by value.Consider changing the parameter to pass by value:
-Status Storage::PKPatternMatchDelWithRemoveKeys(const std::string& pattern, int64_t* ret,
-                                                std::vector<std::string>* remove_keys, const int64_t& max_count) {
+Status Storage::PKPatternMatchDelWithRemoveKeys(const std::string& pattern, int64_t* ret,
+                                                std::vector<std::string>* remove_keys, int64_t max_count) {
540-544
: Remove commented-out code or explain its necessityThere is commented-out code for the
PKHLenForce
method. Leaving unused code can clutter the codebase and may cause confusion.Consider removing the commented code if it's no longer needed, or add a comment explaining why it's retained.
-// Status Storage::PKHLenForce(const Slice& key, int32_t* ret) {
-//   auto& inst = GetDBInstance(key);
-//   return inst->PKHLenForce(key, ret);
-// }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (19)
- CMakeLists.txt (1 hunks)
- include/acl.h (1 hunks)
- include/pika_command.h (5 hunks)
- include/pika_pkhash.h (1 hunks)
- src/pika_client_conn.cc (1 hunks)
- src/pika_command.cc (10 hunks)
- src/pika_pkhash.cc (1 hunks)
- src/storage/include/storage/storage.h (18 hunks)
- src/storage/include/storage/storage_define.h (2 hunks)
- src/storage/src/base_filter.h (6 hunks)
- src/storage/src/base_value_format.h (5 hunks)
- src/storage/src/pkhash_data_value_format.h (1 hunks)
- src/storage/src/redis.cc (12 hunks)
- src/storage/src/redis.h (17 hunks)
- src/storage/src/redis_hashes.cc (35 hunks)
- src/storage/src/redis_pkhashes.cc (1 hunks)
- src/storage/src/redis_strings.cc (40 hunks)
- src/storage/src/storage.cc (34 hunks)
- src/storage/tests/pkhashes_test.cc (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- src/pika_client_conn.cc
🚧 Files skipped from review as they are similar to previous changes (7)
- CMakeLists.txt
- include/pika_command.h
- include/pika_pkhash.h
- src/pika_pkhash.cc
- src/storage/src/pkhash_data_value_format.h
- src/storage/src/redis_strings.cc
- src/storage/tests/pkhashes_test.cc
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc
[performance] 568-568: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 777-777: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 860-860: Searching before insertion is not necessary.
(stlFindInsert)
🔇 Additional comments (69)
src/storage/include/storage/storage_define.h (2)
47-47
: LGTM: New column family index added for PKHash data.The addition of
kPKHashDataCF = 7
to theColumnFamilyIndex
enum is consistent with the PR objectives and follows the correct sequential numbering.
135-135
: LGTM: Added newline at end of file.Adding a blank line at the end of the file is a good practice. It ensures the file ends with a newline character, which can prevent issues with certain tools and adheres to common coding standards.
src/storage/src/base_value_format.h (2)
34-35
: LGTM! Arrays updated correctly.The
DataTypeTag
andDataTypeStrings
arrays have been properly updated to include the newkPKHashes
type. The changes maintain consistency with theDataType
enum modifications.
52-55
: LGTM! Improved formatting and initialization.The formatting changes enhance code readability. The initialization of
ctime_
in the constructor ensures that the creation time is set correctly for eachInternalValue
instance.src/storage/src/base_filter.h (8)
60-60
: LGTM. Improved logging consistency.The changes to the DEBUG logging statements improve readability and ensure consistent formatting across different data types.
Also applies to: 64-66
115-115
: LGTM. Minor formatting improvement.The condensed constructor initialization list improves code conciseness without affecting functionality.
158-160
: LGTM. Added support for PKHashes data type.The addition of PKHashes to the list of supported data types is consistent with the existing code structure and extends the filter's functionality.
Consider adding a test case to verify the correct handling of PKHashes in the BaseDataFilter.
230-231
: LGTM. Improved constructor readability.The reformatting of the BaseDataFilterFactory constructor parameters improves readability without affecting functionality.
250-253
: LGTM. Added type aliases for PKHashes filters.The addition of type aliases for PKHashes filters is consistent with the existing pattern for other data types and provides convenient shorthand for PKHashes-specific filters.
Consider adding a brief comment explaining the purpose of these aliases, especially if they're intended to be used in other parts of the codebase.
254-254
: LGTM. Improved readability.The addition of a blank line after the new type aliases improves code readability by clearly separating different groups of aliases.
Line range hint
1-274
: Overall LGTM. PKHashes support added successfully.The changes in this file successfully introduce support for the new PKHashes data type while maintaining consistency with the existing code structure. The modifications also include several formatting improvements that enhance code readability. All changes have been reviewed and approved.
Key points:
- New include statements added (verify dependencies).
- Support for PKHashes added to BaseDataFilter.
- New type aliases for PKHashes filters introduced.
- Various formatting improvements throughout the file.
Consider adding a test case for PKHashes handling in BaseDataFilter and brief documentation for the new type aliases.
16-19
: LGTM. Verify new dependencies.The addition of new include statements looks good. It suggests the introduction of new dependencies or code reorganization.
Please ensure that these new dependencies are necessary and don't introduce any circular dependencies. Run the following script to verify the includes:
✅ Verification successful
Verification Successful: New Dependencies are Required
The new include statements are necessary as the symbols
BaseKey
,BaseValue
, andDEBUG
are actively used in the code.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify the necessity of new include statements # Test: Check for the usage of symbols from the new headers rg --type cpp -e 'BaseKey' -e 'BaseValue' -e 'DEBUG' src/storage/src/base_filter.hLength of output: 920
include/acl.h (1)
55-55
: LGTM: New AclCategory enum value added correctly.The addition of the
PKHASH
enum value to theAclCategory
enum class is implemented correctly. It follows the existing pattern and uses the next available bit in the sequence.To ensure this new category is properly handled throughout the codebase, please run the following script:
This script will help identify areas where the new
PKHASH
category should be considered or handled.src/storage/src/redis.h (4)
17-19
: LGTM: New includes and PKHash handle retrieval method added.The new include statements and the
GetPKHashCFHandles()
method are consistent with the existing codebase structure and support the newly added PK Hash functionality.Also applies to: 248-250
468-469
: LGTM: PK Hashes added to stale data check.The
ExpectedStale
method has been correctly updated to include the newDataType::kPKHashes
, ensuring consistent behavior for stale data checks across all data types.
Line range hint
416-447
: Verify if PK Hashes require iterator support.The
CreateIterator
method hasn't been updated to include support for PK Hashes. While it's possible that PK Hashes don't require a separate iterator, it would be good to confirm this to ensure consistency across all data types.Could you please clarify if PK Hashes require iterator support? If they do, consider adding a case for PK Hashes in the
CreateIterator
method, similar to other data types.
Line range hint
1-557
: Summary of changes: PK Hash support added with minor enhancements.The changes in this file primarily introduce support for PK Hash operations, including new method declarations and updates to existing methods. The additions are well-structured and consistent with the existing codebase. A few suggestions for documentation improvements have been made, and clarification is needed regarding iterator support for PK Hashes. Overall, these changes enhance the functionality of the system while maintaining consistency with the existing design.
src/storage/include/storage/storage.h (3)
124-129
: New structFieldValueTTL
addedThe new
FieldValueTTL
struct extends the existingFieldValue
struct by adding attl
field. This is a good addition for supporting expiration-related operations on hash fields.
269-270
: Updated method signaturesSeveral method signatures have been updated:
Append
: Addedexpired_timestamp_sec
andout_new_value
parameters.BitOp
: Changedint64_t* ret
tostd::string& value_to_dest, int64_t* ret
.SDiffstore
,SInterstore
,SUnionstore
: Addedstd::vector<std::string>& value_to_dest
parameter.SMembersWithTTL
: Addedint64_t *ttl
parameter.LRangeWithTTL
: Addedint64_t *ttl
parameter.ZRangeWithTTL
: Addedint64_t *ttl
parameter.XInfo
: ChangedStreamInfoResult &result
toStreamInfoResult* result
.PKPatternMatchDelWithRemoveKeys
: Addedconst int64_t& max_count
parameter.These updates improve the functionality of the methods by allowing for more precise control and additional data retrieval. The changes seem appropriate and consistent with the overall design.
Also applies to: 279-280, 495-496, 519-520, 529-529, 579-580, 603-603, 1013-1013, 1048-1049
1151-1152
:⚠️ Potential issueCorrect the spelling of
EnableDymayticOptions
toEnableDynamicOptions
The method name
EnableDymayticOptions
contains a typographical error. Please correct it toEnableDynamicOptions
to improve code readability and prevent confusion.Apply the following change:
-Status EnableDymayticOptions(const OptionType& option_type, const std::string& db_type, +Status EnableDynamicOptions(const OptionType& option_type, const std::string& db_type, const std::unordered_map<std::string, std::string>& options);Ensure all references to this method are updated accordingly.
Likely invalid or redundant comment.
src/storage/src/redis_hashes.cc (18)
14-15
: Include statements added appropriately.The addition of
base_data_key_format.h
andbase_data_value_format.h
headers is appropriate and likely necessary for the functionality in this file.
118-121
: Improved error handling for hash size overflow.The addition of a check for hash size overflow enhances the robustness of the
HDel
function. This prevents potential issues with hash size manipulation.
154-156
: Consistent error message formatting.The error message formatting has been improved for consistency. As mentioned earlier, consider using the
fmt
library for string formatting throughout the codebase.
193-195
: Consistent error message formatting.The error message formatting has been improved for consistency. As previously suggested, consider using the
fmt
library for string formatting throughout the codebase.
235-237
: Consistent error message formatting.The error message formatting has been improved for consistency. As previously suggested, consider using the
fmt
library for string formatting throughout the codebase.
291-293
: Consistent error message formatting.The error message formatting has been improved for consistency. As previously suggested, consider using the
fmt
library for string formatting throughout the codebase.
Line range hint
329-333
: Improved error handling for hash size overflow in HIncrby.The addition of a check for hash size overflow enhances the robustness of the
HIncrby
function. This prevents potential issues with hash size manipulation.
382-384
: Consistent error message formatting.The error message formatting has been improved for consistency. As previously suggested, consider using the
fmt
library for string formatting throughout the codebase.
Line range hint
421-426
: Improved error handling for hash size overflow in HIncrbyfloat.The addition of a check for hash size overflow enhances the robustness of the
HIncrbyfloat
function. This prevents potential issues with hash size manipulation.
465-467
: Consistent error message formatting.The error message formatting has been improved for consistency. As previously suggested, consider using the
fmt
library for string formatting throughout the codebase.
506-508
: Consistent error message formatting.The error message formatting has been improved for consistency. As previously suggested, consider using the
fmt
library for string formatting throughout the codebase.
545-547
: Consistent error message formatting.The error message formatting has been improved for consistency. As previously suggested, consider using the
fmt
library for string formatting throughout the codebase.
608-610
: Consistent error message formatting.The error message formatting has been improved for consistency. As previously suggested, consider using the
fmt
library for string formatting throughout the codebase.
645-648
: Improved error handling for hash size overflow in HMSet.The addition of a check for hash size overflow enhances the robustness of the
HMSet
function. This prevents potential issues with hash size manipulation.
682-684
: Consistent error message formatting.The error message formatting has been improved for consistency. As previously suggested, consider using the
fmt
library for string formatting throughout the codebase.
Line range hint
712-717
: Improved error handling for hash size overflow in HSet.The addition of a check for hash size overflow enhances the robustness of the
HSet
function. This prevents potential issues with hash size manipulation.
756-758
: Consistent error message formatting.The error message formatting has been improved for consistency. As previously suggested, consider using the
fmt
library for string formatting throughout the codebase.
Line range hint
778-783
: Improved error handling for hash size overflow in HSetnx.The addition of a check for hash size overflow enhances the robustness of the
HSetnx
function. This prevents potential issues with hash size manipulation.src/pika_command.cc (18)
485-489
: New PKHash commands addedThe PKHSet command has been added to the command table. This is part of the new "Pika Expire Hash" functionality.
However, consider adding a brief comment explaining the purpose of the "Pika Expire Hash" commands for better code documentation.
490-493
: PKHExpire command addedThe PKHExpire command has been implemented, allowing for expiration of hash keys.
494-497
: PKHExpireat command addedThe PKHExpireat command has been implemented, allowing for setting expiration times for hash keys.
498-501
: PKHExpiretime command addedThe PKHExpiretime command has been implemented, likely for retrieving the expiration time of hash keys.
507-510
: PKHPersist command addedThe PKHPersist command has been implemented, likely for removing the expiration from a hash key.
511-514
: PKHGet command addedThe PKHGet command has been implemented for retrieving values from the hash.
516-518
: PKHSetex command addedThe PKHSetex command has been implemented, likely for setting a value in the hash with an expiration time.
520-522
: PKHExists command addedThe PKHExists command has been implemented for checking if a field exists in the hash.
524-526
: PKHDel command addedThe PKHDel command has been implemented for deleting fields from the hash.
528-530
: PKHLen command addedThe PKHLen command has been implemented for getting the number of fields in the hash.
532-534
: PKHStrLen command addedThe PKHStrLen command has been implemented for getting the length of a hash field's value.
536-538
: PKHIncrby command addedThe PKHIncrby command has been implemented for incrementing the value of a hash field.
540-542
: PKHMSet command addedThe PKHMSet command has been implemented for setting multiple hash fields.
548-550
: PKHMGet command addedThe PKHMGet command has been implemented for retrieving multiple hash fields.
552-554
: PKHKeys command addedThe PKHKeys command has been implemented for retrieving all field names in the hash.
561-563
: PKHGetAll command addedThe PKHGetAll command has been implemented for retrieving all fields and values in the hash.
565-567
: PKHScan command addedThe PKHScan command has been implemented for iterating over the hash.
Line range hint
1-1170
: Summary of changes and recommendationsThe file
src/pika_command.cc
has been updated with new PKHash (Pika Expire Hash) commands, significantly expanding the functionality of the Pika database. Here are the key points and recommendations:
New PKHash commands have been successfully implemented, including PKHSet, PKHExpire, PKHExpireat, PKHExpiretime, PKHTTLCmd, PKHPersist, PKHGet, PKHSetex, PKHExists, PKHDel, PKHLen, PKHStrLen, PKHIncrby, PKHMSet, PKHMSetex, PKHMGet, PKHKeys, PKHVals, PKHGetAll, and PKHScan.
The PKHTTLCmd is currently set with write flags, which should be changed to read flags as it's typically a read operation.
There are some TODO comments in Chinese that need to be translated and addressed, particularly for the PKHVals and PKHMSetex commands.
Consider adding brief comments explaining the purpose of the "Pika Expire Hash" commands for better code documentation.
There are minor formatting inconsistencies in command initializations. Consider aligning parameters for better readability.
The cache-related logic in the DoCommand method could be simplified for better readability and maintainability.
Recommendations:
- Address the issues mentioned in the review comments, particularly the flag change for PKHTTLCmd and the TODO comments.
- Improve code formatting for consistency.
- Consider refactoring the DoCommand method to simplify the cache-related logic.
- Add more comments explaining the purpose and functionality of the new PKHash commands.
Overall, the changes appear to be a solid implementation of the new PKHash functionality, but addressing these minor issues will improve the code quality and maintainability.
src/storage/src/redis.cc (7)
30-31
: Constructor Initialization Formatting Improves ReadabilityThe adjustment of the constructor's initialization list enhances code readability and follows standard C++ formatting conventions.
181-182
: Method Signature Formatting Enhances ReadabilityReformatting the parameters of
GetScanStartPoint
improves readability and conforms to the project's coding style for line breaks in method signatures.
218-218
: Include 'pika_hash_data_cf' in Compaction RangeAdding
handles_[kPKHashDataCF]
to theCompactRange
method ensures that the newpika_hash_data_cf
column family is included in compaction operations. This is essential for maintaining database performance and storage efficiency for the new hash data type.
265-266
: Method Signature Formatting Improves ClarityThe reformatting of
AddCompactKeyTaskIfNeeded
enhances the clarity of the method signature, making it easier to read and understand the parameters.
293-495
: Enhanced Retrieval of RocksDB StatisticsThe restructuring of
GetRocksDBInfo
significantly improves the collection and presentation of RocksDB statistics. The use of lambda functions likewrite_aggregated_int_property
,write_property
, andwrite_ticker_count
promotes code reusability and cleaner organization. Including detailed metrics on blob files, compaction, memtables, and various cache usages provides valuable insights for monitoring and debugging.
498-498
: Inlined 'SetWriteWalOptions' Method Enhances ConcisenessInlining the
SetWriteWalOptions
method improves code conciseness without compromising readability. It simplifies the method definition for a straightforward assignment operation.
102-111
: Ensure Proper Integration of 'pika_hash_data_cf' Column FamilyThe addition of the
'pika_hash_data_cf'
column family is a significant change. Please ensure that:
- The
DataType
used inHashesDataFilterFactory
corresponds to the new data type forPKHash
, if applicable.- The
handles_
vector correctly accommodates the new column family, and any indices used elsewhere in the code (e.g.,handles_[kPKHashDataCF]
) match the order of column families added.- All necessary configurations specific to
pika_hash_data_cf
are properly set to avoid runtime issues.Run the following script to confirm that
kPKHashDataCF
is properly defined and used consistently:src/storage/src/storage.cc (6)
284-285
: Ensure consistent parameter passing style for output parametersIn the
Append
method,out_new_value
is passed as a non-const reference (std::string&
). For consistency with other methods that use pointers for output parameters (e.g.,std::string* value
), consider changingout_new_value
to a pointer (std::string* out_new_value
).
480-484
: Useint64_t
for TTL parameter to support larger valuesThe
PKHExpire
method usesint32_t ttl
as the TTL parameter. To handle larger TTL values and maintain consistency with other methods, consider changingttl
toint64_t
.
550-553
: Useint64_t
for TTL parameter inPKHIncrby
The
PKHIncrby
method usesint32_t ttl
as the TTL parameter. To support larger TTL values and ensure consistency with other methods likeSetex
, consider changingttl
toint64_t
.
1289-1291
: Ensure consistent parameter passing style for output parametersThe
XInfo
method uses a non-const reference (StreamInfoResult& result
) as an output parameter. For consistency across the codebase, consider changingresult
to a pointer (StreamInfoResult* result
).
2006-2007
: Typographical error in method nameSimilarly, the method
EnableDymayticOptions
has a typo. It should beEnableDynamicOptions
.
1988-1989
:⚠️ Potential issueTypographical error in method name
The method
EnableDymayticOptions
seems to have a typo in its name. It should likely beEnableDynamicOptions
.Correct the method name to improve code clarity and prevent confusion.
-Status Storage::EnableDymayticOptions(const OptionType& option_type, const std::string& db_type,
+Status Storage::EnableDynamicOptions(const OptionType& option_type, const std::string& db_type,

Ensure that all calls to this method are updated accordingly.
Likely invalid or redundant comment.
src/pika_command.cc
Outdated
// 参数可能是 -6 需要ttl  (TODO: the arity may need to be -6, since a ttl argument is required)
std::unique_ptr<Cmd> ehmsetexptr = std::make_unique<PKHMSetexCmd>(
    kCmdNamePKHMSetex, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHMSetex, std::move(ehmsetexptr)));
PKHMSetex command added with a TODO comment
The PKHMSetex command has been implemented, but there's a TODO comment in Chinese about the number of parameters.
Translate the TODO comment to English and clarify the parameter count issue. Consider adding a comment explaining the expected parameter structure for this command.
////Ehttl
std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
    kCmdNamePKHTTL, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHTTL, std::move(ehttlptr)));
PKHTTLCmd command added with incorrect flags
The PKHTTLCmd has been added, but it's currently set with write flags. This command is typically used to get the time-to-live for a key, which is a read operation.
Change the flags for PKHTTLCmd to use read flags instead of write flags. Apply this diff:
- std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
- kCmdNamePKHTTL, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
+ std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
+ kCmdNamePKHTTL, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
////Ehttl
std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
    kCmdNamePKHTTL, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHTTL, std::move(ehttlptr)));
// pika hash CF
column_families.emplace_back("pika_hash_data_cf", pika_hash_data_cf_ops);
Update Column Family Indices After Adding New Column Family
With the addition of 'pika_hash_data_cf'
to the column_families
vector, the indices of existing column families in the handles_
array may have shifted. Please ensure that all references to handles_
use the correct indices corresponding to the updated order of column families to prevent out-of-bounds errors or misreferenced handles.
Consider updating the enumeration or constants that represent the indices of each column family in handles_
.
db_->rocksdb::DB::GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);
mapToString(mapvalues, prefix);
Correct Method Invocation of 'GetMapProperty'
At line 493, the method call to GetMapProperty
incorrectly qualifies the method with rocksdb::DB::
. Since db_
is a pointer to rocksdb::DB
, the correct invocation should be:
db_->GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);
Apply this diff to fix the method call:
- db_->rocksdb::DB::GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);
+ db_->GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);
hash_data_cf_ops.compaction_filter_factory =
    std::make_shared<HashesDataFilterFactory>(&db_, &handles_, DataType::kHashes);
Confirm Correct DataType for HashesDataFilterFactory
The HashesDataFilterFactory
is initialized with DataType::kHashes
. Since a new hash data type is being introduced, please verify that this is the intended DataType
for pika_hash_data_cf_ops
. If DataType::kPKHashes
(or an equivalent) exists for the new PKHash
commands, it should be used instead to ensure the compaction filter operates correctly on the new data type.
Apply this diff if a new DataType
is defined for PKHash
:
hash_data_cf_ops.compaction_filter_factory =
- std::make_shared<HashesDataFilterFactory>(&db_, &handles_, DataType::kHashes);
+ std::make_shared<HashesDataFilterFactory>(&db_, &handles_, DataType::kPKHashes);
Committable suggestion was skipped due to low confidence.
src/storage/src/redis_pkhashes.cc
Outdated
// 1. 判断类型是否匹配和key是否过期【这里通过meta key来实现过期功能】。
// (1. Check whether the type matches and whether the key has expired; expiration is implemented via the meta key.)
if (s.ok() && !ExpectedMetaValue(DataType::kPKHashes, meta_value)) {
  if (ExpectedStale(meta_value)) {
    s = Status::NotFound();
  } else {
    return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
                                   ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kPKHashes)] +
                                   ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
  }
}
// 2. 健存在,并且没有过期,则更新。
// (2. The key exists and has not expired, so update it.)
if (s.ok()) {
🛠️ Refactor suggestion
Translate code comments to English for consistency
Several code comments are written in Chinese (e.g., lines 77-81, 87-88, 136, 480-481). To maintain consistency and enhance readability for all contributors, please translate these comments into English.
Also applies to: 136-136, 480-481
Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl, int32_t* ret) {
  auto& inst = GetDBInstance(key);
  return inst->PKHSetex(key, field, value, ttl, ret);
}
Use int64_t
for TTL parameter in PKHSetex
In the PKHSetex
method, the TTL parameter ttl
is currently an int32_t
. Consider changing it to int64_t
to support larger TTL values and maintain consistency across similar methods.
Apply this diff:
-Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl, int32_t* ret) {
+Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int64_t ttl, int32_t* ret) {
📝 Committable suggestion
Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int64_t ttl, int32_t* ret) {
  auto& inst = GetDBInstance(key);
  return inst->PKHSetex(key, field, value, ttl, ret);
}
std::for_each(value_to_dest.begin(), value_to_dest.end(),
              [&score_members](auto kv) { score_members.emplace_back(kv.second, kv.first); });
Ensure compatibility with C++ standards when using generic lambdas
The lambda function in std::for_each
uses auto
in its parameter list. This feature requires C++14 or newer. If the codebase targets C++11, this will cause compilation errors.
Specify the parameter type explicitly for C++11 compatibility:
-std::for_each(value_to_dest.begin(), value_to_dest.end(),
- [&score_members](auto kv) { score_members.emplace_back(kv.second, kv.first); });
+std::for_each(value_to_dest.begin(), value_to_dest.end(),
+ [&score_members](const std::pair<std::string, double>& kv) { score_members.emplace_back(kv.second, kv.first); });
This change ensures that the code remains compatible with earlier C++ standards.
📝 Committable suggestion
std::for_each(value_to_dest.begin(), value_to_dest.end(),
              [&score_members](const std::pair<std::string, double>& kv) { score_members.emplace_back(kv.second, kv.first); });
378f5a3 to 9d208a6
Actionable comments posted: 32
🧹 Outside diff range and nitpick comments (1)
src/storage/src/redis.h (1)
254-294
: Approve new PK Hash Commands, suggest documentationThe new PK Hash Commands methods are a welcome addition to the Redis class. They follow a consistent naming convention and provide a comprehensive set of operations for PK Hash data structures.
Consider adding documentation comments for these new methods to explain their purpose, parameters, and return values. This will help other developers understand and use the new functionality correctly.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (19)
- include/acl.h (1 hunks)
- include/pika_command.h (5 hunks)
- include/pika_pkhash.h (1 hunks)
- src/pika_client_conn.cc (1 hunks)
- src/pika_command.cc (10 hunks)
- src/pika_pkhash.cc (1 hunks)
- src/storage/include/storage/storage.h (18 hunks)
- src/storage/include/storage/storage_define.h (2 hunks)
- src/storage/src/base_filter.h (6 hunks)
- src/storage/src/base_value_format.h (5 hunks)
- src/storage/src/pkhash_data_value_format.h (1 hunks)
- src/storage/src/redis.cc (12 hunks)
- src/storage/src/redis.h (17 hunks)
- src/storage/src/redis_hashes.cc (35 hunks)
- src/storage/src/redis_pkhashes.cc (1 hunks)
- src/storage/src/redis_strings.cc (39 hunks)
- src/storage/src/storage.cc (34 hunks)
- src/storage/tests/hashes_test.cc (3 hunks)
- src/storage/tests/pkhashes_test.cc (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- src/storage/tests/hashes_test.cc
🚧 Files skipped from review as they are similar to previous changes (14)
- include/acl.h
- include/pika_command.h
- include/pika_pkhash.h
- src/pika_client_conn.cc
- src/storage/include/storage/storage.h
- src/storage/include/storage/storage_define.h
- src/storage/src/base_filter.h
- src/storage/src/base_value_format.h
- src/storage/src/pkhash_data_value_format.h
- src/storage/src/redis.cc
- src/storage/src/redis_hashes.cc
- src/storage/src/redis_strings.cc
- src/storage/src/storage.cc
- src/storage/tests/pkhashes_test.cc
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc
[performance] 565-565: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 774-774: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 857-857: Searching before insertion is not necessary.
(stlFindInsert)
🔇 Additional comments (12)
src/pika_pkhash.cc (5)
28-33: 🛠️ Refactor suggestion: Remove unnecessary iterator increments
The iterator increments are unnecessary and don't affect the parsing logic.
Apply this diff to remove the unnecessary increments:
-iter++;
-iter++;
-iter++;
-iter++;
-iter++;
Likely invalid or redundant comment.
37-38: ⚠️ Potential issue: Potential error in assigning fields
The assignment of fields may be incorrect due to the unnecessary iterator increments.
Apply this diff to correctly assign fields:
-fields_.assign(iter, argv_.end());
+fields_.assign(argv_.begin() + 5, argv_.end());
Likely invalid or redundant comment.
33-35: ⚠️ Potential issue: Fix incorrect parsing of the numfields value
Similar to the TTL parsing, the condition for parsing numfields is incorrect.
Apply this diff to fix the parsing:
-if (pstd::string2int(argv_[4].data(), argv_[4].size(), &numfields_) == 0) {
+if (pstd::string2int(argv_[4].data(), argv_[4].size(), &numfields_) != 1) {
Likely invalid or redundant comment.
458-468: 🛠️ Refactor suggestion: Simplify error handling in PKHMSetCmd::Do()
The error handling can be simplified by combining the ok and IsInvalidArgument cases.
Apply this diff to simplify the error handling:
void PKHMSetCmd::Do() {
  s_ = db_->storage()->HMSet(key_, fvs_);
- if (s_.ok()) {
+ if (s_.ok() || s_.IsInvalidArgument()) {
    res_.SetRes(CmdRes::kOk);
    AddSlotKey("h", key_, db_);
- } else if (s_.IsInvalidArgument()) {
-   res_.SetRes(CmdRes::kMultiKey);
  } else {
    res_.SetRes(CmdRes::kErrOther, s_.ToString());
  }
}
Likely invalid or redundant comment.
252-262: 🛠️ Refactor suggestion: Simplify error handling in PKHSetCmd::Do()
The error handling can be simplified by combining the ok and IsInvalidArgument cases.
Apply this diff to simplify the error handling:
void PKHSetCmd::Do() {
  int32_t ret = 0;
  s_ = db_->storage()->PKHSet(key_, field_, value_, &ret);
- if (s_.ok()) {
+ if (s_.ok() || s_.IsInvalidArgument()) {
    res_.AppendContent(":" + std::to_string(ret));
    AddSlotKey("h", key_, db_);
- } else if (s_.IsInvalidArgument()) {
-   res_.SetRes(CmdRes::kMultiKey);
  } else {
    res_.SetRes(CmdRes::kErrOther, s_.ToString());
  }
}
Likely invalid or redundant comment.
src/storage/src/redis.h (2)
17-19: LGTM: New includes and GetPKHashCFHandles method added
The new include statements and the GetPKHashCFHandles method are appropriate additions for the new PK Hash functionality. The method follows the established pattern for other data types in the class.
Also applies to: 248-250
468-469: Verify complete integration of PK Hash functionality
The ExpectedStale method has been correctly updated to include PKHashes. However, it's important to ensure that all relevant parts of the codebase have been updated to fully support the new PK Hash operations.
Please run the following script to check for any other occurrences of "Hash" or "Hashes" that might need to be updated:
Review the results to determine if any other parts of the code need to be updated to fully support PK Hash operations.
src/storage/src/redis_pkhashes.cc (5)
477-478: Translate code comments to English for consistency
The code comment at lines 477-478 is in Chinese:
// 1. 判断类型是否匹配和key是否过期。 (Check whether the type matches and whether the key has expired.)
To maintain consistency and readability for all contributors, please translate this comment into English.
562-568: Eliminate redundant check before inserting into field_set
The explicit check field_set.find(field) == field_set.end() before insertion is unnecessary. You can rely on the result of field_set.insert(field) to determine if the field was newly inserted, which optimizes performance.
🧰 Tools
🪛 cppcheck
[performance] 565-565: Searching before insertion is not necessary.
(stlFindInsert)
771-777: Eliminate redundant check before inserting into fields
Similarly, in this loop, you can remove the explicit search before insertion into fields. Using the result of fields.insert(field) enhances efficiency.
🧰 Tools
🪛 cppcheck
[performance] 774-774: Searching before insertion is not necessary.
(stlFindInsert)
854-860: Remove unnecessary check before inserting into fields
As with the previous instances, eliminate the redundant check before inserting into fields to optimize the field filtering process.
🧰 Tools
🪛 cppcheck
[performance] 857-857: Searching before insertion is not necessary.
(stlFindInsert)
680-765: Address the unused ttl parameter in PKHIncrby
The ttl parameter in the PKHIncrby function is not used. If TTL functionality is intended for this operation, please implement the usage of ttl. Otherwise, consider removing the parameter to avoid confusion.
iter++;
iter++;
iter++;
iter++;
🛠️ Refactor suggestion
Remove unnecessary iterator increments
The iterator increments are unnecessary and don't affect the parsing logic.
Apply this diff to remove the unnecessary increments:
-iter++;
-iter++;
-iter++;
-iter++;
iter++;
iter++;
iter++;
iter++;
iter++;
🛠️ Refactor suggestion
Remove unnecessary iterator increments
The iterator increments are unnecessary and don't affect the parsing logic.
Apply this diff to remove the unnecessary increments:
-iter++;
-iter++;
-iter++;
-iter++;
-iter++;
iter++;
iter++;
iter++;
iter++;
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {
🛠️ Refactor suggestion
Remove unnecessary iterator increments
The iterator increments are unnecessary and don't affect the parsing logic.
Apply this diff to remove the unnecessary increments:
-iter++;
-iter++;
-iter++;
-iter++;
iter++;
iter++;
iter++;
iter++;
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {
🛠️ Refactor suggestion
Remove unnecessary iterator increments
The iterator increments are unnecessary and don't affect the parsing logic.
Apply this diff to remove the unnecessary increments:
-iter++;
-iter++;
-iter++;
-iter++;
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {
  res_.SetRes(CmdRes::kInvalidInt);
  return;
}
Fix incorrect parsing of numfields value
The condition for parsing numfields is incorrect.
Apply this diff to fix the parsing:
-if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) == 0) {
+if (pstd::string2int(argv_[3].data(), argv_[3].size(), &numfields_) != 1) {
// TODO(DDD) 为啥vals是慢的命令。 (Why is vals a slow command?)
// TODO(DDD) 这些标志位都是啥意思。 (What do these flag bits mean?)
std::unique_ptr<Cmd> ehvalsptr = std::make_unique<PKHValsCmd>(
    kCmdNamePKHVals, 2, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsSlow);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHVals, std::move(ehvalsptr)));
Address TODO comments and clarify command performance
There are two TODO comments in Chinese that need to be addressed:
- "Why is vals a slow command?"
- "What do these flag bits mean?"
Please translate these comments to English and provide explanations or make the necessary changes based on the answers to these questions. Additionally, verify whether PKHValsCmd should indeed be marked with kCmdFlagsSlow. If the PKHVals operation is time-consuming due to data size or processing complexity, marking it as slow may be appropriate.
// Pika Expire Hash
std::unique_ptr<Cmd> ehsetptr = std::make_unique<PKHSetCmd>(
    kCmdNamePKHSet, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHSet, std::move(ehsetptr)));

////Ehexpire
std::unique_ptr<Cmd> ehexpireptr = std::make_unique<PKHExpireCmd>(
    kCmdNamePKHExpire, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHExpire, std::move(ehexpireptr)));
////Ehexpireat
std::unique_ptr<Cmd> ehexpireatptr = std::make_unique<PKHExpireatCmd>(
    kCmdNamePKHExpireat, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHExpireat, std::move(ehexpireatptr)));
////Ehexpiretime
std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
    kCmdNamePKHExpiretime, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHExpiretime, std::move(ehexpiretimeptr)));
////Ehttl
std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
    kCmdNamePKHTTL, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHTTL, std::move(ehttlptr)));

////Ehpersist
std::unique_ptr<Cmd> ehpersistptr = std::make_unique<PKHPersistCmd>(
    kCmdNamePKHPersist, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHPersist, std::move(ehpersistptr)));
////Ehget
std::unique_ptr<Cmd> ehgetptr = std::make_unique<PKHGetCmd>(
    kCmdNamePKHGet, 3, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHGet, std::move(ehgetptr)));

std::unique_ptr<Cmd> ehsetexptr = std::make_unique<PKHSetexCmd>(
    kCmdNamePKHSetex, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHSetex, std::move(ehsetexptr)));

std::unique_ptr<Cmd> ehexistsptr = std::make_unique<PKHExistsCmd>(
    kCmdNamePKHExists, 3, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHExists, std::move(ehexistsptr)));

std::unique_ptr<Cmd> ehdelptr = std::make_unique<PKHDelCmd>(
    kCmdNamePKHDel, -3, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHDel, std::move(ehdelptr)));

std::unique_ptr<Cmd> ehlenptr = std::make_unique<PKHLenCmd>(
    kCmdNamePKHLen, 2, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHLen, std::move(ehlenptr)));

std::unique_ptr<Cmd> ehstrlenptr = std::make_unique<PKHStrLenCmd>(
    kCmdNamePKHStrlen, 3, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHStrlen, std::move(ehstrlenptr)));

std::unique_ptr<Cmd> ehincrbyptr = std::make_unique<PKHIncrbyCmd>(
    kCmdNamePKHIncrby, 4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHIncrby, std::move(ehincrbyptr)));

std::unique_ptr<Cmd> ehmsetptr = std::make_unique<PKHMSetCmd>(
    kCmdNamePKHMSet, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHMSet, std::move(ehmsetptr)));

std::unique_ptr<Cmd> ehmgetptr = std::make_unique<PKHMGetCmd>(
    kCmdNamePKHMGet, -3, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHMGet, std::move(ehmgetptr)));

std::unique_ptr<Cmd> ehkeysptr = std::make_unique<PKHKeysCmd>(
    kCmdNamePKHKeys, 2, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHKeys, std::move(ehkeysptr)));
// TODO(DDD) 为啥vals是慢的命令。 (Why is vals a slow command?)
// TODO(DDD) 这些标志位都是啥意思。 (What do these flag bits mean?)
std::unique_ptr<Cmd> ehvalsptr = std::make_unique<PKHValsCmd>(
    kCmdNamePKHVals, 2, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsSlow);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHVals, std::move(ehvalsptr)));

std::unique_ptr<Cmd> ehgetallptr = std::make_unique<PKHGetAllCmd>(
    kCmdNamePKHGetall, 2, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsSlow);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHGetall, std::move(ehgetallptr)));

std::unique_ptr<Cmd> ehscanptr = std::make_unique<PKHScanCmd>(
    kCmdNamePKHScan, -3, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsSlow);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHScan, std::move(ehscanptr)));
Review command flags for consistency
Please review the command flags for all PKHash commands to ensure consistency. Some observations:
- Most read operations use kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast.
- Most write operations use kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast.
- Some commands (e.g., PKHVals, PKHGetall, PKHScan) are marked as slow, while others with potentially similar complexity are marked as fast.
Ensure that the performance characteristics of each command are accurately reflected in their flags. If certain commands are indeed slower due to their implementation or typical use cases, consider adding comments explaining why they are marked as slow.
std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
    kCmdNamePKHTTL, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHTTL, std::move(ehttlptr)));
Correct command flags for PKHTTLCmd
The PKHTTL command is intended to return the time-to-live for a hash key, which is a read operation. However, it is currently flagged as a write command. Please change the flags to use kCmdFlagsRead to accurately reflect its functionality.
- std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
- kCmdNamePKHTTL, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
+ std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
+ kCmdNamePKHTTL, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
src/storage/src/redis_pkhashes.cc (Outdated)
if (ttl <= 0) {
  return Status::InvalidArgument("invalid expire time, must be >= 0");
}
Correct the error message regarding TTL value
The condition checks if ttl <= 0, but the error message states "must be >= 0", which includes zero. Since a TTL of zero is invalid, the error message should read "must be > 0" to be accurate.
Apply this diff to correct the error message:
if (ttl <= 0) {
- return Status::InvalidArgument("invalid expire time, must be >= 0");
+ return Status::InvalidArgument("invalid expire time, must be > 0");
}
if (timestamp <= 0) {
  rets->assign(numfields, 2);
  return Status::InvalidArgument("invalid expire time, must be >= 0");
}
Correct the error message regarding expiration timestamp
The condition checks if timestamp <= 0, but the error message says "must be >= 0", which includes zero. Since a timestamp of zero is invalid, the error message should state "must be > 0" to be accurate.
Apply this diff to correct the error message:
if (timestamp <= 0) {
rets->assign(numfields, 2);
- return Status::InvalidArgument("invalid expire time, must be >= 0");
+ return Status::InvalidArgument("invalid expire time, must be > 0");
}
@@ -0,0 +1,469 @@
// Copyright (c) 2015-present, Qihoo, Inc. All rights reserved.
Change the copyright year to 2024.
Force-pushed from 9d208a6 to 25c41e8 (compare)
Actionable comments posted: 34
🧹 Outside diff range and nitpick comments (16)
src/storage/include/storage/storage_define.h (1)
47-47: LGTM. Consider adding documentation for the new enum value.
The addition of kPKHashDataCF = 7 to the ColumnFamilyIndex enum is correct and aligns with the PR objectives. This will support the new PKHash functionality.
Consider adding a brief comment explaining the purpose of this new column family, similar to the existing comment for kMetaCF.
src/storage/src/base_value_format.h (2)
: LGTM! Consider adding a comment forkAll
.The additions to the
DataType
enum are well-structured and align with the PR objectives. The newkPKHashes
type supports the introduction of PKHash commands, andkAll
is a logical addition.Consider adding a brief comment for the
kAll
enum value to clarify its purpose, e.g.:kAll = 8, // Represents all data types, used for operations that apply to any type
Line range hint 57-61: LGTM! Consider adding a noexcept specifier.
The changes to the InternalValue destructor are good improvements:
- Making it virtual allows for proper cleanup in derived classes.
- The added memory management prevents potential memory leaks.
Consider adding the noexcept specifier to the destructor:
virtual ~InternalValue() noexcept {
  // ... existing code ...
}
This ensures that the destructor doesn't throw exceptions, which is a best practice for destructors.
include/pika_pkhash.h (1)
1-469: Overall, a well-structured and consistent implementation
The file implements a comprehensive set of command classes for PKHash operations, following a consistent structure and design pattern. The code demonstrates good adherence to object-oriented principles and separation of concerns. Minor issues with uninitialized member variables have been noted in previous comments.
Consider adding comments to explain the purpose of each command class and any complex logic within the methods. This would enhance code readability and maintainability.
include/pika_command.h (2)
141-158: LGTM! Consider grouping PKHash constants.
The addition of PKHash command constants is well-structured and follows the existing naming conventions. This will facilitate the implementation of PKHash functionality throughout the codebase.
For improved readability and consistency with other command groups, consider adding a comment line above these constants, similar to how other command groups are separated (e.g., "// Hash", "// List", etc.).
141-158: Summary: PKHash support added consistently.
The changes in this file lay the groundwork for PKHash functionality by adding necessary constants and a flag. These additions are consistent with the existing code structure and naming conventions.
As you continue implementing PKHash functionality:
- Ensure that all relevant command handlers and processors are updated to use these new constants and the kCmdFlagsPKHash flag.
- Update any command tables or registries to include the new PKHash commands.
- Consider adding unit tests specifically for PKHash command parsing and flag checking to ensure robustness of the new functionality.
Also applies to: 312-313
src/storage/src/redis_hashes.cc (3)
90-92: Improved error message formatting.
The multi-line format enhances readability. Consider using a string literal with embedded newlines for even better readability:
-return Status::InvalidArgument("WRONGTYPE, key: " + key.ToString() +
-                               ", expect type: " + DataTypeStrings[static_cast<int>(DataType::kHashes)] +
-                               ", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
+return Status::InvalidArgument(
+    "WRONGTYPE, key: " + key.ToString() + "\n" +
+    "expect type: " + DataTypeStrings[static_cast<int>(DataType::kHashes)] + "\n" +
+    "get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]);
153-155: Consistent improvement in error message formatting across multiple functions.
The multi-line formatting has been consistently applied to error messages throughout the file, enhancing readability. Consider applying the string-literal-with-embedded-newlines suggestion to all these instances for further improvement and consistency.
Also applies to: 192-194, 234-236, 289-291, 380-382, 463-465, 504-506
1153-1155: Further consistent improvement in error message formatting.
The multi-line formatting for error messages has been consistently applied to these additional instances. As suggested earlier, consider using string literals with embedded newlines for even better readability and consistency across all error messages in the file.
Also applies to: 1192-1194, 1228-1230, 1266-1268, 1304-1306
src/storage/src/redis.cc (1)
42-42: Remove unused code related to env_
The member env_ is commented out in the constructor (// env_ = rocksdb::Env::Instance();) and in the destructor (// delete env_;). If env_ is no longer needed, consider removing both the commented-out initialization and cleanup to keep the codebase clean.
src/storage/src/redis.h (1)
537-538: Address the TODO: separate environment for each RocksDB instance
The TODO comment suggests separating the environment for each RocksDB instance. Implementing this can enhance resource isolation and stability.
Would you like assistance in designing or implementing this change?
src/storage/include/storage/storage.h (1)
Line range hint 94-95: Fix the typo: 'invaild_keys' should be 'invalid_keys'
The member variable invaild_keys in the KeyInfo struct is misspelled. It should be invalid_keys. This typo occurs in the declaration and usage within the struct.
Apply this diff to correct the typo:
struct KeyInfo {
  uint64_t keys = 0;
  uint64_t expires = 0;
  uint64_t avg_ttl = 0;
- uint64_t invaild_keys = 0;
+ uint64_t invalid_keys = 0;
  KeyInfo() : keys(0), expires(0), avg_ttl(0),
-             invaild_keys(0) {}
+             invalid_keys(0) {}
  KeyInfo(uint64_t k, uint64_t e, uint64_t a, uint64_t i)
-     : keys(k), expires(e), avg_ttl(a), invaild_keys(i) {}
+     : keys(k), expires(e), avg_ttl(a), invalid_keys(i) {}
Line range hint
199-211
: Refactor Suggestion: Eliminate Code Duplication inMGet
FunctionsThe
Storage::MGet
andStorage::MGetWithTTL
functions contain similar loops iterating over the keys to retrieve values. To improve maintainability and reduce code duplication, consider extracting the common logic into a shared helper function that both methods can utilize.Also applies to: 218-231
2016-2017: Typographical error in function name EnableDymayticOptions
The function name EnableDymayticOptions appears to contain a typographical error. It should likely be EnableDynamicOptions for clarity and consistency with naming conventions.
Apply this diff to correct the typo:
-Status Storage::EnableDymayticOptions(const OptionType& option_type, const std::string& db_type,
+Status Storage::EnableDynamicOptions(const OptionType& option_type, const std::string& db_type,
1991-1991: Nitpick: add a space for clarity in the log message
The log message concatenates strings without spaces, which may reduce readability. Consider adding a space or delimiter for clarity.
Apply this diff:
- LOG(WARNING) << "Invalid DB Index: " << index << "total: " << db_instance_num_;
+ LOG(WARNING) << "Invalid DB Index: " << index << ", total: " << db_instance_num_;
- LOG(WARNING) << "Invalid DB Index: " << index << "total: " << db_instance_num_; + LOG(WARNING) << "Invalid DB Index: " << index << ", total: " << db_instance_num_;src/pika_command.cc (1)
966-966: Correct the grammatical error in the comment
The comment contains a grammatical error and should be corrected for clarity.
Apply this diff:
- Clear(); // Clear cmd, Derived class can has own implement
+ Clear(); // Clear command; derived classes can have their own implementations
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (19)
- include/acl.h (1 hunks)
- include/pika_command.h (5 hunks)
- include/pika_pkhash.h (1 hunks)
- src/pika_client_conn.cc (1 hunks)
- src/pika_command.cc (10 hunks)
- src/pika_pkhash.cc (1 hunks)
- src/storage/include/storage/storage.h (13 hunks)
- src/storage/include/storage/storage_define.h (2 hunks)
- src/storage/src/base_filter.h (6 hunks)
- src/storage/src/base_value_format.h (5 hunks)
- src/storage/src/pkhash_data_value_format.h (1 hunks)
- src/storage/src/redis.cc (12 hunks)
- src/storage/src/redis.h (14 hunks)
- src/storage/src/redis_hashes.cc (29 hunks)
- src/storage/src/redis_pkhashes.cc (1 hunks)
- src/storage/src/redis_strings.cc (39 hunks)
- src/storage/src/storage.cc (28 hunks)
- src/storage/tests/hashes_test.cc (3 hunks)
- src/storage/tests/pkhashes_test.cc (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- include/acl.h
- src/pika_client_conn.cc
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc
[performance] 565-565: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 774-774: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 857-857: Searching before insertion is not necessary.
(stlFindInsert)
src/storage/src/redis_strings.cc
[error] 917-917: Unmatched '{'. Configuration
(syntaxError)
🔇 Additional comments (45)
src/storage/include/storage/storage_define.h (1)
Line range hint
1-135
: Summary of changes in storage_define.h
- Added `kPKHashDataCF = 7` to the `ColumnFamilyIndex` enum, which supports the new PKHash functionality introduced in this PR.
- Minor formatting improvements in the `SeekUserkeyDelim` function.
These changes are consistent with the PR objectives and related changes in other files. They lay the groundwork for the new PKHash commands and data management.
35-35
: LGTM!DataTypeStrings
array correctly updated.The
DataTypeStrings
array has been properly updated to include the new data types. The additions of "pkhash", "none", and "all" align correctly with the enum changes.include/pika_pkhash.h (1)
37-37
: Verify the emptyClear()
methodThe
Clear()
method is currently empty. Please confirm if this is intentional or if it should be implemented to reset the internal state of the command object.src/storage/src/redis_hashes.cc (5)
14-15
: Appropriate header inclusions added.
The addition of these headers likely provides necessary definitions for key and value formatting used in hash operations.
855-856
: Improved function signature formatting.
Splitting the long function signature across multiple lines enhances readability. This change is in line with good coding practices for handling long lines of code.
878-880
: Consistent improvement in function signature formatting.
Long function signatures have been consistently split across multiple lines throughout the file. This change enhances code readability and is in line with good coding practices for handling long lines of code.
Also applies to: 937-938, 998-999, 1068-1069
1350-1351
: Improved code formatting for long line.
Splitting the long line of code across multiple lines enhances readability. This change is in line with good coding practices for handling long lines of code.
Line range hint
1-1371
: Overall code readability improvements.
The changes in this file consistently enhance code readability through improved formatting of error messages, function signatures, and long lines of code. While no functional changes were made, these formatting improvements align with good coding practices and make the code easier to read and maintain. Consider applying the suggested further improvements for even better consistency and readability across the file.
src/storage/tests/hashes_test.cc (3)
357-357
: Verify the removal of assertion for HIncrbyfloat test
The assertion for the expected value of
HIncrbyfloat
operation has been commented out. This reduces the strictness of the test case.Could you please clarify the reasoning behind removing this assertion? If this is intentional, consider adding a comment explaining why the exact value is no longer being checked.
#!/bin/bash # Check if there are any other changes to HIncrbyfloat tests or implementation rg -A 5 "HIncrbyfloat"
Line range hint
357-391
: Consider the impact of relaxed HIncrbyfloat testing
The removal of specific value assertions in both
HIncrbyfloat
test cases reduces the precision of these tests. While this might be intentional, it's important to ensure that theHIncrbyfloat
functionality is still adequately tested.Please review if there are alternative ways to test the
HIncrbyfloat
function that don't rely on exact value matching. For example, could we test for a range of acceptable values or other properties of the output? Additionally, ensure that these changes don't leave any critical behavior untested.
#!/bin/bash
# Check for any other tests related to HIncrbyfloat
rg "HIncrbyfloat" src/storage/tests/
391-391
: Confirm the intention of commenting out HIncrbyfloat assertion
Similar to the previous change, another assertion for the
HIncrbyfloat
operation has been commented out.Is this part of a larger change in how
HIncrbyfloat
is being tested? If so, it might be beneficial to update the test description or add comments explaining the new testing strategy.src/storage/src/base_filter.h (6)
16-19
: Inclusion of necessary header files
The added
#include
statements ensure that the required base key and value formats and debugging utilities are accessible within this file, improving code clarity and functionality.
58-58
: Enhanced debug logging for stream meta type
The debug statement now includes the
version
information for stream meta types, which provides more detailed logging and aids in troubleshooting.
62-64
: Corrected debug logging format for list meta type
The debug statement has been reformatted to ensure that the format specifiers align with the corresponding arguments, resulting in accurate and clear logging output.
113-113
: Improved constructor parameter alignment in `BaseDataFilter`
The constructor parameters are now clearly aligned, enhancing readability and maintainability of the code.
156-158
: Verify correct parsing and handling of `PKHashes`
meta valuesThe inclusion of
DataType::kPKHashes
in the conditional ensures thatPKHashes
are handled similarly to other data types likeHashes
,Sets
, andZSets
. Please verify thatParsedBaseMetaValue
properly parsesPKHashes
meta values and that TTL (Etime
) and version management function correctly for this new data type.
227-228
: Updated `BaseDataFilterFactory` constructor to include `DataType` parameter
enum DataType type
parameter to the constructor, the factory can now create data filters specific to the data type, enhancing the modularity and reusability of the filter factory.src/storage/src/redis.cc (4)
94-95
: Confirm Correct DataType for HashesDataFilterFactory
102-111
: Confirm Correct DataType for 'pika_hash_data_cf' Compaction Filter Factory
167-168
: Update Column Family Indices After Adding New Column Family
493-494
: Correct Method Invocation of 'GetMapProperty'src/storage/src/redis.h (3)
160-161
: Parameters in 'BitOp' method are consistent and well-defined
The updated
BitOp
method signature is consistent and aligns with the expected parameters.
260-300
: Ensure unit tests are added for new PK Hash methods
The addition of new PK Hash methods (e.g.,
PKHExpire
,PKHGet
,PKHSet
, etc.) enhances the Redis class functionality. Please ensure comprehensive unit tests are implemented to validate their correctness and prevent future regressions.Would you like assistance in generating unit tests for these methods?
468-474
: Update 'ExpectedStale' function to include 'kPKHashes'
Adding
DataType::kPKHashes
to theExpectedStale
function ensures that PK Hashes are properly checked for staleness. This update aligns with the new PK Hash functionality.src/storage/src/redis_pkhashes.cc (8)
77-81
: Translate code comments to English for consistency
Several code comments are written in Chinese (e.g., lines 77-81, 87-88, 136, 480-481). To maintain consistency and enhance readability for all contributors, please translate these comments into English.
Also applies to: 87-88, 136-136, 480-481
155-157
: Correct the error message regarding TTL value
The condition checks if
ttl <= 0
, but the error message states "must be >= 0", which includes zero. Since a TTL of zero is invalid, the error message should read "must be > 0" to be accurate.
217-220
: Correct the error message regarding expiration timestamp
Similarly, the condition checks if
timestamp <= 0
, but the error message should state "must be > 0" to accurately reflect the invalid input.
565-568
: Eliminate redundant check before inserting into `field_set`
The explicit search using `field_set.find(field)` before insertion is unnecessary. You can rely on the result of the `insert` operation to determine if the field was newly inserted, improving performance.
🧰 Tools
🪛 cppcheck
[performance] 565-565: Searching before insertion is not necessary.
(stlFindInsert)
774-777
: Eliminate redundant check before inserting into `fields`
Similarly, in this loop, you can remove the explicit search before insertion into `fields` to optimize the code.
🧰 Tools
🪛 cppcheck
[performance] 774-774: Searching before insertion is not necessary.
(stlFindInsert)
857-860
: Eliminate redundant check before inserting into `fields`
This redundancy is also present here. Refactoring to remove the unnecessary check can enhance performance.
🧰 Tools
🪛 cppcheck
[performance] 857-857: Searching before insertion is not necessary.
(stlFindInsert)
680-765
: Address unused `ttl` parameter in `PKHIncrby`
The
ttl
parameter in thePKHIncrby
function is not used anywhere in the function. If applying a TTL is intended, please modify the function to use this parameter. Otherwise, consider removing it to avoid confusion.
111-113
: Consider updating TTL when values are equal
In the
PKHSet
function, if the existing value is equal to the new value, the function returns early without updating the TTL or other metadata. If updating the TTL is desired even when the value hasn't changed, you should adjust the logic to handle this case.src/storage/include/storage/storage.h (4)
178-178
: Restrict access to theStorage()
default constructorThe default constructor
Storage()
is intended for unit tests only. To prevent unintended use in production code, consider making it private or protected.
416-427
: Remove redundantnumfields
parameter in PKH methodsThe
numfields
parameter in methods likePKHExpire
andPKHExpireat
is redundant since the number of fields can be obtained fromfields.size()
. Removing this parameter will simplify the method signatures and prevent potential inconsistencies.Apply the following changes:
For
PKHExpire
:Status PKHExpire(const Slice& key, int32_t ttl, - int32_t numfields, const std::vector<std::string>& fields, + const std::vector<std::string>& fields, std::vector<int32_t>* rets);For
PKHExpireat
:Status PKHExpireat(const Slice& key, int64_t timestamp, - int32_t numfields, const std::vector<std::string>& fields, + const std::vector<std::string>& fields, std::vector<int32_t>* rets);And similarly for other related methods.
1008-1008
: Use a pointer for the output parameter inXInfo
To maintain consistency in the codebase, the output parameter
result
inXInfo
should be passed as a pointer.Apply this diff:
Status XInfo(const Slice& key, StreamInfoResult& result); +Status XInfo(const Slice& key, StreamInfoResult* result);
And adjust the implementation accordingly.
1153-1154
: Correct the spelling ofEnableDymayticOptions
toEnableDynamicOptions
The method name
EnableDymayticOptions
contains a typographical error. Please correct it toEnableDynamicOptions
to improve code readability and prevent confusion.Apply this diff:
Status EnableDymayticOptions(const OptionType& option_type, const std::string& db_type, +Status EnableDynamicOptions(const OptionType& option_type, const std::string& db_type, const std::unordered_map<std::string, std::string>& options);
Ensure all references to this method are updated accordingly.
src/storage/tests/pkhashes_test.cc (5)
27-33
: Check the return status of `db.Open` in `SetUp()`
This issue was previously noted and still applies: the return status of
db.Open(storage_options, path);
is assigned tos
, but there is no check to verify if the operation was successful. Ignoring the status may lead to unexpected behavior if the database fails to open.
95-103
: Unused variabletype_status
and incorrect error checking inmake_expired
This issue was previously noted and still applies: the variable
type_status
is declared but not populated. The condition!type_status[storage::DataType::kHashes].ok()
is invalid becausetype_status
is empty, leading to improper error handling in themake_expired
function.
826-828
: Incorrect error checking using uninitializedtype_status
This issue was previously noted and still applies: the variable
type_status
is declared but not populated. The assertionASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
will not function correctly sincetype_status
is empty, leading to invalid error checking.
521-524
: Incorrect error checking using uninitializedtype_status
inPKHMSet
testThis issue was previously noted and still applies: in the
PKHMSet
test, the variabletype_status
is declared but not used effectively. The conditionASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
is invalid becausetype_status
is empty and uninitialized.
386-405
:PKHGetTest
function is well-implementedThe
PKHGetTest
function correctly tests thePKHGet
operation, including scenarios where the key or field does not exist. The assertions appropriately verify the expected outcomes.src/storage/src/redis_strings.cc (1)
1499-1501
: Verify the use offmt::format
in loggingThe
LOG(INFO)
statement usesfmt::format
, which may not be compatible with the logging macro if it doesn't support string formatting.Please ensure that this usage is correct. If the logging macro doesn't support formatted strings directly, consider formatting the string before logging:
auto formatted_message = fmt::format("[key : {:<30}] [value : {:<30}] [timestamp : {:<10}] [version : {}] [survival_time : {}]", parsed_strings_key.Key().ToString(), parsed_strings_value.UserValue().ToString(), parsed_strings_value.Etime(), parsed_strings_value.Version(), survival_time); LOG(INFO) << formatted_message;src/storage/src/storage.cc (3)
304-308
: Good Addition: Validate Arguments inBitOp
FunctionThe added check correctly ensures that when performing the
BITOP NOT
operation, there is exactly one source key provided. This prevents invalid usage and potential errors.
1398-1398
: Verify the Logic of the Condition inScan
FunctionThe condition within the
if
statement may not correctly determine when to stop scanning. Specifically, the comparisonmiter.Key().compare(prefix) <= 0
could lead to unintended results whenprefix
is empty or when keys precede the prefix lexicographically. Please verify that the logic accurately reflects the intended behavior.
2036-2042
: Reminder: Include New Column Family Handles for PKHashThe addition of
pk_hash_cfhds
and their inclusion into thecfhds
vector is correct. This ensures thatEnableAutoCompaction
considers the new PKHash column families.
  bool zero_ahead = false;
  for (int i = 0; i < length; i++) {
    if (ptr[i] == kNeedTransformCharacter && zero_ahead) {
      return ptr + i + 1;
    }
-   //TODO: handle invalid format
-   return ptr;
    zero_ahead = ptr[i] == kNeedTransformCharacter;
  }
+ // TODO: handle invalid format
+ return ptr;
💡 Codebase verification
TODO Comment Lacks an Associated Issue
- The TODO comment
// TODO: handle invalid format
insrc/storage/include/storage/storage_define.h
is not linked to any existing issue.
🔗 Analysis chain
Formatting looks good. Consider addressing the TODO comment.
The formatting changes improve readability and consistency with the rest of the codebase.
Regarding the TODO comment about handling invalid format:
- Can you provide more context on what constitutes an "invalid format" in this case?
- Is there a ticket or issue tracking this TODO item?
To help investigate this further, we can run the following script:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for related TODOs or issues
echo "Searching for related TODOs:"
rg "TODO.*invalid format" --type cpp
echo "Searching for related issues:"
gh issue list --search "in:title invalid format"
Length of output: 303
src/storage/src/base_value_format.h
Outdated
  constexpr int DataTypeNum = int(DataType::kNones);

- constexpr char DataTypeTag[] = { 'k', 'h', 's', 'l', 'z', 'x', 'n', 'a'};
- constexpr char* DataTypeStrings[] = { "string", "hash", "set", "list", "zset", "streams", "none", "all"};
+ constexpr char DataTypeTag[] = {'k', 'h', 's', 'l', 'z', 'x', 'n', 'a'};
Update DataTypeTag
array to include new enum values.
The update to DataTypeNum
is correct. However, the DataTypeTag
array should be extended to include tags for the new enum values kPKHashes
and kAll
.
Please update the DataTypeTag
array as follows:
constexpr char DataTypeTag[] = {'k', 'h', 's', 'l', 'z', 'x', 'p', 'n', 'a'};
Where 'p' represents kPKHashes
and 'a' represents kAll
.
if (value_->size() >= kPKHashDataValueSuffixLength) {
  user_value_ = rocksdb::Slice(value_->data(), value_->size() - kPKHashDataValueSuffixLength);
  memcpy(reserve_, value_->data() + user_value_.size(), kSuffixReserveLength);
  ctime_ = DecodeFixed64(value_->data() + user_value_.size() + kSuffixReserveLength);
  etime_ = DecodeFixed64(value_->data() + user_value_.size() + kSuffixReserveLength + kTimestampLength);
}
🛠️ Refactor suggestion
Refactor duplicate code in constructors to improve maintainability
The constructors of ParsedPKHashDataValue
contain duplicate code for parsing the value. Consider extracting the common parsing logic into a private method to enhance readability and maintainability.
Here's how you could refactor the code:
Add a private method to handle parsing:
private:
void ParseValue(const char* data, size_t size) {
if (size >= kPKHashDataValueSuffixLength) {
user_value_ = rocksdb::Slice(data, size - kPKHashDataValueSuffixLength);
memcpy(reserve_, data + user_value_.size(), kSuffixReserveLength);
ctime_ = DecodeFixed64(data + user_value_.size() + kSuffixReserveLength);
etime_ = DecodeFixed64(data + user_value_.size() + kSuffixReserveLength + kTimestampLength);
}
}
Modify the constructors to use the new method:
explicit ParsedPKHashDataValue(std::string* value) : ParsedInternalValue(value) {
- if (value_->size() >= kPKHashDataValueSuffixLength) {
- user_value_ = rocksdb::Slice(value_->data(), value_->size() - kPKHashDataValueSuffixLength);
- memcpy(reserve_, value_->data() + user_value_.size(), kSuffixReserveLength);
- ctime_ = DecodeFixed64(value_->data() + user_value_.size() + kSuffixReserveLength);
- etime_ = DecodeFixed64(value_->data() + user_value_.size() + kSuffixReserveLength + kTimestampLength);
- }
+ ParseValue(value_->data(), value_->size());
}
explicit ParsedPKHashDataValue(const rocksdb::Slice& value) : ParsedInternalValue(value) {
- if (value.size() >= kPKHashDataValueSuffixLength) {
- user_value_ = rocksdb::Slice(value.data(), value.size() - kPKHashDataValueSuffixLength);
- memcpy(reserve_, value.data() + user_value_.size(), kSuffixReserveLength);
- ctime_ = DecodeFixed64(value.data() + user_value_.size() + kSuffixReserveLength);
- etime_ = DecodeFixed64(value.data() + user_value_.size() + kSuffixReserveLength + kTimestampLength);
- }
+ ParseValue(value.data(), value.size());
}
Also applies to: 75-80
const size_t kPKHashDataValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
};
Add static
keyword to class constant to ensure correct compilation
Similarly, the member variable kPKHashDataValueSuffixLength
should be declared as static const
to prevent compilation errors. This change indicates that the value is constant and shared among all instances of the class.
Apply this diff to fix the issue:
private:
- const size_t kPKHashDataValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
+ static const size_t kPKHashDataValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-  const size_t kPKHashDataValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
+  static const size_t kPKHashDataValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
 };
const size_t kDefaultValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
};
Add static
keyword to class constant to ensure correct compilation
The member variable kDefaultValueSuffixLength
should be declared as static const
because it is a class-level constant. Non-static const
members require initialization in the constructor, which is not provided here. Making it static
ensures proper compilation and reflects that it is shared across all instances of the class.
Apply this diff to fix the issue:
private:
- const size_t kDefaultValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
+ static const size_t kDefaultValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
📝 Committable suggestion
-  const size_t kDefaultValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
+  static const size_t kDefaultValueSuffixLength = kSuffixReserveLength + kTimestampLength * 2;
 };
src/storage/src/redis_strings.cc
Outdated
<<<<<<< HEAD
      return HashesExpire(key, ttl_millsec, std::move(meta_value));
=======
      return HashesExpire(key, ttl, std::move(meta_value));
    case DataType::kPKHashes:
      return PKHashesExpire(key, ttl, std::move(meta_value));
>>>>>>> 9d208a64 (feat: add pkhash tests and hash basic cmd)
Resolve merge conflicts in the Expire
method
There is a merge conflict in the Expire
method related to handling hash expirations, including PKHashes
.
Please resolve the conflict:
<<<<<<< HEAD
return HashesExpire(key, ttl_millsec, std::move(meta_value));
=======
return HashesExpire(key, ttl, std::move(meta_value));
case DataType::kPKHashes:
return PKHashesExpire(key, ttl, std::move(meta_value));
>>>>>>> 9d208a64 (feat: add pkhash tests and hash basic cmd)
Integrate the PKHashesExpire
case if it's part of the new functionality.
Committable suggestion was skipped due to low confidence.
@@ -1344,8 +1452,7 @@ Status Storage::PKScanRange(const DataType& data_type, const Slice& key_start, c
     miter.Seek(temp);
   }

-  while (miter.Valid() && limit > 0 &&
-         (end_no_limit || miter.Key().compare(key_end.ToString()) <= 0)) {
+  while (miter.Valid() && limit > 0 && (end_no_limit || miter.Key().compare(key_end.ToString()) <= 0)) {
Potential Off-by-One Error in Loop Condition
In the PKScanRange
function, the loop condition checks if miter.Key().compare(key_end.ToString()) <= 0
. Ensure that this comparison correctly includes or excludes the key_end
boundary as intended. Off-by-one errors could result in missing or extra keys in the scan results.
Please verify and adjust the condition if necessary.
@@ -1391,8 +1498,7 @@
     miter.SeekForPrev(base_key_start.Encode().ToString());
   }

-  while (miter.Valid() && limit > 0 &&
-         (end_no_limit || miter.Key().compare(key_end.ToString()) >= 0)) {
+  while (miter.Valid() && limit > 0 && (end_no_limit || miter.Key().compare(key_end.ToString()) >= 0)) {
Potential Off-by-One Error in Reverse Scan Loop Condition
In the PKRScanRange
function, the loop condition uses miter.Key().compare(key_end.ToString()) >= 0
. Verify that this condition correctly handles the inclusion or exclusion of the key_end
boundary in reverse order. Off-by-one errors here could affect the accuracy of the reverse scan results.
Please review and adjust the condition if required.
std::copy(std::move_iterator<Iter>(vec.begin()), std::move_iterator<Iter>(vec.end()),
          std::insert_iterator<Uset>(member_set, member_set.begin()));
}
Potential Issue with Moving Elements into unordered_set
Using std::move_iterator
with std::copy
to insert elements into an unordered_set
may lead to undefined behavior. Moved-from elements may not remain valid for hashing and equality checks required by the set. Consider copying the elements instead of moving them.
Apply this diff to fix the issue:
- std::copy(std::move_iterator<Iter>(vec.begin()), std::move_iterator<Iter>(vec.end()),
+ std::copy(vec.begin(), vec.end(),
std::insert_iterator<Uset>(member_set, member_set.begin()));
📝 Committable suggestion
-  std::copy(std::move_iterator<Iter>(vec.begin()), std::move_iterator<Iter>(vec.end()),
-            std::insert_iterator<Uset>(member_set, member_set.begin()));
+  std::copy(vec.begin(), vec.end(),
+            std::insert_iterator<Uset>(member_set, member_set.begin()));
 }
std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
    kCmdNamePKHExpiretime, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHExpiretime, std::move(ehexpiretimeptr)));
Correct command flags for PKHExpiretimeCmd
The PKHExpiretime
command retrieves the expiration time of a hash key, which is a read operation. Currently, it is flagged as a write command. Please change the flags to use kCmdFlagsRead
to accurately reflect its functionality.
Apply this diff:
- std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
- kCmdNamePKHExpiretime, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
+ std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
+ kCmdNamePKHExpiretime, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
📝 Committable suggestion
- std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
-     kCmdNamePKHExpiretime, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
+ std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
+     kCmdNamePKHExpiretime, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
  cmd_table->insert(std::pair<std::string, std::unique_ptr<Cmd>>(kCmdNamePKHExpiretime, std::move(ehexpiretimeptr)));
25c41e8 to 801f122 (Compare)
Actionable comments posted: 21
🧹 Outside diff range and nitpick comments (28)
include/pika_pkhash.h (10)
37-37
: Consider implementing theClear
methodThe
Clear
method is currently empty. If there's no need to reset the internal state, consider removing this method or adding a comment explaining why it's intentionally left empty.
31-32
: Unused member variablesThe
numfields_
andfields_
member variables are declared but not used in any of the defined methods. Consider removing them if they're not needed, or add a comment explaining their purpose if they're used in the implementation file.
63-63
: Consider implementing theClear
methodThe
Clear
method is currently empty. If there's no need to reset the internal state, consider removing this method or adding a comment explaining why it's intentionally left empty.
57-58
: Unused member variablesThe
numfields_
andfields_
member variables are declared but not used in any of the defined methods. Consider removing them if they're not needed, or add a comment explaining their purpose if they're used in the implementation file.
88-88
: Consider implementing theClear
methodThe
Clear
method is currently empty. If there's no need to reset the internal state, consider removing this method or adding a comment explaining why it's intentionally left empty.
81-83
: Unused member variablesThe
ttl_
,numfields_
, andfields_
member variables are declared but not used in any of the defined methods. Consider removing them if they're not needed, or add a comment explaining their purpose if they're used in the implementation file.
114-114
: Consider implementing theClear
methodThe
Clear
method is currently empty. If there's no need to reset the internal state, consider removing this method or adding a comment explaining why it's intentionally left empty.
107-109
: Unused member variablesThe
ttl_
,numfields_
, andfields_
member variables are declared but not used in any of the defined methods. Consider removing them if they're not needed, or add a comment explaining their purpose if they're used in the implementation file.
140-140
: Consider implementing theClear
methodThe
Clear
method is currently empty. If there's no need to reset the internal state, consider removing this method or adding a comment explaining why it's intentionally left empty.
133-135
: Unused member variablesThe
ttl_
,numfields_
, andfields_
member variables are declared but not used in any of the defined methods. Consider removing them if they're not needed, or add a comment explaining their purpose if they're used in the implementation file.src/storage/src/redis.cc (1)
296-495
: Improved RocksDB statistics gathering
The GetRocksDBInfo function has been significantly enhanced to provide more comprehensive RocksDB statistics. This improvement will offer better insights into the database's performance and state.
While the changes are valuable, the function has become quite long. Consider refactoring it into smaller, more focused functions for better maintainability. For example:
- Create separate functions for different categories of statistics (e.g.,
WriteMemtableStats
,WriteCompactionStats
, etc.).- Use a map of property names to metric names to reduce repetitive code in the
write_property
andwrite_ticker_count
loops.Example refactoring:
void Redis::GetRocksDBInfo(std::string& info, const char* prefix) { std::ostringstream string_stream; string_stream << "#" << prefix << "RocksDB" << "\r\n"; WriteMemtableStats(string_stream, prefix); WriteCompactionStats(string_stream, prefix); WriteKeyStats(string_stream, prefix); // ... other categories WriteCFStats(string_stream, prefix); info.append(string_stream.str()); } void Redis::WriteMemtableStats(std::ostringstream& stream, const char* prefix) { const std::vector<std::pair<rocksdb::Slice, const char*>> memtable_properties = { {rocksdb::DB::Properties::kNumImmutableMemTable, "num_immutable_mem_table"}, {rocksdb::DB::Properties::kNumImmutableMemTableFlushed, "num_immutable_mem_table_flushed"}, // ... other memtable properties }; for (const auto& [property, metric] : memtable_properties) { WriteAggregatedIntProperty(stream, property, metric, prefix); } } // ... other category functions void Redis::WriteAggregatedIntProperty(std::ostringstream& stream, const rocksdb::Slice& property, const char* metric, const char* prefix) { uint64_t value = 0; db_->GetAggregatedIntProperty(property, &value); stream << prefix << metric << ':' << value << "\r\n"; }This refactoring would make the code more modular and easier to maintain.
src/storage/src/redis.h (2)
260-300
: LGTM: New PK Hash methods addedThe new PK Hash methods (PKHExpire, PKHGet, PKHSet, PKHMSet, etc.) provide a comprehensive set of operations for PK Hashes. The naming conventions and parameter choices are consistent with existing Redis-style methods, and the inclusion of TTL parameters for some methods is a useful feature.
Consider adding brief documentation comments for each new method to explain their purpose and any unique behaviors, especially for methods that differ from standard Redis hash operations.
537-538
: Remove commented-out code and create a task for env separationThe
env_
member variable has been commented out with a TODO comment. Instead of keeping commented-out code in the header file, it's better to remove it entirely and create a separate task or issue to track the planned separation of env for each RocksDB instance.Consider removing these lines and creating a GitHub issue to track the planned work:
- // TODO(wangshaoyi): seperate env for each rocksdb instance
- // rocksdb::Env* env_ = nullptr;
src/storage/src/redis_strings.cc (1)
8-8
: Remove unnecessary includeThe
<iostream>
header is included but not used in this file. Removing it can slightly improve compilation time and reduce potential naming conflicts.Apply this diff to remove the unnecessary include:
-#include <iostream>
src/storage/src/pkhash_data_value_format.h (1)
47-47
: Translate or address the TODO comment.The TODO comment at line 47 is in Chinese. For consistency and to assist all team members, please translate it to English and address the pending task.
Would you like assistance in addressing this TODO or opening a GitHub issue to track it?
src/storage/src/redis_pkhashes.cc (5)
477-477
: Translate code comments to English for consistencyThe comment on line 477 is written in Chinese:
// 1. 判断类型是否匹配和key是否过期。
For consistency and readability, please translate it to English.
Apply this diff to translate the comment:
-// 1. 判断类型是否匹配和key是否过期。
+// 1. Check if the type matches and whether the key has expired.
694-694
: Remove unused variablemeta_value_buf
The variable
meta_value_buf
is declared but not used in this scope. Removing it will clean up the code.Apply this diff to remove the unused variable:
 char value_buf[32] = {0};
-char meta_value_buf[4] = {0};
236-236
: Unused variableis_stale
The variable
is_stale
is declared but not used in thePKHExpireat
function. Consider removing it to clean up the code.Apply this diff to remove the unused variable:
-bool is_stale = false;
348-348
: Unused variableis_stale
Similarly, the variable
is_stale
in thePKHTTL
function is declared but not used.Apply this diff:
-bool is_stale = false;
762-762
: Check the status after database write operationAfter calling
db_->Write
, it's good practice to check the returned status to handle any potential write errors.Add error handling after the write operation:
 s = db_->Write(default_write_options_, &batch);
+if (!s.ok()) {
+  return s;
+}
 UpdateSpecificKeyStatistics(DataType::kPKHashes, key.ToString(), statistic);
 return s;
124-127
: Useint64_t
forttl
to support larger time valuesConsider changing
ttl
fromint32_t
toint64_t
to support larger TTL values and maintain consistency with other time-related variables.Apply this diff:
 struct FieldValueTTL {
   std::string field;
   std::string value;
-  int32_t ttl;
+  int64_t ttl;
   bool operator==(const FieldValueTTL& fv) const {
     return (fv.field == field && fv.value == value && fv.ttl == ttl);
   }
 };
442-442
: Useint64_t
forttl
default parameter inPKHIncrby
Changing the
ttl
parameter fromint32_t
toint64_t
allows for larger TTL values and enhances consistency.Apply this diff:
-Status PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int32_t ttl = 0);
+Status PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int64_t ttl = 0);
139-139
: Typo in TODO commentThere's a typo in the TODO comment on line 139: "fisrt" should be "first".
Apply this change to correct the typo:
- // TODO (DDD: cmd basic test cases fisrt)
+ // TODO (DDD: cmd basic test cases first)
576-576
: Inconsistent language in commentThe comment on line 576 is in Chinese, while the rest of the codebase is in English. For consistency, please translate it to English.
Apply this change:
- // 从这里开始有问题。
+ // There is an issue starting from here.
159-160
: Check the return status ofGetCurrentTime
The function
rocksdb::Env::GetCurrentTime
may return a status that should be checked to ensure it succeeded.If
GetCurrentTime
returns a status, capture and assert it:
- rocksdb::Env::Default()->GetCurrentTime(&unix_time);
+ rocksdb::Status s_time = rocksdb::Env::Default()->GetCurrentTime(&unix_time);
+ ASSERT_TRUE(s_time.ok());
If it doesn't return a status, this can be ignored.
624-625
: Add assertion afterSetex
callIn the
PKHSetTest
, when callingdb.Setex
, you assign the status tos
but do not check its value immediately.Consider adding an assertion to verify the operation was successful:
 s = db.Setex("GP4_HSET_KEY", "STRING_VALUE_WITH_TTL", 1);
+ ASSERT_TRUE(s.ok());
345-347
: Add assertion to verify hash lengthIn the
PKHSetexTest
, after setting a field with expiration, you retrieve the length of the hash but do not assert its value.Consider adding an assertion to confirm that the hash length is as expected:
 s = db.PKHLen("GP1_HSET_KEY", &ret);
 ASSERT_TRUE(s.ok());
+ ASSERT_EQ(ret, 1);
280-281
: Remove unused variablerets
In the
PKHPersistTest
, the variablerets
is declared but not used after line 280.If
rets
is unnecessary, consider removing it to clean up the code:- std::vector<int32_t> rets;
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (19)
- include/acl.h (1 hunks)
- include/pika_command.h (5 hunks)
- include/pika_pkhash.h (1 hunks)
- src/pika_client_conn.cc (1 hunks)
- src/pika_command.cc (10 hunks)
- src/pika_pkhash.cc (1 hunks)
- src/storage/include/storage/storage.h (13 hunks)
- src/storage/include/storage/storage_define.h (2 hunks)
- src/storage/src/base_filter.h (6 hunks)
- src/storage/src/base_value_format.h (5 hunks)
- src/storage/src/pkhash_data_value_format.h (1 hunks)
- src/storage/src/redis.cc (12 hunks)
- src/storage/src/redis.h (14 hunks)
- src/storage/src/redis_hashes.cc (29 hunks)
- src/storage/src/redis_pkhashes.cc (1 hunks)
- src/storage/src/redis_strings.cc (39 hunks)
- src/storage/src/storage.cc (28 hunks)
- src/storage/tests/hashes_test.cc (3 hunks)
- src/storage/tests/pkhashes_test.cc (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
- include/acl.h
- src/pika_client_conn.cc
- src/storage/include/storage/storage_define.h
- src/storage/src/base_filter.h
- src/storage/src/redis_hashes.cc
- src/storage/tests/hashes_test.cc
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc
[performance] 565-565: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 774-774: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 857-857: Searching before insertion is not necessary.
(stlFindInsert)
src/storage/src/redis_strings.cc
[error] 917-917: Unmatched '{'. Configuration
(syntaxError)
🔇 Additional comments (55)
src/storage/src/base_value_format.h (4)
57-57
: Approved: Virtual destructor forInternalValue
class.The change to make the destructor virtual is a good practice. It ensures proper cleanup of derived classes when deleting through a base class pointer.
87-87
: Approved: Formatting improvements inInternalValue
class.The formatting changes to the
protected
access specifier improve code readability and maintain consistency with the overall coding style.
Line range hint
99-164
: Approved: Formatting improvements inParsedInternalValue
class.The formatting changes to the
public
access specifier and theIsValid()
method improve code readability and maintain consistency with the overall coding style.
21-35
:⚠️ Potential issueUpdate
DataTypeTag
array to include new enum values.The
DataType
enum andDataTypeStrings
array have been correctly updated to include the newkPKHashes
andkAll
values. However, theDataTypeTag
array still needs to be updated to include tags for these new enum values.Please update the
DataTypeTag
array as follows:
constexpr char DataTypeTag[] = {'k', 'h', 's', 'l', 'z', 'x', 'p', 'n', 'a'};
Where 'p' represents
kPKHashes
and 'a' representskAll
.include/pika_pkhash.h (2)
1-469
: Overall review summaryThe file introduces a comprehensive set of command classes for PKHash operations. The structure is consistent across most classes, which is good for maintainability. However, there are a few minor issues to address:
- Several classes have empty
Clear
methods. Consider removing them if not needed or adding comments to explain their purpose.- Some classes have uninitialized member variables (
is_force_
inPKHLenCmd
,is_wt_
inPKHGetAllCmd
). These should be initialized to prevent undefined behavior.- The
PKHScanCmd
class'sClear
method doesn't initialize all member variables.- Some classes have unused member variables. Consider removing them if they're not needed in the implementation.
Addressing these issues will improve the overall code quality and reduce the potential for bugs.
459-463
:⚠️ Potential issueInitialize the
cursor_
member variable in theClear()
methodThe
Clear()
method initializespattern_
,count_
, andis_wt_
, but notcursor_
. This may lead to unexpected behavior ifcursor_
is used after callingClear()
. Consider initializingcursor_
in theClear()
method as well.Apply this diff to initialize
cursor_
 virtual void Clear() {
   pattern_ = "*";
   count_ = 10;
   is_wt_ = false;
+  cursor_ = 0;
 }
Likely invalid or redundant comment.
include/pika_command.h (2)
140-158
: LGTM! New PKHash command constants added.The addition of these new PKHash command constants is consistent with the existing structure and naming conventions. This enhancement expands the command set for PKHash operations in the Pika database system.
313-313
: LGTM! New kCmdFlagsPKHash flag added.The addition of the
kCmdFlagsPKHash
flag to theCmdFlags
enum is appropriate and follows the existing pattern of command type flags.To ensure proper integration of the new PKHash commands and flag, please run the following verification script:
✅ Verification successful
Verification Successful:
kCmdFlagsPKHash
flag is appropriately integrated.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the usage of new PKHash commands and flag

# Check for PKHash command implementations
echo "Searching for PKHash command implementations:"
rg "class.*Cmd.*public.*Cmd" --type cpp | grep -E "PKH(Set|Expire|Expireat|Expiretime|TTL|Persist|Get|Exists|Del|Len|Strlen|Incrby|MSet|MGet|Keys|Vals|Getall|Scan)Cmd"

# Check for usage of kCmdFlagsPKHash flag
echo "Searching for usage of kCmdFlagsPKHash flag:"
rg "kCmdFlagsPKHash" --type cpp

# Check for registration of PKHash commands
echo "Searching for registration of PKHash commands:"
rg "cmd_table->insert" --type cpp | grep -E "PKH(Set|Expire|Expireat|Expiretime|TTL|Persist|Get|Exists|Del|Len|Strlen|Incrby|MSet|MGet|Keys|Vals|Getall|Scan)"
src/storage/src/redis.cc (4)
218-218
: Compaction for new PKHash column family addedThe addition of compaction for the new PKHash column family is consistent with the implementation of the new functionality.
493-493
: Corrected method call to GetMapPropertyThe method call to GetMapProperty has been corrected by removing the unnecessary
rocksdb::DB::
qualification. This change improves code correctness.
42-42
: Verify the impact of commenting outenv_
initializationThe initialization of
env_
has been commented out. Please ensure that this change doesn't negatively impact the functionality of the Redis class, especially in areas whereenv_
might have been used.To check for potential issues, run the following script:
✅ Verification successful
env_
Initialization Commented Out SuccessfullyCommenting out the initialization of
env_
has no impact on the current functionality, as there are no active uses ofenv_
in the codebase.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for uses of env_ in the codebase
rg '\benv_\b' --type cpp
102-110
: New column family for "pika hash" addedThe addition of the new column family for "pika hash" aligns with the PR objectives. This enhancement will support the new PKHash operations.
To ensure proper integration, please verify that:
- The new column family is correctly used in relevant PKHash operations.
- The compaction and filtering logic for this new column family is consistent with other similar data types.
Run the following script to check for usage of the new column family:
✅ Verification successful
Integration of "pika hash" column family verified
The new column family is properly integrated and utilized within
src/storage/src/redis.cc
. No further issues found.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for uses of pika_hash_data_cf in the codebase
rg 'pika_hash_data_cf' --type cpp
src/storage/src/redis.h (7)
17-19
: LGTM: New include statements addedThe new include statements for
env.h
,pika_codis_slot.h
,custom_comparator.h
, andredis_streams.h
are appropriate additions, likely supporting the new PK Hash functionality and other changes in this file.Also applies to: 24-24
416-417
: LGTM: Improved CreateIterator with range supportThe updates to the
CreateIterator
method, addinglower_bound
andupper_bound
parameters, enhance the functionality by allowing range-based iteration. This is a valuable improvement that provides more control over the iteration process.Also applies to: 421-422
473-474
: LGTM: ExpectedStale method updated for PKHashesThe
ExpectedStale
method has been correctly updated to include support for PKHashes, consistent with the new PK Hash functionality added throughout the file. The implementation for PKHashes matches that of regular Hashes, which is appropriate.
549-552
: LGTM: Improved scan methods with type supportThe updates to
GetScanStartPoint
andStoreScanNextPoint
methods, adding aDataType& type
parameter, enhance the functionality by allowing type-specific scan operations. This change is consistent with the overall improvements in the file for supporting different data types and provides more flexibility in scan operations.
Line range hint
1-564
: Summary of changes and action itemsThis update to
redis.h
introduces significant new functionality, primarily focused on PK Hash operations, and includes various improvements to existing methods. The changes are generally well-implemented and consistent. However, there are a few items that need attention:
- Resolve the merge conflict in the Expire methods (lines 116-130).
- Address potential thread-safety issues in the KeyStatistics struct (lines 77-78).
- Fix the use of rvalue reference parameters with default values in GetStreamMeta and XRange methods (lines 384-385, 395-396).
- Remove commented-out code for the env_ member variable (lines 537-538).
- Consider adding brief documentation for the new PK Hash methods.
Once these items are addressed, the changes will significantly enhance the capabilities of the Redis class while maintaining code quality and consistency.
160-161
: LGTM: Updated method signatures for improved functionalityThe changes to
BitOp
,SDiffstore
,SInterstore
, andSUnionstore
methods, adding thevalue_to_dest
parameter and adjusting return types, appear to be part of a broader refactoring. These changes look good and consistent across methods.To ensure these changes don't break existing code, please run:
#!/bin/bash
# Search for usages of the modified methods
rg "\b(BitOp|SDiffstore|SInterstore|SUnionstore)\b" --type cpp

Also applies to: 306-307, 309-310, 319-320
Line range hint
60-78
: LGTM: KeyStatistics struct updated, but consider thread safetyThe changes to the
KeyStatistics
struct, including the improvedAvgDuration
method and newAddModifyCount
andModifyCount
methods, look good. However, if these methods might be accessed concurrently, consider using atomic operations or mutex locks to ensure thread safety.To verify potential concurrent access, please run:
src/storage/src/redis_strings.cc (9)
139-141
: Use a formatting library for error messagesAs suggested in previous comments, consider using a formatting library like
fmt
orstd::format
(C++20) to improve the performance and maintainability of error messages throughout the codebase.Example using fmt:
return Status::InvalidArgument(fmt::format("WRONGTYPE, key: {}, expect type: {}, get type: {}", key.ToString(), DataTypeStrings[static_cast<int>(DataType::kStrings)], DataTypeStrings[static_cast<int>(GetMetaValueType(value))]));
82-84
: Improve error message constructionAs suggested in previous comments, consider creating a helper function to format error messages. This will improve maintainability and reduce redundancy throughout the codebase.
Example helper function:
Status GenerateWrongTypeError(const Slice& key, DataType expected_type, DataType actual_type) {
  return Status::InvalidArgument(fmt::format("WRONGTYPE, key: {}, expect type: {}, get type: {}", key.ToString(),
                                             DataTypeStrings[static_cast<int>(expected_type)],
                                             DataTypeStrings[static_cast<int>(actual_type)]));
}

Use this helper function in your methods to simplify error handling and improve consistency.
915-920
:⚠️ Potential issueResolve merge conflict in Setvx function signature
There's a merge conflict in the
Setvx
function signature. The parameter name has changed fromttl_millsec
tottl
, which might affect other parts of the code that use this function.Resolve the merge conflict by choosing the appropriate implementation:
Status Redis::Setvx(const Slice& key, const Slice& value, const Slice& new_value, int32_t* ret, int64_t ttl) {After resolving the conflict, run the following script to verify the usage of the
Setvx
function throughout the codebase:#!/bin/bash # Search for all occurrences of the Setvx function rg "Setvx\s*\(" --type cppEnsure that all calls to
Setvx
are updated with the correct parameter name.🧰 Tools
🪛 cppcheck
[error] 917-917: Unmatched '{'. Configuration
(syntaxError)
1692-1694
: 🛠️ Refactor suggestionVerify implementation of PKPatternMatchDelWithRemoveKeys
The
PKPatternMatchDelWithRemoveKeys
function has been modified to include additional parameters for pattern matching and limiting the number of keys to delete.Please review the implementation to ensure it correctly handles:
- Pattern matching for keys
- Respecting the
max_count
limit- Properly populating the
remove_keys
vectorRun the following script to check for similar pattern matching implementations in the codebase:
#!/bin/bash
# Search for pattern matching implementations
rg "StringMatch|PatternMatch" --type cpp

Consider extracting the pattern matching logic into a separate helper function to improve code reusability and maintainability. For example:
bool MatchesPattern(const std::string& key, const std::string& pattern) {
  return StringMatch(pattern.data(), pattern.size(), key.data(), key.size(), 0) != 0;
}

This helper function can then be used in the
PKPatternMatchDelWithRemoveKeys
function and other similar implementations.
1283-1289
:⚠️ Potential issueResolve merge conflict in PKSetexAt function signature and address TODO
There's a merge conflict in the
PKSetexAt
function signature. The parameter name has changed fromtime_stamp_millsec_
totimestamp
, which might affect other parts of the code that use this function.
- Resolve the merge conflict by choosing the appropriate implementation:
// TODO(wangshaoyi): timestamp uint64_t
Status Redis::PKSetexAt(const Slice& key, const Slice& value, int64_t timestamp) {
- Address the TODO comment by changing the timestamp parameter type to
uint64_t
:
Status Redis::PKSetexAt(const Slice& key, const Slice& value, uint64_t timestamp) {
After resolving the conflict and addressing the TODO, run the following script to verify the usage of the
PKSetexAt
function throughout the codebase:#!/bin/bash # Search for all occurrences of the PKSetexAt function rg "PKSetexAt\s*\(" --type cppEnsure that all calls to
PKSetexAt
are updated with the correct parameter name and type.Would you like me to create a GitHub issue to track the task of changing the timestamp parameter type to
uint64_t
throughout the codebase?
1576-1582
:⚠️ Potential issueResolve merge conflict in Expire function and verify PKHashes support
There's a merge conflict in the
Expire
function, and new cases forDataType::kPKHashes
have been added.Resolve the merge conflict by integrating the new PKHashes case:
case DataType::kHashes:
  return HashesExpire(key, ttl, std::move(meta_value));
case DataType::kPKHashes:
  return PKHashesExpire(key, ttl, std::move(meta_value));
case DataType::kLists:
  return ListsExpire(key, ttl, std::move(meta_value));

After resolving the conflict, run the following script to verify the implementation of PKHashes support throughout the codebase:
#!/bin/bash
# Search for all occurrences of PKHashes-related functions
rg "PKHashes" --type cpp

Ensure that PKHashes support is consistently implemented across all relevant functions (e.g.,
Expire
,Expireat
,Del
,TTL
, etc.).
Line range hint
1-1694
: Final verification of changes and consistencyThe file has undergone several changes, including:
- Modification of function signatures
- Addition of PKHashes support
- Changes in parameter names
To ensure consistency throughout the file and the entire codebase, please perform the following steps:
- Resolve all merge conflicts as suggested in previous comments.
- Run the following script to check for any remaining inconsistencies in parameter naming:
#!/bin/bash
# Search for inconsistent parameter names
rg "ttl_millsec|expired_timestamp_millsec|time_stamp_millsec_" --type cpp
Verify that all functions using or returning time-related values are consistent in their use of milliseconds or seconds.
Ensure that the PKHashes support is implemented consistently across all relevant functions.
Run your test suite to catch any potential issues introduced by these changes.
Consider creating a separate file for PKHashes-related operations if the codebase continues to grow, to maintain better separation of concerns and improve code organization.
65-70
:⚠️ Potential issueResolve merge conflict in Append function signature
There's a merge conflict in the
Append
function signature. The parameter name has changed fromexpired_timestamp_millsec
toexpired_timestamp_sec
, which might affect other parts of the code that use this function.Resolve the merge conflict by choosing the appropriate implementation:
Status Redis::Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_sec, std::string& out_new_value) {After resolving the conflict, run the following script to verify the usage of the
Append
function throughout the codebase:Ensure that all calls to
Append
are updated with the correct parameter name.✅ Verification successful
Merge conflict in Append function signature resolved correctly
The parameter name change from
expired_timestamp_millsec
toexpired_timestamp_sec
does not affect functionality as parameter names do not need to match in C++ function calls. All usages are consistent by parameter position.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for all occurrences of the Append function
rg "Append\s*\(" --type cpp
528-534
:⚠️ Potential issueResolve merge conflict in GetrangeWithValue function signature
There's a merge conflict in the
GetrangeWithValue
function signature. The parameter name has changed fromttl_millsec
tottl
, which might affect other parts of the code that use this function.Resolve the merge conflict by choosing the appropriate implementation:
Status Redis::GetrangeWithValue(const Slice& key, int64_t start_offset, int64_t end_offset, std::string* ret, std::string* value, int64_t* ttl) {After resolving the conflict, run the following script to verify the usage of the
GetrangeWithValue
function throughout the codebase:Ensure that all calls to
GetrangeWithValue
are updated with the correct parameter name.src/storage/src/pkhash_data_value_format.h (1)
31-31
: Verify thatDataType::kNones
is correct.Please ensure that
DataType::kNones
is the intended data type. IfkNones
is correct, no action is needed. Otherwise, consider updating it to the appropriate data type.src/pika_pkhash.cc (1)
660-680
:⚠️ Potential issueFix incorrect variable usage in
PKHScanCmd::Do()
In the error handling section, you're checking
s_.IsInvalidArgument()
, buts_
is not defined in this scope. The variables
holds the status returned byHScan
and should be used instead.Apply this diff to correct the variable:
- } else if (s_.IsInvalidArgument()) {
+ } else if (s.IsInvalidArgument()) {
src/storage/src/redis_pkhashes.cc (2)
217-220
:⚠️ Potential issueCorrect the error message regarding expiration timestamp
The condition checks if
timestamp <= 0
, but the error message says "must be >= 0", which is inconsistent with the condition. Since a timestamp of zero is invalid, the error message should state "must be > 0" to be accurate.Apply this diff to correct the error message:
 if (timestamp <= 0) {
   rets->assign(numfields, 2);
-  return Status::InvalidArgument("invalid expire time, must be >= 0");
+  return Status::InvalidArgument("invalid expire time, must be > 0");
 }
155-157
:⚠️ Potential issueCorrect the error message regarding TTL value
Similarly, the condition checks if
ttl <= 0
, but the error message states "must be >= 0", which includes zero. Since a TTL of zero is invalid, the error message should read "must be > 0" to be accurate.Apply this diff to correct the error message:
 if (ttl <= 0) {
-  return Status::InvalidArgument("invalid expire time, must be >= 0");
+  return Status::InvalidArgument("invalid expire time, must be > 0");
 }
src/storage/include/storage/storage.h (10)
28-28
: Include directive added forpkhash_data_value_format.h
The inclusion of
pkhash_data_value_format.h
is appropriate for the new PKHash functionalities.
Line range hint
98-103
: Duplicate: Correct the spelling ofinvaild_keys
The variable
invaild_keys
is misspelled. This issue has been previously flagged and remains unaddressed.
124-129
: Duplicate: Consider initializing members in the constructor forFieldValueTTL
Adding constructors to
FieldValueTTL
can prevent uninitialized members and improve code clarity. This issue has been previously flagged and remains unaddressed.
178-178
: Duplicate: Restrict access to theStorage()
default constructorTo prevent unintended use in production code, consider making the default constructor
private
orprotected
. This issue has been previously flagged and remains unaddressed.
278-279
: Duplicate: Ensure consistent use of pointers for output parameters inBitOp
methodThe output parameter
value_to_dest
should be a pointer for consistency. This issue has been previously flagged and remains unaddressed.
490-491
: Duplicate: Ensure consistent use of pointers for output parameters inSDiffstore
methodThe output parameter
value_to_dest
should be a pointer for consistency. This issue has been previously flagged and remains unaddressed.
574-575
: Duplicate: Ensure consistent use of pointers for output parameters inSUnionstore
methodThe output parameter
value_to_dest
should be a pointer for consistency. This issue has been previously flagged and remains unaddressed.
1008-1008
: Duplicate: Use a pointer for the output parameter inXInfo
Passing the output parameter
result
as a pointer maintains consistency in the codebase. This issue has been previously flagged and remains unaddressed.
1153-1154
: Duplicate: Correct the spelling ofEnableDymayticOptions
toEnableDynamicOptions
The method name contains a typographical error. This issue has been previously flagged and remains unaddressed.
416-427
: Duplicate: Remove redundantnumfields
parameter in PKH methodsThe
numfields
parameter is redundant since the number of fields can be obtained fromfields.size()
. This issue has been previously flagged and remains unaddressed.src/storage/tests/pkhashes_test.cc (2)
1053-1058
: Handle not found case correctly inPKHVals
In the
PKHVals
test, when the hash table does not exist, you correctly check thats.IsNotFound()
and thevalues
vector is empty.
711-723
: Verify handling of expired keys inPKHMGetTest
In the
PKHMGetTest
, after expiring the keyGP4_HMGET_KEY
, you test fetching fields and expect them to be not found.The test logic is correct. Ensure that the expiration mechanism works as intended and that expired keys do not return values.
src/storage/src/storage.cc (8)
490-494
: Duplicate comment: Useint64_t
for TTL parameter inPKHExpire
The previous review comment regarding changing the
ttl
parameter fromint32_t
toint64_t
inPKHExpire
is still applicable.
530-533
: Duplicate comment: Useint64_t
for TTL parameter inPKHSetex
The prior suggestion to change the
ttl
parameter fromint32_t
toint64_t
inPKHSetex
remains valid.
555-558
: Duplicate comment: Useint64_t
for TTL parameter inPKHIncrby
The earlier recommendation to modify the
ttl
parameter fromint32_t
toint64_t
inPKHIncrby
still applies.
1135-1136
: Duplicate comment: Specify parameter type in lambda function for C++11 compatibilityThe previous comment about replacing
auto
with an explicit parameter type in the lambda function to ensure C++11 compatibility is still relevant.
1975-1975
: Duplicate comment: Compatibility ofstd::plus<>
instd::transform
The concern regarding the use of
std::plus<>{}
requiring C++14 or newer is still applicable. The previous suggestion to specify the type or use a lambda function remains valid.
1455-1455
: Duplicate comment: Verify loop condition inPKScanRange
for off-by-one errorThe potential off-by-one error in the loop condition of
PKScanRange
has been previously mentioned and is still pertinent.
1501-1501
: Duplicate comment: Verify loop condition inPKRScanRange
for off-by-one errorThe earlier note about a possible off-by-one error in the
PKRScanRange
function's loop condition continues to be relevant.
821-823
: Duplicate comment: Avoid moving elements intostd::unordered_set
The prior issue regarding the use of
std::move_iterator
withstd::copy
into anunordered_set
is still valid.src/pika_command.cc (3)
486-488
: LGTM: Addition ofPKHSet
CommandThe
PKHSetCmd
is correctly added with appropriate flags and arity. The implementation aligns with the intended functionality for setting hash values with expiration.
512-514
: LGTM: Addition ofPKHGet
CommandThe
PKHGetCmd
is properly implemented with the correct flags for a read operation. The command should function as expected for retrieving hash values.
516-518
: Verify the Parameter Count forPKHSetexCmd
The
PKHSetexCmd
is added, but ensure that the arity and parameter parsing handle all required arguments correctly, especially with variable arguments.Run the following script to check the command usage:
✅ Verification successful
Parameter Count for
PKHSetexCmd
VerifiedThe
PKHSetexCmd
is instantiated with an arity of-4
, and the parameter parsing in its implementation correctly handles the required arguments, including variable arguments.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the arity and usage of PKHSetex command.
# Test: Search for PKHSetex command definitions.
rg 'PKHSetexCmd' -A 5

Length of output: 2332
private:
  std::string key_;
  bool is_force_;
Initialize the is_force_ member variable
The is_force_ member variable is declared but not initialized. This may lead to undefined behavior if it's used before being assigned a value. Consider initializing it in the constructor or implementing a Clear() method to reset the internal state.
private:
  std::string key_;
  bool is_wt_;
Initialize the is_wt_ member variable
The is_wt_ member variable is declared but not initialized. This may lead to undefined behavior if it's used before being assigned a value. Consider initializing it in the constructor or implementing a Clear() method to reset the internal state.
src/storage/src/redis.h
Outdated
<<<<<<< HEAD
virtual Status StringsExpire(const Slice& key, int64_t ttl_millsec, std::string&& prefetch_meta = {});
virtual Status HashesExpire(const Slice& key, int64_t ttl_millsec, std::string&& prefetch_meta = {});
virtual Status ListsExpire(const Slice& key, int64_t ttl_millsec, std::string&& prefetch_meta = {});
virtual Status ZsetsExpire(const Slice& key, int64_t ttl_millsec, std::string&& prefetch_meta = {});
virtual Status SetsExpire(const Slice& key, int64_t ttl_millsec, std::string&& prefetch_meta = {});
=======
virtual Status StringsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status HashesExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status PKHashesExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status ListsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status ZsetsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status SetsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
>>>>>>> 9d208a64 (feat: add pkhash tests and hash basic cmd)
Resolve merge conflict in Expire methods
There's a merge conflict in the Expire methods that needs to be resolved. When resolving:
- Use the updated parameter name ttl instead of ttl_millsec for consistency.
- Include the new PKHashesExpire method.
- Ensure all methods use the std::string&& prefetch_meta = {} parameter.
Here's a suggested resolution:
virtual Status StringsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status HashesExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status PKHashesExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status ListsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status ZsetsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
virtual Status SetsExpire(const Slice& key, int64_t ttl, std::string&& prefetch_meta = {});
Status XRange(const Slice& key, const StreamScanArgs& args, std::vector<IdMessage>& id_messages,
              std::string&& prefetch_meta = {});
Potential issue with rvalue reference parameter
The GetStreamMeta and XRange methods have been updated to include a std::string&& prefetch_meta = {} parameter. While this is consistent with other methods, an rvalue reference parameter with a default argument is unusual in C++: it compiles, but callers cannot pass an lvalue without wrapping it in std::move, which invites accidental use-after-move.
Consider changing these parameters to const std::string& prefetch_meta = {} or std::string prefetch_meta = {} instead.
Apply this diff to fix the issue:
- Status XRange(const Slice& key, const StreamScanArgs& args, std::vector<IdMessage>& id_messages,
- std::string&& prefetch_meta = {});
+ Status XRange(const Slice& key, const StreamScanArgs& args, std::vector<IdMessage>& id_messages,
+ const std::string& prefetch_meta = {});
- Status GetStreamMeta(StreamMetaValue& tream_meta, const rocksdb::Slice& key, rocksdb::ReadOptions& read_options,
- std::string&& prefetch_meta = {});
+ Status GetStreamMeta(StreamMetaValue& tream_meta, const rocksdb::Slice& key, rocksdb::ReadOptions& read_options,
+ const std::string& prefetch_meta = {});
Also applies to: 395-396
int32_t ret = 0;
s_ = db_->storage()->PKHSet(key_, field_, value_, &ret);
if (s_.ok()) {
  res_.AppendContent(":" + std::to_string(ret));
  AddSlotKey("h", key_, db_);
} else if (s_.IsInvalidArgument()) {
  res_.SetRes(CmdRes::kMultiKey);
} else {
  res_.SetRes(CmdRes::kErrOther, s_.ToString());
}
}
Implement expiration functionality in PKHSetexCmd
The PKHSetexCmd
class currently does not implement the expiration functionality and is identical to PKHSetCmd
, lacking the handling of expiration time for the field-value pair.
Please update PKHSetexCmd
to correctly parse the expiration time in DoInitial()
and utilize a storage method that sets both the field-value pair and its expiration in Do()
. For example:
void PKHSetexCmd::DoInitial() {
if (!CheckArg(argv_.size())) {
res_.SetRes(CmdRes::kWrongNum, kCmdNamePKHSetex);
return;
}
key_ = argv_[1];
field_ = argv_[2];
if (pstd::string2int(argv_[3].data(), argv_[3].size(), &ttl_) != 1) {
res_.SetRes(CmdRes::kInvalidInt);
return;
}
value_ = argv_[4];
}
void PKHSetexCmd::Do() {
int32_t ret = 0;
s_ = db_->storage()->PKHSetex(key_, field_, value_, ttl_, &ret);
if (s_.ok()) {
res_.AppendContent(":" + std::to_string(ret));
AddSlotKey("h", key_, db_);
} else if (s_.IsInvalidArgument()) {
res_.SetRes(CmdRes::kMultiKey);
} else {
res_.SetRes(CmdRes::kErrOther, s_.ToString());
}
}
Status PKHExpire(const Slice& key, int32_t ttl, int32_t numfields, const std::vector<std::string>& fields,
                 std::vector<int32_t>* rets);
🛠️ Refactor suggestion
Use int64_t for ttl parameters to support larger expiration times
Consider changing the ttl parameter from int32_t to int64_t in PKHExpire and PKHSetex to support larger expiration times and maintain consistency with other TTL parameters.
Apply this diff:
// In PKHExpire
-Status PKHExpire(const Slice& key, int32_t ttl, int32_t numfields, const std::vector<std::string>& fields,
+Status PKHExpire(const Slice& key, int64_t ttl, int32_t numfields, const std::vector<std::string>& fields,
std::vector<int32_t>* rets);
// In PKHSetex
-Status PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl, int32_t* ret);
+Status PKHSetex(const Slice& key, const Slice& field, const Slice& value, int64_t ttl, int32_t* ret);
Also applies to: 432-433
ASSERT_TRUE(s.IsCorruption());
ASSERT_EQ(value, 0);
Use IsInvalidArgument
instead of IsCorruption
Similarly, update the status check when attempting to increment a field with invalid numeric data.
Apply this change:
- ASSERT_TRUE(s.IsCorruption());
+ ASSERT_TRUE(s.IsInvalidArgument());
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
ASSERT_TRUE(s.IsInvalidArgument());
ASSERT_EQ(value, 0);
ASSERT_TRUE(s.IsCorruption());
ASSERT_EQ(value, 0);
Use IsInvalidArgument
instead of IsCorruption
In the PKHIncrby
test, when incrementing a field with invalid numeric data, you check s.IsCorruption()
. It's more appropriate to check for s.IsInvalidArgument()
in cases of invalid input.
Apply this change:
- ASSERT_TRUE(s.IsCorruption());
+ ASSERT_TRUE(s.IsInvalidArgument());
📝 Committable suggestion
ASSERT_TRUE(s.IsInvalidArgument());
ASSERT_EQ(value, 0);
s = db.PKHSet("HINCRBY_KEY", "HINCRBY_NUM_FIELD", "10", &ret);
ASSERT_TRUE(s.ok());
s = db.PKHIncrby("HINCRBY_KEY", "HINCRBY_NUM_FIELD", 9223372036854775807, &value);
ASSERT_TRUE(s.IsInvalidArgument());
Check for integer overflow error
When incrementing a field by a value that would cause an integer overflow, ensure you're checking for the correct error status.
If the storage layer provides a specific status for overflow, such as IsOverflow()
, consider using it. Otherwise, confirm that IsInvalidArgument()
is appropriate.
- ASSERT_TRUE(s.IsInvalidArgument());
+ ASSERT_TRUE(s.IsInvalidArgument() || s.IsOverflow());
Ensure that the storage layer returns an appropriate status for integer overflow.
Committable suggestion was skipped due to low confidence.
Status Storage::EnableDymayticOptions(const OptionType& option_type, const std::string& db_type,
                                      const std::unordered_map<std::string, std::string>& options) {
Typo in function name EnableDymayticOptions
The function name EnableDymayticOptions
appears to have a typographical error. It should be EnableDynamicOptions
to correctly reflect the intended meaning.
Apply this diff to correct the function name:
-Status Storage::EnableDymayticOptions(const OptionType& option_type, const std::string& db_type,
+Status Storage::EnableDynamicOptions(const OptionType& option_type, const std::string& db_type,
const std::unordered_map<std::string, std::string>& options) {
Also, ensure all references to this function are updated accordingly.
📝 Committable suggestion
Status Storage::EnableDynamicOptions(const OptionType& option_type, const std::string& db_type,
                                     const std::unordered_map<std::string, std::string>& options) {
801f122 to 3f146b6
Actionable comments posted: 30
🧹 Outside diff range and nitpick comments (12)
include/pika_pkhash.h (3)
14-38
: Consider implementing Clear method and ensure DoInitial is implemented.
The Clear method is currently empty. Consider implementing it to reset the object's state if needed. Also, ensure that the DoInitial method is implemented in the corresponding .cc file.
143-164
: Consider adding a Clear method for consistency.
Unlike the previous classes, PKHGetCmd doesn't have a Clear method. Consider adding one to maintain consistency across all command classes, even if it's empty.
1-469
: Overall good structure with minor consistency issues
The file introduces 19 new command classes for PKHash operations, maintaining a consistent structure across most classes. However, there are a few areas for improvement:
- Consider adding a Clear method to all classes for consistency.
- Ensure all member variables are properly initialized, either in the constructor or in the Clear method.
- Implement the DoInitial methods in the corresponding .cc file.
These changes will improve the overall consistency and robustness of the code.
src/storage/src/redis.cc (1)
296-495
: Comprehensive update to GetRocksDBInfo functionThe GetRocksDBInfo function has been significantly refactored to provide more detailed information about the RocksDB instance. This includes:
- New lambda functions for writing properties and ticker counts.
- Additional metrics for memtables, compaction, background errors, keys, SST files, block cache, and blob files.
- Detailed RocksDB ticker information.
These changes greatly improve the observability of the RocksDB instance. However, there are a few points to consider:
- The function has become quite long. Consider breaking it down into smaller, more manageable functions.
- Some of the new metrics might have performance implications when called frequently. Ensure that this function is not called in any hot paths.
Consider refactoring this function into smaller, more focused functions for better maintainability. For example:
void Redis::GetRocksDBInfo(std::string& info, const char* prefix) {
  std::ostringstream string_stream;
  string_stream << "#" << prefix << "RocksDB" << "\r\n";
  WriteAggregatedProperties(string_stream, prefix);
  WriteColumnFamilyProperties(string_stream, prefix);
  WriteTickerCounts(string_stream, prefix);
  WriteColumnFamilyStats(string_stream, prefix);
  info.append(string_stream.str());
}
void Redis::WriteAggregatedProperties(std::ostringstream& stream, const char* prefix) {
  // Write aggregated properties here
}
void Redis::WriteColumnFamilyProperties(std::ostringstream& stream, const char* prefix) {
  // Write column family properties here
}
void Redis::WriteTickerCounts(std::ostringstream& stream, const char* prefix) {
  // Write ticker counts here
}
void Redis::WriteColumnFamilyStats(std::ostringstream& stream, const char* prefix) {
  // Write column family stats here
}
This refactoring would make the code more modular and easier to maintain.
src/storage/include/storage/storage.h (7)
124-129
: LGTM: New FieldValueTTL struct addedThe
FieldValueTTL
struct is a good addition for handling field-value pairs with TTL. However, as previously suggested, consider adding a constructor to initialize the members.Consider adding a constructor to initialize the members:
struct FieldValueTTL {
  std::string field;
  std::string value;
  int32_t ttl;
  FieldValueTTL() : field(""), value(""), ttl(0) {}
  FieldValueTTL(const std::string& f, const std::string& v, int32_t t)
      : field(f), value(v), ttl(t) {}
  bool operator==(const FieldValueTTL& fv) const {
    return (fv.field == field && fv.value == value && fv.ttl == ttl);
  }
};
178-178
: Comment added for default constructor, but access should be restrictedThe comment clarifying that the default constructor is for unit tests only is helpful. However, as previously suggested, consider restricting access to this constructor to prevent unintended use in production code.
Consider making the default constructor private or protected:
class Storage {
 private:
  Storage();  // for unit test only
 public:
  Storage(int db_instance_num, int slot_num, bool is_classic_mode);
  // ... rest of the class
};
1043-1044
: LGTM: New PKPatternMatchDelWithRemoveKeys method addedThe new method for pattern matching and key removal is a good addition. Its signature is consistent with other methods in the class.
Consider adding a brief comment explaining the purpose of this method, for example:
// Removes keys matching the given pattern, up to max_count.
// Returns the number of keys removed and populates remove_keys with the removed keys.
Status PKPatternMatchDelWithRemoveKeys(const std::string& pattern, int64_t* ret,
                                       std::vector<std::string>* remove_keys, const int64_t& max_count);
1155-1156
: LGTM: New EnableAutoCompaction method addedThe new EnableAutoCompaction method is a good addition, likely for managing auto-compaction settings. Its signature is consistent with other methods in the class.
Consider adding a brief comment explaining the purpose of this method, for example:
// Enables auto-compaction for the specified option type and database type with the given options.
Status EnableAutoCompaction(const OptionType& option_type, const std::string& db_type,
                            const std::unordered_map<std::string, std::string>& options);
414-457
: LGTM: New Pika Hash (PKH) methods addedThe addition of these new PKH methods significantly expands the functionality for Pika Hash operations. The methods are well-structured and consistent with the existing code style.
Consider using
int64_t
instead ofint32_t
for TTL-related parameters to maintain consistency with other TTL methods in the class and to support larger TTL values. For example:Status PKHExpire(const Slice& key, int64_t ttl, int32_t numfields, const std::vector<std::string>& fields, std::vector<int32_t>* rets); Status PKHSetex(const Slice& key, const Slice& field, const Slice& value, int64_t ttl, int32_t* ret);Please review and update other PKH methods that use TTL values similarly.
490-491
: Update set operation methods to use pointers for output parametersThe addition of the
value_to_dest
parameter to SDiffstore, SInterstore, and SUnionstore is good for providing more information about the operation results. However, for consistency with the codebase's conventions, consider using pointers for output parameters instead of references.Update the method signatures to use pointers for output parameters:
Status SDiffstore(const Slice& destination, const std::vector<std::string>& keys, std::vector<std::string>* value_to_dest, int32_t* ret); Status SInterstore(const Slice& destination, const std::vector<std::string>& keys, std::vector<std::string>* value_to_dest, int32_t* ret); Status SUnionstore(const Slice& destination, const std::vector<std::string>& keys, std::vector<std::string>* value_to_dest, int32_t* ret);Also applies to: 514-515, 574-575
278-279
: Update BitOp method to use pointer for output parameterThe addition of the
value_to_dest
parameter to BitOp is good for providing more information about the operation results. However, for consistency with the codebase's conventions, consider using a pointer for this output parameter instead of a reference.Update the method signature to use a pointer for the output parameter:
Status BitOp(BitOpType op, const std::string& dest_key, const std::vector<std::string>& src_keys, std::string* value_to_dest, int64_t* ret);src/storage/src/redis.h (1)
528-529
: Address the TODO comment.The TODO comment indicates that separating the environment for each RocksDB instance is a pending task. Consider creating a GitHub issue to track this enhancement if it's not already planned.
Do you want me to open a GitHub issue to track this task?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (19)
- include/acl.h (1 hunks)
- include/pika_command.h (5 hunks)
- include/pika_pkhash.h (1 hunks)
- src/pika_client_conn.cc (1 hunks)
- src/pika_command.cc (10 hunks)
- src/pika_pkhash.cc (1 hunks)
- src/storage/include/storage/storage.h (13 hunks)
- src/storage/include/storage/storage_define.h (2 hunks)
- src/storage/src/base_filter.h (6 hunks)
- src/storage/src/base_value_format.h (5 hunks)
- src/storage/src/pkhash_data_value_format.h (1 hunks)
- src/storage/src/redis.cc (12 hunks)
- src/storage/src/redis.h (14 hunks)
- src/storage/src/redis_hashes.cc (29 hunks)
- src/storage/src/redis_pkhashes.cc (1 hunks)
- src/storage/src/redis_strings.cc (35 hunks)
- src/storage/src/storage.cc (28 hunks)
- src/storage/tests/hashes_test.cc (3 hunks)
- src/storage/tests/pkhashes_test.cc (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
- include/acl.h
- src/pika_client_conn.cc
- src/storage/include/storage/storage_define.h
- src/storage/src/base_filter.h
- src/storage/src/base_value_format.h
- src/storage/src/pkhash_data_value_format.h
- src/storage/src/redis_hashes.cc
- src/storage/tests/hashes_test.cc
- src/storage/tests/pkhashes_test.cc
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc
[performance] 565-565: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 774-774: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 857-857: Searching before insertion is not necessary.
(stlFindInsert)
🔇 Additional comments (123)
include/pika_command.h (2)
141-159
: LGTM: New PKHash command constants added.
The new PKHash command constants have been added correctly, following the existing naming conventions. These additions align with the PR objectives of introducing new PKHash commands to the Pika database.
Line range hint
141-314
: Overall assessment: Changes are well-integrated and consistent.
The additions for PKHash commands and the new flag are well-structured and consistent with the existing codebase. These changes successfully implement the new PKHash functionality as outlined in the PR objectives.
To ensure full integration:
- Verify that the new PKHash commands are properly implemented in their respective files.
- Check that the kCmdFlagsPKHash flag is correctly used in command implementations.
- Update any relevant documentation to reflect these new PKHash capabilities.
src/storage/src/redis.cc (4)
218-218
: LGTM: New column family included in CompactRangeThe new 'pika_hash_data_cf' column family has been correctly included in the CompactRange function.
498-498
: LGTM: New setter for write WAL options
The new SetWriteWalOptions function provides a way to disable Write-Ahead Logging, which can be useful for certain scenarios where durability can be traded for performance.
167-168
: Update column family indices after adding new column familyWith the addition of 'pika_hash_data_cf' to the column_families vector, the indices of existing column families in the handles_ array may have shifted. Ensure that all references to handles_ use the correct indices corresponding to the updated order of column families.
#!/bin/bash
# Search for references to handles_ array
rg '\bhandles_\[' src
94-95
: Verify the compaction filter factory for the new PK Hash column familyThe new PK Hash column family is using the same compaction filter factory as the regular hash column family. This might be intentional, but it's worth verifying if a separate compaction filter is needed for PK Hash operations.
Also applies to: 102-110
src/storage/include/storage/storage.h (3)
26-28
: LGTM: New include for PKHash functionality
The addition of the pkhash_data_value_format.h include is appropriate for the new PKH methods being introduced.
164-164
: LGTM: New Operation enum added
The Operation enum is a good addition, likely for use with background tasks. The naming is clear and consistent with the existing code style.
Line range hint
1-1173
: Overall good additions with some minor improvements neededThe changes to this file, particularly the addition of the PKH (Pika Hash) methods, significantly expand the functionality of the Storage class. The new methods are well-structured and consistent with the existing code style.
However, there are a few areas that could be improved:
- Consistency in parameter passing: Consider using pointers instead of references for output parameters throughout the class to maintain consistency.
- TTL parameter types: Review the use of int32_t vs int64_t for TTL-related parameters to ensure consistency and support for larger TTL values.
- Method naming and comments: Correct the typo in EnableDymayticOptions and consider adding brief comments to new methods to explain their purpose.
- Constructor access: Consider restricting access to the default constructor as previously suggested.
Addressing these points will improve the overall consistency and clarity of the code.
src/storage/src/storage.cc (22)
7-7
: Approved: Include statement reorganizationThe reordering of include statements improves code organization without introducing new dependencies or affecting functionality.
Also applies to: 11-12, 17-17, 20-21
Line range hint
199-217
: Approved: Improved MGet implementationThe change to a range-based for loop enhances readability and reduces the risk of index-related errors. This is a good use of modern C++ features.
Line range hint
218-236
: Approved: Consistent improvement in MGetWithTTLThis change mirrors the improvement made to the MGet method, using a range-based for loop. This consistency in coding style across similar methods is commendable.
304-308
: Approved: Enhanced input validation in BitOpThe addition of a check for the NOT operation with multiple source keys improves the robustness of the BitOp method. This prevents invalid operations and potential errors.
488-595
: Approved: Implementation of new Pika Hash commandsThe addition of these new Pika Hash commands (PKHExpire, PKHExpireat, PKHExpiretime, PKHPersist, PKHTTL, PKHGet, PKHSet, PKHSetex, PKHExists, PKHDel, PKHLen, PKHStrlen, PKHIncrby, PKHMSet, PKHMSetex, PKHMGet, PKHKeys, PKHVals, PKHGetall, and PKHScan) aligns well with the PR objectives. The implementation consistently delegates to the appropriate database instance, maintaining the existing architecture.
646-647
: Approved: Enhanced functionality in set operationsThe addition of the
value_to_dest
parameter to SDiffstore, SInterstore, and SUnionstore methods improves their functionality. This change allows the caller to receive the resulting set values directly, potentially reducing the need for additional queries and improving performance.Also applies to: 714-715, 829-830
1021-1021
: Approved: Improved code formattingThe line breaks added to the ZRangebyscore and ZRevrangebyscore methods improve code readability without affecting functionality. This change helps prevent excessively long lines.
Also applies to: 1070-1071
1210-1211
: Approved: Enhanced readability in method signaturesThe parameter lists for ZRangebylex, ZLexcount, and ZRemrangebylex methods have been split across multiple lines, improving code readability without affecting functionality. This change helps prevent excessively long lines and makes the method signatures easier to read.
Also applies to: 1217-1218, 1223-1224
1267-1267
: Approved: Improved readability in XRead method signatureThe addition of a line break in the XRead method signature improves code readability without affecting functionality. This change helps prevent an excessively long line.
1356-1357
: Approved: Simplified key_type determinationThe use of a ternary operator to determine the key_type based on the dtype simplifies the code without changing its functionality. This change makes the code more concise and easier to read.
1384-1384
: Approved: Simplified CreateIterator callThe removal of unnecessary parameters from the CreateIterator call simplifies the code without changing its functionality.
1442-1443
: Approved: Consistent simplification of CreateIterator callsThe removal of unnecessary parameters from the CreateIterator calls in PKScanRange and PKRScanRange methods simplifies the code without changing functionality. This change is consistent with the earlier modification to the Scan method, maintaining a uniform approach across the codebase.
Also applies to: 1490-1491
Line range hint
1517-1526
: Approved: Enhanced control in PKPatternMatchDelWithRemoveKeysThe addition of the max_count parameter and its usage in PKPatternMatchDelWithRemoveKeys provides better control over the number of keys deleted. The early return when the maximum count is reached improves efficiency by preventing unnecessary processing.
1544-1544
: Approved: Consistent simplification of CreateIterator call in ScanxThe removal of unnecessary parameters from the CreateIterator call in the Scanx method simplifies the code without changing functionality. This change maintains consistency with similar modifications made to other methods in the file.
1991-1991
: Approved: Improved logging in GetDBByIndexThe change from direct stderr output to LOG(WARNING) in the GetDBByIndex method improves the consistency of logging throughout the codebase. This approach allows for better log management and filtering.
1998-1998
: Approved: Enhanced flexibility with db_type parameterThe addition of the db_type parameter to SetOptions, EnableDymayticOptions, and EnableAutoCompaction methods provides more flexibility in handling different database types. This change allows for more specific customization based on the database type.
Also applies to: 2016-2017, 2028-2029
2036-2036
: Approved: Consistent handling of Pika Hash in EnableAutoCompactionThe addition of pk_hash_cfhds (Pika Hash column family handles) in the EnableAutoCompaction method ensures that the new Pika Hash data structures are properly managed during auto-compaction. This change is consistent with the Pika Hash functionality added earlier in the file.
Also applies to: 2042-2042
Line range hint
1-2060
: Summary of changes in storage.ccThis review covered significant additions and modifications to the Storage class, primarily focusing on new Pika Hash commands and improvements to existing methods. Key points:
- New Pika Hash commands (PKHExpire, PKHExpireat, etc.) have been successfully implemented.
- Existing methods like MGet, MGetWithTTL, and various set operations have been improved for better readability and efficiency.
- Several CreateIterator calls have been simplified consistently across different methods.
- Minor improvements in code formatting and logging have been made.
A few issues were identified:
- Potential C++11 compatibility issues with std::plus<> usage.
- A typo in the method name "EnableDymayticOptions".
- Inconsistent parameter passing style in some methods.
Overall, the changes align well with the PR objectives and generally improve the code quality. Addressing the identified issues will further enhance the robustness of the implementation.
2016-2017
:⚠️ Potential issueFix typo in method name
There's a typo in the method name "EnableDymayticOptions". It should be "EnableDynamicOptions".
Apply this diff to correct the method name:
-Status Storage::EnableDymayticOptions(const OptionType& option_type, const std::string& db_type, +Status Storage::EnableDynamicOptions(const OptionType& option_type, const std::string& db_type, const std::unordered_map<std::string, std::string>& options) {Likely invalid or redundant comment.
821-823
:⚠️ Potential issueFix potential undefined behavior in SUnion
Using
std::move_iterator
withstd::copy
to insert elements into anunordered_set
can lead to undefined behavior. Moved-from elements may not remain valid for hashing and equality checks required by the set.Apply this diff to fix the issue:
- std::copy(std::move_iterator<Iter>(vec.begin()), std::move_iterator<Iter>(vec.end()), + std::copy(vec.begin(), vec.end(), std::insert_iterator<Uset>(member_set, member_set.begin()));Likely invalid or redundant comment.
530-533
: 🛠️ Refactor suggestionConsider using int64_t for TTL parameter in PKHSetex
In the PKHSetex method, the TTL parameter is currently an int32_t. To support larger TTL values and maintain consistency with other time-related parameters, consider changing it to int64_t.
Apply this diff:
-Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl, int32_t* ret) { +Status Storage::PKHSetex(const Slice& key, const Slice& field, const Slice& value, int64_t ttl, int32_t* ret) {Likely invalid or redundant comment.
1398-1398
: Verify the impact of changed comparison logicThe condition in the while loop has been modified. Please verify that this change doesn't alter the intended behavior of the loop.
To verify the impact of this change, you can run the following script:
src/pika_pkhash.cc (2)
306-317
: 🛠️ Refactor suggestion: Simplify error handling in PKHExistsCmd::Do()
The error handling can be simplified by combining the ok and IsNotFound cases.
Apply this diff:
 void PKHExistsCmd::Do() {
   s_ = db_->storage()->HExists(key_, field_);
-  if (s_.ok()) {
-    res_.AppendContent(":1");
-  } else if (s_.IsInvalidArgument()) {
-    res_.SetRes(CmdRes::kMultiKey);
-  } else if (s_.IsNotFound()) {
-    res_.AppendContent(":0");
+  if (s_.ok() || s_.IsNotFound()) {
+    res_.AppendContent(s_.ok() ? ":1" : ":0");
   } else {
     res_.SetRes(CmdRes::kErrOther, s_.ToString());
   }
 }

Likely invalid or redundant comment.
208-221
: 🛠️ Refactor suggestion: Simplify error handling in PKHGetCmd::Do()
The error handling can be simplified by combining the IsInvalidArgument and IsNotFound cases.
Apply this diff:
 void PKHGetCmd::Do() {
   std::string value;
   s_ = db_->storage()->PKHGet(key_, field_, &value);
   if (s_.ok()) {
     res_.AppendStringLenUint64(value.size());
     res_.AppendContent(value);
-  } else if (s_.IsInvalidArgument()) {
-    res_.SetRes(CmdRes::kMultiKey);
-  } else if (s_.IsNotFound()) {
+  } else if (s_.IsNotFound() || s_.IsInvalidArgument()) {
     res_.AppendContent("$-1");
   } else {
     res_.SetRes(CmdRes::kErrOther, s_.ToString());
   }
 }

Likely invalid or redundant comment.
src/storage/src/redis.h (23)
17-19
: LGTM! The added header includes appear relevant and necessary for the new functionality.
24-24
: LGTM! The redis_streams.h header is required for the new stream-related methods.
77-78
: The previous review comment about ensuring thread safety for modify_count operations is still applicable. Please address it if not already done.
86-87
: LGTM! The constructor changes are minor formatting improvements and do not alter functionality.
94-94
: LGTM! Adding a const qualifier to GetIndex() is a good practice as it indicates the method does not modify the object's state.
121-121
: LGTM! The new PKHashesExpire method declaration is consistent with the existing Expire methods for other data types.
151-152
: LGTM! The formatting change in the BitOp method signature improves readability without altering functionality.
244-247
: The previous review comment about validating bounds and correctness in GetPKHashCFHandles is still applicable. Please address it if not already done.
249-250
: LGTM! The minor formatting change in GetRocksDBInfo is acceptable.
297-298
: LGTM! The formatting change in the SDiffstore method signature is acceptable and improves readability.
300-301
: LGTM! The formatting change in the SInterstore method signature is acceptable and improves readability.
310-311
: LGTM! The formatting change in the SUnionstore method signature is acceptable and improves readability.
375-376
: The previous review comment about the potential issue with the rvalue reference parameter in XRange is still applicable. Please address it if not already done.
386-387
: The previous review comment about the potential issue with the rvalue reference parameter in GetStreamMeta is still applicable. Please address it if not already done.
407-408
: LGTM! The formatting change in the CreateIterator method signature is acceptable and improves readability.
412-413
: LGTM! The formatting change in the overloaded CreateIterator method signature is acceptable and improves readability.
446-446
: LGTM! The minor formatting change in GetMetaValueType is acceptable.
451-451
: LGTM! The minor formatting change in ExpectedMetaValue is acceptable.
459-459
: LGTM! The minor formatting change in ExpectedStale is acceptable.
464-465
: LGTM! The addition of the kPKHashes case in the ExpectedStale method is consistent with the new PK Hash functionality.
487-487
: LGTM! Making the GenerateStreamID method private is a good practice to encapsulate internal functionality.
540-543
: LGTM! The formatting changes in the GetScanStartPoint and StoreScanNextPoint method signatures are acceptable and improve readability.
252-291
: Verify the new PK Hash methods in the codebase.
The addition of the new PK Hash methods (PKHExpire, PKHExpireat, PKHExpiretime, PKHTTL, PKHPersist, PKHGet, PKHSet, PKHSetex, PKHExists, PKHDel, PKHLen, PKHStrlen, PKHIncrby, PKHMSet, PKHMSetex, PKHMGet, PKHKeys, PKHVals, PKHGetall, PKHScan) extends the functionality of the Redis class.
Please ensure that:
- These methods are implemented correctly in the corresponding source files.
- The methods are used appropriately throughout the codebase.
- Relevant tests are added to verify the functionality of these methods.
Run the following script to check the usage of these new methods in the codebase:
✅ Verification successful
PK Hash Methods Verified Successfully
All new PK Hash methods are properly implemented, utilized throughout the codebase, and adequately tested.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for usage of new PK Hash methods

# List of new PK Hash method names
methods=(
  "PKHExpire" "PKHExpireat" "PKHExpiretime" "PKHTTL" "PKHPersist"
  "PKHGet" "PKHSet" "PKHSetex" "PKHExists" "PKHDel"
  "PKHLen" "PKHStrlen" "PKHIncrby" "PKHMSet" "PKHMSetex"
  "PKHMGet" "PKHKeys" "PKHVals" "PKHGetall" "PKHScan"
)

# Search for usage of each method
for method in "${methods[@]}"; do
  echo "Checking usage of $method:"
  rg "\b${method}\s*\(" --type cpp
  echo
done

Length of output: 34819
src/storage/src/redis_strings.cc (1)
8-8
: 🛠️ Refactor suggestion: Remove unnecessary inclusion of <iostream>
The header <iostream> is not used in this file and can be safely removed to reduce compilation time and prevent potential namespace pollution.
Apply this diff to remove the unnecessary include:
-#include <iostream>
Likely invalid or redundant comment.
src/pika_command.cc (66)
19-19
: LGTM! The new include statement for pika_pkhash.h is correctly added.
159-161
: LGTM! The clearcacheptr command is correctly initialized with the appropriate flags.
162-164
: LGTM! The lastsaveptr command is correctly initialized with the appropriate flags.
247-249
: LGTM! The setptr command is correctly initialized with the appropriate flags.
251-254
: LGTM! The getptr command is correctly initialized with the appropriate flags.
256-259
: LGTM! The delptr command is correctly initialized with the appropriate flags.
264-266
: LGTM! The incrptr command is correctly initialized with the appropriate flags.
269-270
: LGTM! The incrbyptr command is correctly initialized with the appropriate flags.
273-275
: LGTM! The incrbyfloatptr command is correctly initialized with the appropriate flags.
277-279
: LGTM! The decrptr command is correctly initialized with the appropriate flags.
282-283
: LGTM! The decrbyptr command is correctly initialized with the appropriate flags.
286-287
: LGTM! The getsetptr command is correctly initialized with the appropriate flags.
290-291
: LGTM! The appendptr command is correctly initialized with the appropriate flags.
293-296
: LGTM! The mgetptr command is correctly initialized with the appropriate flags.
303-304
: LGTM! The setnxptr command is correctly initialized with the appropriate flags.
306-308
: LGTM! The setexptr command is correctly initialized with the appropriate flags.
310-312
: LGTM! The psetexptr command is correctly initialized with the appropriate flags.
315-316
: LGTM! The delvxptr command is correctly initialized with the appropriate flags.
318-320
: LGTM! The msetptr command is correctly initialized with the appropriate flags.
322-324
: LGTM! The msetnxptr command is correctly initialized with the appropriate flags.
327-329
: LGTM! The getrangeptr command is correctly initialized with the appropriate flags.
332-333
: LGTM! The setrangeptr command is correctly initialized with the appropriate flags.
335-338
: LGTM! The strlenptr command is correctly initialized with the appropriate flags.
340-343
: LGTM! The existsptr command is correctly initialized with the appropriate flags.
346-348
: LGTM! The expireptr command is correctly initialized with the appropriate flags.
351-353
: LGTM! The pexpireptr command is correctly initialized with the appropriate flags.
355-358
: LGTM! The expireatptr command is correctly initialized with the appropriate flags.
360-363
: LGTM! The pexpireatptr command is correctly initialized with the appropriate flags.
365-367
: LGTM! The ttlptr command is correctly initialized with the appropriate flags.
369-371
: LGTM! The pttlptr command is correctly initialized with the appropriate flags.
373-376
: LGTM! The persistptr command is correctly initialized with the appropriate flags.
378-380
: LGTM! The typeptr command is correctly initialized with the appropriate flags.
391-392
: LGTM! The pksetexatptr command is correctly initialized with the appropriate flags.
394-396
: LGTM! The pkscanrangeptr command is correctly initialized with the appropriate flags.
398-400
: LGTM! The pkrscanrangeptr command is correctly initialized with the appropriate flags.
404-406
: LGTM! The hdelptr command is correctly initialized with the appropriate flags.
408-410
: LGTM! The hsetptr command is correctly initialized with the appropriate flags.
412-415
: LGTM! The hgetptr command is correctly initialized with the appropriate flags.
417-420
: LGTM! The hgetallptr command is correctly initialized with the appropriate flags.
422-425
: LGTM! The hexistsptr command is correctly initialized with the appropriate flags.
427-429
: LGTM! The hincrbyptr command is correctly initialized with the appropriate flags.
431-434
: LGTM! The hincrbyfloatptr command is correctly initialized with the appropriate flags.
436-439
: LGTM! The hkeysptr command is correctly initialized with the appropriate flags.
441-444
: LGTM! The hlenptr command is correctly initialized with the appropriate flags.
446-449
: LGTM! The hmgetptr command is correctly initialized with the appropriate flags.
451-453
: LGTM! The hmsetptr command is correctly initialized with the appropriate flags.
455-457
: LGTM! The hsetnxptr command is correctly initialized with the appropriate flags.
459-462
: LGTM! The hstrlenptr command is correctly initialized with the appropriate flags.
464-467
: LGTM! The hvalsptr command is correctly initialized with the appropriate flags.
469-471
: LGTM! The hscanptr command is correctly initialized with the appropriate flags.
473-475
: LGTM! The hscanxptr command is correctly initialized with the appropriate flags.
477-479
: LGTM! The pkhscanrangeptr command is correctly initialized with the appropriate flags.
481-483
: LGTM! The pkhrscanrangeptr command is correctly initialized with the appropriate flags.
485-488
: LGTM! The ehsetptr command is correctly initialized with the appropriate flags.
490-493
: LGTM! The ehexpireptr command is correctly initialized with the appropriate flags.
495-497
: LGTM! The ehexpireatptr command is correctly initialized with the appropriate flags.
507-510
: LGTM! The ehpersistptr command is correctly initialized with the appropriate flags.
511-514
: LGTM! The ehgetptr command is correctly initialized with the appropriate flags.
516-518
: LGTM! The ehsetexptr command is correctly initialized with the appropriate flags.
520-522
: LGTM! The ehexistsptr command is correctly initialized with the appropriate flags.
524-526
: LGTM! The ehdelptr command is correctly initialized with the appropriate flags.
528-530
: LGTM! The ehlenptr command is correctly initialized with the appropriate flags.
532-534
: LGTM! The ehstrlenptr command is correctly initialized with the appropriate flags.
536-538
: LGTM! The ehincrbyptr command is correctly initialized with the appropriate flags.
502-505
:⚠️ Potential issue: Correct command flags for PKHTTLCmd
The PKHTTL command is intended to return the time-to-live for a hash key, which is a read operation. However, it is currently flagged as a write command. Please change the flags to use kCmdFlagsRead to accurately reflect its functionality.
-  std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
-      kCmdNamePKHTTL, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
+  std::unique_ptr<Cmd> ehttlptr = std::make_unique<PKHTTLCmd>(
+      kCmdNamePKHTTL, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);

Likely invalid or redundant comment.
498-501
:⚠️ Potential issue: Correct command flags for PKHExpiretimeCmd
The PKHExpiretime command retrieves the expiration time of a hash key, which is a read operation. Currently, it is flagged as a write command. Please change the flags to use kCmdFlagsRead to accurately reflect its functionality.
Apply this diff:
-  std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
-      kCmdNamePKHExpiretime, -4, kCmdFlagsWrite | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);
+  std::unique_ptr<Cmd> ehexpiretimeptr = std::make_unique<PKHExpiretimeCmd>(
+      kCmdNamePKHExpiretime, -4, kCmdFlagsRead | kCmdFlagsDoThroughDB | kCmdFlagsPKHash | kCmdFlagsFast);

Likely invalid or redundant comment.
class PKHLenCmd : public Cmd {
 public:
  PKHLenCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void DoThroughDB() override;
  void DoUpdateCache() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHLenCmd(*this); }

 private:
  std::string key_;
  bool is_force_;
  void DoInitial() override;
  rocksdb::Status s_;
};
Initialize the is_force_ member variable.
The is_force_ member variable is declared but not initialized. This may lead to undefined behavior if it's used before being assigned a value. Consider initializing it in the constructor or implementing a Clear() method to reset the internal state.
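A minimal illustration of the suggested fix, combining an in-class default initializer with a Clear() reset (the class here is a simplified stand-in, not the actual PKHLenCmd):

```cpp
#include <string>
#include <utility>

// Simplified stand-in for a command object whose flag must never be read
// before assignment. The in-class initializer covers every constructor path.
class LenCmdSketch {
 public:
  explicit LenCmdSketch(std::string key) : key_(std::move(key)) {}

  // Reset per-request state so a pooled or reused object starts clean.
  void Clear() {
    key_.clear();
    is_force_ = false;
  }

  bool is_force() const { return is_force_; }
  const std::string& key() const { return key_; }

 private:
  std::string key_;
  bool is_force_ = false;  // defined value even if no constructor sets it
};
```

The default member initializer means the flag is well-defined from construction onward, and Clear() keeps reused objects from leaking state between requests.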
class PKHGetAllCmd : public Cmd {
 public:
  PKHGetAllCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void DoThroughDB() override;
  void DoUpdateCache() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHGetAllCmd(*this); }

 private:
  std::string key_;
  bool is_wt_;
  void DoInitial() override;
  rocksdb::Status s_;
};
Initialize the is_wt_ member variable.
The is_wt_ member variable is declared but not initialized. This may lead to undefined behavior if it's used before being assigned a value. Consider initializing it in the constructor or implementing a Clear() method to reset the internal state.
@@ -290,7 +310,8 @@ enum CmdFlags {
   kCmdFlagsOperateKey = (1 << 19),  // redis keySpace
   kCmdFlagsStream = (1 << 20),
   kCmdFlagsFast = (1 << 21),
-  kCmdFlagsSlow = (1 << 22)
+  kCmdFlagsSlow = (1 << 22),
+  kCmdFlagsPKHash = (1 << 23),
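For readers unfamiliar with the flag scheme: each kCmdFlags* value occupies a distinct bit, so a command's flag word is the bitwise OR of its properties and membership tests are bitwise ANDs. A minimal self-contained sketch (the three enumerators mirror the diff above; the helper function is illustrative, not Pika code):

```cpp
#include <cstdint>

// Mirrors the bit layout in the diff: each flag is a distinct power of two.
enum CmdFlags : uint32_t {
  kCmdFlagsFast = (1u << 21),
  kCmdFlagsSlow = (1u << 22),
  kCmdFlagsPKHash = (1u << 23),
};

// True when the command's combined flag word carries the PKHash bit.
inline bool IsPKHashCmd(uint32_t flags) {
  return (flags & kCmdFlagsPKHash) != 0;
}
```

Because the bits are disjoint, adding kCmdFlagsPKHash at bit 23 cannot collide with any existing flag, which is why the enum extension is backward compatible.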
💡 Codebase verification
Flag Inconsistencies Found: Missing kCmdFlagsPKHash
The following files are missing the kCmdFlagsPKHash
flag where it should be applied:
tools/pika-port/pika_port_3/pika_command.h
include/pika_hyperloglog.h
src/pika_zset.cc
include/pika_set.h
include/pika_kv.h
include/pika_bit.h
src/pika_list.h
src/pika_cmd_table_manager.cc
src/acl.cc
Please ensure that kCmdFlagsPKHash
is consistently added to all relevant command definitions to maintain proper functionality.
🔗 Analysis chain
LGTM: New PKHash flag added.
The kCmdFlagsPKHash
flag has been added to the CmdFlags
enum, which is consistent with the addition of new PKHash commands. This flag will allow for proper identification and handling of PKHash commands in the system.
To ensure the new flag is being utilized correctly, please run the following script to check for its usage across the codebase:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the usage of the new kCmdFlagsPKHash flag
# Search for uses of the new flag
echo "Searching for uses of kCmdFlagsPKHash:"
rg "kCmdFlagsPKHash" --type cpp
# Search for potential places where the flag should be used
echo "Potential places where kCmdFlagsPKHash should be used:"
rg "kCmdFlags(Kv|Hash|List|Set|Zset|Bit|HyperLogLog|Geo|PubSub|Stream)" --type cpp
Length of output: 23556
Script:
#!/bin/bash
# Description: Verify that kCmdFlagsPKHash is used in all relevant command definitions
# List of potential flag definitions where kCmdFlagsPKHash should be applied
potential_flags=(
"kCmdFlagsKv"
"kCmdFlagsHash"
"kCmdFlagsList"
"kCmdFlagsSet"
"kCmdFlagsZset"
"kCmdFlagsBit"
"kCmdFlagsHyperLogLog"
"kCmdFlagsGeo"
"kCmdFlagsPubSub"
"kCmdFlagsStream"
)
echo "Verifying usage of kCmdFlagsPKHash in relevant command definitions:"
for flag in "${potential_flags[@]}"; do
echo "Checking for commands using ${flag}:"
rg "${flag}" --type cpp | while read -r line; do
file=$(echo "$line" | cut -d':' -f1)
if ! grep -q "kCmdFlagsPKHash" "$file"; then
echo "Missing kCmdFlagsPKHash in $file"
fi
done
done
Length of output: 47799
}
// column family stats
std::map<std::string, std::string> mapvalues;
db_->rocksdb::DB::GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);
Fix method invocation of 'GetMapProperty'
The method call to GetMapProperty incorrectly qualifies the method with rocksdb::DB::. Since db_ is a pointer to rocksdb::DB, the correct invocation should be:
db_->GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);
Apply this diff to fix the method call:
- db_->rocksdb::DB::GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);
+ db_->GetMapProperty(rocksdb::DB::Properties::kCFStats, &mapvalues);
Status EnableDymayticOptions(const OptionType& option_type, const std::string& db_type,
                             const std::unordered_map<std::string, std::string>& options);
Correct the typo in EnableDymayticOptions method name
As previously pointed out, there's a typo in the method name. It should be "Dynamic" instead of "Dymaytic".
Please correct the method name:
Status EnableDynamicOptions(const OptionType& option_type, const std::string& db_type,
const std::unordered_map<std::string, std::string>& options);
version = parsed_hashes_meta_value.UpdateVersion();
parsed_hashes_meta_value.SetCount(1);
parsed_hashes_meta_value.SetEtime(0);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
🛠️ Refactor suggestion
Simplify version and count updates
In the PKHIncrby
function, when the hash is stale or empty, the code updates the version and resets the count. However, resetting the count and expiration time is unnecessary because UpdateVersion()
and InitialMetaValue()
already handle these.
Apply this diff to remove redundant code:
if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.Count() == 0) {
version = parsed_hashes_meta_value.UpdateVersion();
- parsed_hashes_meta_value.SetCount(1);
- parsed_hashes_meta_value.SetEtime(0);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
HashesDataKey hashes_data_key(key, version, field);
Committable suggestion was skipped due to low confidence.
int64_t rest = count;
int64_t step_length = count;
rocksdb::ReadOptions read_options;
const rocksdb::Snapshot* snapshot;
|
Validate the count parameter to prevent unexpected behavior.
In the PKHScan function, the variables rest and step_length are initialized with count, but there is no validation to ensure count is positive. A non-positive count could lead to unexpected behavior.
Add validation for the count parameter:
if (count <= 0) {
*next_cursor = 0;
return Status::OK();
}
", get type: " + DataTypeStrings[static_cast<int>(GetMetaValueType(meta_value))]); | ||
} | ||
} |
Handle unexpected errors from GetScanStartPoint
After retrieving the scan start point with GetScanStartPoint, the current code only checks for IsNotFound(). Other errors are not handled, which might lead to unintended behavior.
Adjust the error handling to catch all unexpected errors:
s = GetScanStartPoint(DataType::kPKHashes, key, pattern, cursor, &start_point);
- if (s.IsNotFound()) {
+ if (!s.ok()) {
+ if (s.IsNotFound()) {
cursor = 0;
if (isTailWildcard(pattern)) {
start_point = pattern.substr(0, pattern.size() - 1);
}
+ } else {
+ return s;
+ }
}
Committable suggestion was skipped due to low confidence.
src/storage/src/redis_pkhashes.cc
Outdated
for (const auto& field : fields) {
  HashesDataKey data_key(key, version, field);
  std::string data_value;
  s = db_->Get(default_read_options_, handles_[kPKHashDataCF], data_key.Encode(), &data_value);
  if (s.ok()) {
    ParsedPKHashDataValue parsed_internal_value(&data_value);
    if (parsed_internal_value.IsStale()) {
      rets->push_back(-2);
    } else {
      rets->push_back(1);
      parsed_internal_value.SetRelativeTimestamp(ttl);
      batch.Put(handles_[kPKHashDataCF], data_key.Encode(), data_value);
    }
  }
}
Handle potential errors when setting field expirations
In the PKHExpire method, the loop over the fields vector does not handle cases where db_->Get returns an error other than Status::NotFound. If Get fails due to a temporary error, the function may proceed incorrectly.
Apply this diff to handle potential errors:
for (const auto& field : fields) {
HashesDataKey data_key(key, version, field);
std::string data_value;
s = db_->Get(default_read_options_, handles_[kPKHashDataCF], data_key.Encode(), &data_value);
- if (s.ok()) {
+ if (!s.ok()) {
+ if (s.IsNotFound()) {
+ rets->push_back(-2);
+ continue;
+ } else {
+ return s;
+ }
+ }
+ ParsedPKHashDataValue parsed_internal_value(&data_value);
+ if (parsed_internal_value.IsStale()) {
rets->push_back(-2);
} else {
rets->push_back(1);
parsed_internal_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], data_key.Encode(), data_value);
}
}
LOG(INFO) << fmt::format("[key : {:<30}] [value : {:<30}] [timestamp : {:<10}] [version : {}] [survival_time : {}]",
                         parsed_strings_key.Key().ToString(), parsed_strings_value.UserValue().ToString(),
                         parsed_strings_value.Etime(), parsed_strings_value.Version(), survival_time);
Avoid logging sensitive information
The LOG(INFO) statement logs key and value data, which may contain sensitive information. Logging such data could lead to potential PII leakage. Consider removing or masking sensitive information before logging.
Apply this diff to sanitize the log output:
LOG(INFO) << fmt::format("[key : {:<30}] [value : {:<30}] [timestamp : {:<10}] [version : {}] [survival_time : {}]",
- parsed_strings_key.Key().ToString(), parsed_strings_value.UserValue().ToString(),
+ parsed_strings_key.Key().ToString(), "<redacted>",
parsed_strings_value.Etime(), parsed_strings_value.Version(), survival_time);
Force-pushed from 3f146b6 to d690461 (Compare)
Actionable comments posted: 20
🧹 Outside diff range and nitpick comments (7)
src/storage/src/base_value_format.h (1)
150-164
: Add documentation for clarity.
Consider adding documentation to:
- Explain the purpose and contract of IsValid()
- Document the purpose of the reserve_ array or remove it if unused
Apply this diff:
- virtual bool IsValid() { return !IsStale(); }
+ // Returns true if the value is still valid (not expired)
+ virtual bool IsValid() { return !IsStale(); }
  virtual void StripSuffix() = 0;
 protected:
  virtual void SetVersionToValue() = 0;
  virtual void SetEtimeToValue() = 0;
  virtual void SetCtimeToValue() = 0;
  std::string* value_ = nullptr;
  rocksdb::Slice user_value_;
  uint64_t version_ = 0;
  uint64_t ctime_ = 0;
  uint64_t etime_ = 0;
  DataType type_;
- char reserve_[16] = {0}; // unused
+ // Reserved for future use to maintain ABI compatibility
+ char reserve_[16] = {0};
src/storage/src/redis.h (1)
528-529
: Consider tracking the TODO in the issue tracker.
The TODO comment about separating env for each rocksdb instance suggests a potential improvement for better isolation.
Would you like me to create a GitHub issue to track this TODO item?
src/storage/include/storage/storage.h (1)
419-463
: Add documentation for PKHash methods.
The new PKHash methods lack documentation explaining their behavior, parameters, and return values. Consider adding detailed documentation similar to other methods in the file.
Example documentation format:
// Sets the specified fields' expiration time in the hash stored at key.
// If key does not exist, returns an error.
// Returns the number of fields that were updated in rets.
Status PKHExpire(const Slice& key, int64_t ttl, int32_t numfields,
                 const std::vector<std::string>& fields, std::vector<int32_t>* rets);
src/storage/tests/pkhashes_test.cc (2)
582-582
: Translate or remove non-English comment.
The comment // 从这里开始有问题。 ("there is a problem starting from here") is not in English. To maintain consistency and readability, please translate it to English or remove it.
628-628
: Correct grammatical error in comment.
The comment // hset after string type key expires, should success has a grammatical error. It should read should succeed instead of should success.
Line range hint
218-226
: Initializettl_millsec
before use to prevent undefined behaviorIn the
MGetWithTTL
function, the variablettl_millsec
may be used uninitialized wheninst->MGetWithTTL
returnsNotFound()
. This could lead to undefined behavior when adding it tovss
. Ensure thatttl_millsec
is properly initialized before it's used.Apply this diff to initialize
ttl_millsec
before callingMGetWithTTL
:for (const auto& key : keys) { auto& inst = GetDBInstance(key); std::string value; + int64_t ttl_millsec = -1; s = inst->MGetWithTTL(key, &value, &ttl_millsec); if (s.ok()) { vss->push_back({value, Status::OK(), ttl_millsec}); } else if (s.IsNotFound()) { vss->push_back({std::string(), Status::NotFound(), ttl_millsec}); } else { vss->clear(); return s; } }
src/pika_command.cc (1)
966-966
: Correct the grammatical error in the comment.
Please update the comment on line 966 for clarity:
- Clear();  // Clear cmd, Derived class can has own implement
+ Clear();  // Clear cmd; derived classes can have their own implementation
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (20)
- CMakeLists.txt (1 hunks)
- include/acl.h (1 hunks)
- include/pika_command.h (5 hunks)
- include/pika_pkhash.h (1 hunks)
- src/pika_client_conn.cc (1 hunks)
- src/pika_command.cc (10 hunks)
- src/pika_pkhash.cc (1 hunks)
- src/storage/include/storage/storage.h (18 hunks)
- src/storage/include/storage/storage_define.h (2 hunks)
- src/storage/src/base_filter.h (6 hunks)
- src/storage/src/base_value_format.h (5 hunks)
- src/storage/src/pkhash_data_value_format.h (1 hunks)
- src/storage/src/redis.cc (12 hunks)
- src/storage/src/redis.h (14 hunks)
- src/storage/src/redis_hashes.cc (29 hunks)
- src/storage/src/redis_pkhashes.cc (1 hunks)
- src/storage/src/redis_strings.cc (35 hunks)
- src/storage/src/storage.cc (28 hunks)
- src/storage/tests/hashes_test.cc (3 hunks)
- src/storage/tests/pkhashes_test.cc (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
- include/acl.h
- src/pika_client_conn.cc
- src/storage/include/storage/storage_define.h
- src/storage/src/base_filter.h
- src/storage/src/pkhash_data_value_format.h
- src/storage/src/redis.cc
- src/storage/src/redis_hashes.cc
- src/storage/src/redis_strings.cc
- src/storage/tests/hashes_test.cc
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc
[performance] 577-577: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 786-786: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 869-869: Searching before insertion is not necessary.
(stlFindInsert)
🔇 Additional comments (20)
src/storage/src/base_value_format.h (2)
52-57: LGTM! Good improvements to the class structure.
The changes enhance the class design:
- Virtual destructor is correctly added for the base class
- Constructor initialization list is properly implemented
- Protected section formatting improves readability
21-31: ⚠️ Potential issue: Update DataTypeNum and document tag mappings.
Several issues need attention:
- DataTypeNum should be set to kAll (8) instead of kNones (7) to reflect the actual number of valid data types.
- The tag mappings in DataTypeTag should be documented to explain the relationship between tags and enum values.
Apply this diff:
 enum class DataType : uint8_t {
   kStrings = 0,   // 'k' - key-value strings
   kHashes = 1,    // 'h' - hashes
   kSets = 2,      // 's' - sets
   kLists = 3,     // 'l' - lists
   kZSets = 4,     // 'z' - sorted sets
   kStreams = 5,   // 'x' - streams
   kPKHashes = 6,  // 'p' - persistent key hashes
   kNones = 7,     // 'n' - none
   kAll = 8,       // 'a' - all types
 };
-constexpr int DataTypeNum = int(DataType::kNones);
+constexpr int DataTypeNum = int(DataType::kAll);
-constexpr char DataTypeTag[] = {'k', 'h', 's', 'l', 'z', 'x', 'e', 'n', 'a'};
+constexpr char DataTypeTag[] = {'k', 'h', 's', 'l', 'z', 'x', 'p', 'n', 'a'};
Also applies to: 34-35
✅ Verification successful
Let me analyze the findings from the shell script results:
DataTypeTag is used consistently across the codebase in multiple files:
src/storage/src/redis.h
src/storage/src/redis.cc
src/storage/src/storage.cc
src/storage/src/base_value_format.h
The tag 'e' for kPKHashes is used in key construction, cursor management, and type identification throughout the codebase.
Let me verify if changing it to 'p' would cause conflicts:
Let me analyze the findings from the shell script results:
The search for 'p' usage shows it's used in various contexts:
- As a parameter name in function arguments
- As a variable name in loops
- As a time format specifier for AM/PM in Highcharts
- As an HTML tag in jQuery selectors
- As a URL parameter in Angular
Most importantly, there are no occurrences where 'p' is used as a data type tag in the storage system, which means changing 'e' to 'p' for kPKHashes would not conflict with any existing usage.
Let me verify the data type tag usage more specifically:
Based on the shell script results, I can now provide a complete analysis:
The review comment is correct and should be implemented.
The DataTypeTag array is used consistently throughout the storage system to identify data types in:
- Key construction for cursors (storage.cc)
- Iterator creation (redis.h)
- Key lookup operations (redis.cc)
- Statistics tracking (redis.cc)
The tag 'e' for kPKHashes should be changed to 'p' because:
- It's more semantically meaningful ('p' for "persistent key hashes")
- The tag 'p' is not used for any other data type in the storage system
- The change is consistent with the string representation "pkhash" in DataTypeStrings
- No conflicts were found in the codebase where 'p' is used as a data type tag
DataTypeNum should be updated to kAll since it represents the maximum valid data type value.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for DataTypeTag usage to ensure consistency rg "DataTypeTag" -A 5Length of output: 5639
Script:
#!/bin/bash # Search for any usage of 'p' as a type identifier rg "'p'|\"p\"" -A 2Length of output: 179611
Script:
#!/bin/bash # Search for data type tag usage in storage files rg "DataTypeTag" src/storage/src/ -A 2 -B 2Length of output: 4961
include/pika_pkhash.h (1)
439-467: Consider adding bounds checking for cursor and count.
The PKHScanCmd should validate:
- Cursor value is non-negative
- Count value is positive
Let's verify if there are any existing bounds checks in the implementation:
include/pika_command.h (2)
140-160: LGTM: PKHash command constants are well-defined.
The new PKHash command constants follow consistent naming patterns and provide a comprehensive set of operations including the required expiration functionality.
314-314: Verify PKHash flag usage across the codebase.
The kCmdFlagsPKHash flag has been added, but ensure it's properly applied to all PKHash commands in their implementations.
✅ Verification successful
PKHash flag is correctly applied to all PKHash commands
The verification shows that kCmdFlagsPKHash is properly used in all PKHash command constructors in src/pika_command.cc. The flag is consistently applied along with other appropriate flags (read/write, throughDB, fast/slow) for each PKHash command implementation, including:
- Write commands: PKHSet, PKHExpire, PKHExpireat, PKHSetex, PKHDel, PKHIncrby, PKHMSet
- Read commands: PKHGet, PKHExists, PKHLen, PKHStrlen, PKHMGet, PKHKeys
- Slow operations: PKHVals, PKHGetall, PKHScan
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify PKHash flag usage in command implementations

# Search for PKHash command class definitions
echo "Checking PKHash command implementations for flag usage:"
ast-grep --pattern 'class PKH$$$Cmd : public Cmd { $$$ PKH$$$Cmd() : Cmd($$$) { $$$ } $$$ }'

# Search for PKHash flag usage in command constructors
echo "Checking for kCmdFlagsPKHash usage in constructors:"
rg "kCmdFlagsPKHash" --type cpp
src/storage/src/redis.h (3)
151-152: LGTM! Method signatures reformatted for better readability.
The changes improve code consistency and readability without affecting functionality.
Also applies to: 297-298, 300-301, 310-311
464-465: LGTM! PKHashes correctly added to stale check logic.
The PKHashes case is appropriately handled in the ExpectedStale method, following the same pattern as regular Hashes.
251-291: Well-structured implementation of PK Hash commands.
The new PK Hash commands are well-organized and follow consistent patterns with other Redis data type implementations. The comprehensive set of commands (PKHExpire, PKHGet, PKHSet, etc.) aligns perfectly with the PR objectives.
Let's verify the consistency of PK Hash command implementations:
✅ Verification successful
PK Hash commands implementation is consistent and well-tested
The verification confirms that:
- TTL handling is consistent across PKHSetex and PKHMSetex implementations with proper validation
- Error handling follows a uniform pattern across all PK Hash commands
- Comprehensive test coverage exists for TTL-related operations (PKHExpire, PKHExpireat, PKHSetex)
- The implementation maintains atomicity through proper locking mechanisms
- Command implementations follow consistent patterns for argument validation and error responses
The code structure and implementation patterns align perfectly with the PR objectives and maintain consistency with other Redis data type implementations.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify PK Hash command implementations follow consistent patterns

# Check for consistent error handling patterns
rg "Status PKH\w+\s*\([^)]+\)\s*{" --type cpp -A 5

# Check for consistent TTL handling
rg "PKH\w+ex\s*\([^)]+ttl" --type cpp -A 5
Script:
#!/bin/bash
# Let's fix the regex and check for consistent error handling patterns
rg "Status PKH.*?\{" --type cpp -A 5

# Let's also check the implementation of PKHMSetex to verify TTL handling consistency
rg "PKHMSetex.*?\{" --type cpp -A 10
Script:
#!/bin/bash
# Let's check the implementation of PKHSetex to compare with PKHMSetex
rg "PKHSetex.*?\{" --type cpp -A 10

# Also check PKHExpire implementation for TTL handling
rg "PKHExpire.*?\{" --type cpp -A 10
src/storage/include/storage/storage.h (2)
112-114: LGTM: ValueStatus struct changes
The addition of ttl_millsec and its inclusion in operator== is well implemented and consistent with the TTL functionality.
421-424: ⚠️ Potential issue: Use int64_t consistently for TTL parameters
For consistency and to support larger TTL values, use int64_t instead of int32_t for TTL parameters.
-Status PKHExpire(const Slice& key, int32_t ttl, int32_t numfields, const std::vector<std::string>& fields,
+Status PKHExpire(const Slice& key, int64_t ttl, int32_t numfields, const std::vector<std::string>& fields,
                  std::vector<int32_t>* rets);
Likely invalid or redundant comment.
src/storage/src/redis_pkhashes.cc (4)
23-64: Function PKHGet implemented correctly
The PKHGet function properly retrieves the value of a specified field in a hash, with appropriate handling of stale entries and type checks.
66-152: Function PKHSet implemented correctly
The PKHSet function correctly sets the value of a hash field, managing metadata updates and handling both existing and new fields appropriately.
155-217: Function PKHExpire implemented correctly
The PKHExpire function accurately sets expiration times for specified hash fields, with proper input validation and error handling.
473-563: Function PKHSetex implemented correctly
The PKHSetex function effectively sets the value and expiration time for a hash field, ensuring TTL is applied correctly.
src/storage/tests/pkhashes_test.cc (4)
27-33: ⚠️ Potential issue: Check the return status of db.Open in SetUp()
The return status of db.Open(storage_options, path); is assigned to s, but there is no check to verify whether the operation was successful. Ignoring the status may lead to unexpected behavior if the database fails to open.
Apply this diff to add an assertion to ensure the database opens successfully:
 s = db.Open(storage_options, path);
+ASSERT_TRUE(s.ok());
Likely invalid or redundant comment.
872-877: ⚠️ Potential issue: Use IsInvalidArgument instead of IsCorruption in PKHIncrby
When incrementing a field with invalid numeric data, you check s.IsCorruption(). It's more appropriate to check for s.IsInvalidArgument() when handling invalid input.
Apply this change:
-ASSERT_TRUE(s.IsCorruption());
+ASSERT_TRUE(s.IsInvalidArgument());
Likely invalid or redundant comment.
95-103: ⚠️ Potential issue: Remove unused variable and correct error checking in make_expired
In the make_expired function, the variable type_status is declared but not used effectively. The condition !type_status[storage::DataType::kHashes].ok() is invalid because type_status is empty, leading to improper error handling.
Apply this diff to remove the unused variable and correct the error checking:
-static std::map<storage::DataType, rocksdb::Status> type_status;
 int ret = db->Expire(key, 1);
-if ((ret == 0) || !type_status[storage::DataType::kHashes].ok()) {
+if (ret == 0) {
   return false;
 }
Alternatively, if you need to check the status returned by Expire, modify the code as follows:
-static std::map<storage::DataType, rocksdb::Status> type_status;
-int ret = db->Expire(key, 1);
+storage::Status s = db->Expire(key, 1);
-if ((ret == 0) || !type_status[storage::DataType::kHashes].ok()) {
+if (!s.ok()) {
   return false;
 }
Likely invalid or redundant comment.
1468-1475: Ensure proper handling of expired keys in PKHScanTest
In the PKHScan test for Group 10, after expiring the key using make_expired, you are checking for s.IsNotFound(). Confirm that the make_expired function effectively expires the key and that the test reliably validates the expected behavior.
✅ Verification successful
Let me gather more information about the
make_expired
function and its implementation to verify the expiration handling.
Let me search for the implementation of
make_expired
function in the pkhashes test file to verify its behavior.
Let me try a different approach to find the implementation.
Based on the implementation found, I can now provide the final response:
The key expiration handling in
PKHScanTest
is properly implementedThe
make_expired
function correctly handles key expiration by:
- Setting a 1-second expiration using
db->Expire(key, 1)
- Waiting for 2 seconds (
sleep_for(2000ms)
) to ensure the key has expired- Verifying the expired state through
IsNotFound()
status checkThe test properly validates that scanning an expired key returns a NotFound status and empty results.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Confirm that key "GP10_HSCAN_KEY" has expired.
# Test: Attempt to get the key. Expect: No results found.
ast-grep --pattern $'#define GP10_HSCAN_KEY' \
  && echo "Key exists" || echo "Key has expired or does not exist"
Script:
#!/bin/bash
# Search for make_expired function implementation
ast-grep --pattern 'make_expired($$$)' -l cpp

# Also search using ripgrep for any related implementation
rg "make_expired" -A 5
Script:
#!/bin/bash
# Search for make_expired implementation in pkhashes_test.cc
rg -A 10 "make_expired.*pkhashes_test"

# Also search for Expire implementation
ast-grep --pattern 'int.*Expire($$$)' -l cpp
Script:
#!/bin/bash
# Search for make_expired function in pkhashes_test.cc directly
rg -A 6 "^static.*make_expired" src/storage/tests/pkhashes_test.cc

# Also check the Expire function usage in the test
rg "db.*Expire" src/storage/tests/pkhashes_test.cc
src/storage/src/storage.cc (1)
488-595: Addition of PKH* methods enhances Pika Hash functionality
The implementation of the new PKH* methods is well-structured and adheres to the existing code conventions. These additions significantly enhance the Pika Hash capabilities, providing more flexible data management and expiration control.
485-564: PKHash Commands Added Correctly
The new PKHash commands have been added to the command table with appropriate initializations and flags. This enhances the command handling capabilities for PKHash operations.
#ifndef PIKA_PKHASH_H_
#define PIKA_PKHASH_H_

#include "include/acl.h"
#include "include/pika_command.h"
#include "include/pika_db.h"
#include "storage/storage.h"
|
🛠️ Refactor suggestion
Consider adding documentation and extracting common code.
The file would benefit from:
- Class-level documentation explaining the purpose and usage of each command
- Consider extracting common code (e.g., current_key(), status handling) into a base class specific to PKHash commands
Add documentation at the file level:
/**
* @file pika_pkhash.h
* @brief Command implementations for PKHash operations in Pika database.
*
* This file contains command classes that implement various hash operations including:
* - Hash expiration (PKHExpire, PKHExpireat, PKHExpiretime)
* - Hash persistence (PKHPersist)
* - Hash manipulation (PKHGet, PKHSet, etc.)
* - Hash scanning and iteration (PKHScan)
*/
virtual void Clear() {
  pattern_ = "*";
  count_ = 10;
  is_wt_ = false;
}
Initialize all member variables in Clear() method.
The cursor_ member variable is not initialized in the Clear() method, which could lead to undefined behavior.
Apply this fix:
virtual void Clear() {
+ key_.clear();
pattern_ = "*";
+ cursor_ = 0;
count_ = 10;
is_wt_ = false;
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
virtual void Clear() {
  key_.clear();
  pattern_ = "*";
  cursor_ = 0;
  count_ = 10;
  is_wt_ = false;
}
set(CMAKE_BUILD_TYPE "Debug")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O0 -Wall -g3 -ggdb -fno-inline -fno-builtin-memcmp")
Remove hardcoded Debug build type and compiler flags.
The changes force Debug build type and add debug-specific compiler flags that disable optimizations. This could significantly impact performance in production environments and conflicts with the existing build type logic below (lines 46-55).
Consider one of these alternatives:
- Remove these lines and let the existing build type logic handle it:
-set(CMAKE_BUILD_TYPE "Debug")
-set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O0 -Wall -g3 -ggdb -fno-inline -fno-builtin-memcmp")
- Move these settings to a development-only configuration:
option(PIKA_DEVELOPMENT_MODE "Enable development mode with debug settings" OFF)
if(PIKA_DEVELOPMENT_MODE)
set(CMAKE_BUILD_TYPE "Debug")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O0 -Wall -g3 -ggdb -fno-inline -fno-builtin-memcmp")
endif()
Status Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_millsec,
              std::string& out_new_value);
Use pointers consistently for output parameters
For consistency with the codebase style, output parameters should use pointers rather than references.
-Status Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_millsec,
- std::string& out_new_value);
+Status Append(const Slice& key, const Slice& value, int32_t* ret, int64_t* expired_timestamp_millsec,
+ std::string* out_new_value);
-Status BitOp(BitOpType op, const std::string& dest_key, const std::vector<std::string>& src_keys,
- std::string& value_to_dest, int64_t* ret);
+Status BitOp(BitOpType op, const std::string& dest_key, const std::vector<std::string>& src_keys,
+ std::string* value_to_dest, int64_t* ret);
Also applies to: 283-284
struct FieldValueTTL {
  std::string field;
  std::string value;
  int64_t ttl_millsec;
  bool operator==(const FieldValueTTL& fv) const {
    return (fv.field == field && fv.value == value && fv.ttl_millsec == ttl_millsec);
  }
};
🛠️ Refactor suggestion
Add constructors to FieldValueTTL struct
The struct should have constructors to ensure proper initialization of all fields, especially ttl_millsec. Consider adding both copy and move constructors for better performance.
struct FieldValueTTL {
std::string field;
std::string value;
int64_t ttl_millsec;
+ FieldValueTTL() : field(""), value(""), ttl_millsec(0) {}
+ FieldValueTTL(const std::string& f, const std::string& v, int64_t ttl)
+ : field(f), value(v), ttl_millsec(ttl) {}
+ FieldValueTTL(std::string&& f, std::string&& v, int64_t ttl)
+ : field(std::move(f)), value(std::move(v)), ttl_millsec(ttl) {}
bool operator==(const FieldValueTTL& fv) const {
return (fv.field == field && fv.value == value && fv.ttl_millsec == ttl_millsec);
}
};
📝 Committable suggestion
struct FieldValueTTL {
  std::string field;
  std::string value;
  int64_t ttl_millsec;
  FieldValueTTL() : field(""), value(""), ttl_millsec(0) {}
  FieldValueTTL(const std::string& f, const std::string& v, int64_t ttl)
      : field(f), value(v), ttl_millsec(ttl) {}
  FieldValueTTL(std::string&& f, std::string&& v, int64_t ttl)
      : field(std::move(f)), value(std::move(v)), ttl_millsec(ttl) {}
  bool operator==(const FieldValueTTL& fv) const {
    return (fv.field == field && fv.value == value && fv.ttl_millsec == ttl_millsec);
  }
};
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("HMSET_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
|
Correct error checking and remove unused type_status variable in PKHMSetTest
In the PKHMSetTest, the type_status map is declared but not used properly. The assertion ASSERT_TRUE(type_status[storage::DataType::kHashes].ok()); is invalid because type_status is empty.
Apply this diff to correct the error checking:
-std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("HMSET_KEY", 1);
-ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
+ASSERT_TRUE(s.ok());
Capture and check the status returned by Expire:
+storage::Status s = db.Expire("HMSET_KEY", 1);
ASSERT_TRUE(s.ok());
📝 Committable suggestion
storage::Status s = db.Expire("HMSET_KEY", 1);
ASSERT_TRUE(s.ok());
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("B_HKEYS_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
Correct error checking and remove unused type_status variable in PKHKeys
In the PKHKeys test, the type_status map is declared but not initialized or used properly. The assertion ASSERT_TRUE(type_status[storage::DataType::kHashes].ok()); is invalid because type_status is empty.
Apply this diff to correct the error checking:
-std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("B_HKEYS_KEY", 1);
-ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
+ASSERT_TRUE(s.ok());
Capture and check the status:
+storage::Status s = db.Expire("B_HKEYS_KEY", 1);
ASSERT_TRUE(s.ok());
📝 Committable suggestion
storage::Status s = db.Expire("B_HKEYS_KEY", 1);
ASSERT_TRUE(s.ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("B_HVALS_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
Correct error checking and remove unused type_status variable in PKHVals
In the PKHVals test, the type_status map is declared but not used properly. The assertion ASSERT_TRUE(type_status[storage::DataType::kHashes].ok()); is invalid due to the empty type_status.
Apply this diff to correct the error checking:
-std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("B_HVALS_KEY", 1);
-ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
+ASSERT_TRUE(s.ok());
Capture and check the status:
+storage::Status s = db.Expire("B_HVALS_KEY", 1);
ASSERT_TRUE(s.ok());
Committable suggestion was skipped due to low confidence.
db.Expire("HDEL_TIMEOUT_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
Correct error checking and remove unused type_status variable in PKHDel
In the PKHDel test, the type_status map is declared but not initialized. The assertion ASSERT_TRUE(type_status[storage::DataType::kHashes].ok()); does not function correctly since type_status is empty.
Apply this diff to correct the error checking:
-std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("HDEL_TIMEOUT_KEY", 1);
-ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
+ASSERT_TRUE(s.ok());
Capture and check the status:
+storage::Status s = db.Expire("HDEL_TIMEOUT_KEY", 1);
ASSERT_TRUE(s.ok());
Committable suggestion was skipped due to low confidence.
std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("B_HGETALL_KEY", 1);
ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
Correct error checking and remove unused type_status variable in PKHGetall
In the PKHGetall test, the type_status map is declared but not properly used. The assertion ASSERT_TRUE(type_status[storage::DataType::kHashes].ok()); is invalid due to the empty type_status.
Apply this diff to correct the error checking:
-std::map<storage::DataType, rocksdb::Status> type_status;
db.Expire("B_HGETALL_KEY", 1);
-ASSERT_TRUE(type_status[storage::DataType::kHashes].ok());
+ASSERT_TRUE(s.ok());
Capture and check the status:
+storage::Status s = db.Expire("B_HGETALL_KEY", 1);
ASSERT_TRUE(s.ok());
Committable suggestion was skipped due to low confidence.
1. pkhget pkhset
2. pkhexpire pkhexpireat
3. pkhexpiretime pkhpersist pkhttl
4. add new test cases for pkhash cmd below
5. PKHSetex PKHExists PKHDel PKHLen PKHStrlen
6. PKHIncrby PKHMSet PKHMSetex PKHMGet PKHKeys
7. PKHVals PKHGetall PKHScan
d690461 to 589a5f6
Actionable comments posted: 17
🧹 Outside diff range and nitpick comments (23)
include/pika_pkhash.h (1)
1-13
: Add comprehensive file-level documentation.
The file would benefit from detailed documentation explaining:
- Purpose and scope of PKHash operations
- Relationship between different command classes
- Usage examples and common patterns
Add this documentation at the file level:
/**
* @file pika_pkhash.h
* @brief Implementation of persistent hash commands with expiration support.
*
* This file implements a set of commands for managing persistent hash structures:
* - Expiration management (PKHExpire, PKHExpireat, PKHExpiretime, PKHPersist, PKHTTL)
* - Core operations (PKHGet, PKHSet, PKHExists)
* - Bulk operations (PKHMGet, PKHMSet)
* - Iteration (PKHScan, PKHKeys, PKHVals)
*
* Each command inherits from the base Cmd class and implements the required
* interface for execution, caching, and database operations.
*/
src/storage/src/redis.h (1)
Line range hint 61-77
: Consider using a more efficient algorithm for calculating the average duration.
The current implementation finds the min and max values to exclude them from the average calculation. This approach has a time complexity of O(n), where n is the size of durations.
Suggestions for optimization:
- Maintain a running sum of durations and subtract the minimum and maximum values before calculating the average.
- Use a data structure like std::priority_queue to efficiently track the min and max values.
Here's an optimized version using a running sum:
uint64_t AvgDuration() {
  if (durations.size() < window_size) {
    return 0;
  }
  // requires <numeric> and <algorithm>; drop one min and one max sample to
  // preserve the original trimming semantics
  uint64_t sum = std::accumulate(durations.begin(), durations.end(), 0ULL);
  auto [min_it, max_it] = std::minmax_element(durations.begin(), durations.end());
  sum -= *min_it + *max_it;
  return sum / (durations.size() - 2);
}
src/storage/tests/pkhashes_test.cc (2)
225-225
: Typographical error in comment
There is a typo in the comment. The word 'ex/pired' should be corrected to 'expired'.
130-131
: Uncomment or remove inactive assertions
The assertions are commented out. If they are necessary for the test, consider uncommenting them. If not, it's better to remove them to keep the code clean.
src/pika_command.cc (19)
56-57
: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::unique_ptr<Cmd> compactptr = std::make_unique<CompactCmd>(
- kCmdNameCompact, -1, kCmdFlagsRead | kCmdFlagsAdmin | kCmdFlagsSlow | kCmdFlagsSuspend);
+ std::unique_ptr<Cmd> compactptr =
+ std::make_unique<CompactCmd>(kCmdNameCompact, -1, kCmdFlagsRead | kCmdFlagsAdmin | kCmdFlagsSlow | kCmdFlagsSuspend);
60-61
: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::unique_ptr<Cmd> compactrangeptr =
- std::make_unique<CompactRangeCmd>(kCmdNameCompactRange, 4, kCmdFlagsRead | kCmdFlagsAdmin | kCmdFlagsSuspend);
+ std::unique_ptr<Cmd> compactrangeptr = std::make_unique<CompactRangeCmd>(
+ kCmdNameCompactRange, 4, kCmdFlagsRead | kCmdFlagsAdmin | kCmdFlagsSuspend);
159-160
: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::unique_ptr<Cmd> clearcacheptr =
- std::make_unique<ClearCacheCmd>(kCmdNameClearCache, 1, kCmdFlagsAdmin | kCmdFlagsWrite);
+ std::unique_ptr<Cmd> clearcacheptr = std::make_unique<ClearCacheCmd>(
+ kCmdNameClearCache, 1, kCmdFlagsAdmin | kCmdFlagsWrite);
162-163
: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::unique_ptr<Cmd> lastsaveptr =
- std::make_unique<LastsaveCmd>(kCmdNameLastSave, 1, kCmdFlagsAdmin | kCmdFlagsRead | kCmdFlagsFast);
+ std::unique_ptr<Cmd> lastsaveptr = std::make_unique<LastsaveCmd>(
+ kCmdNameLastSave, 1, kCmdFlagsAdmin | kCmdFlagsRead | kCmdFlagsFast);
247-248
: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::unique_ptr<Cmd> setptr = std::make_unique<SetCmd>(
- kCmdNameSet, -3, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
+ std::unique_ptr<Cmd> setptr =
+ std::make_unique<SetCmd>(kCmdNameSet, -3, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
251-253
: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::unique_ptr<Cmd> getptr = std::make_unique<GetCmd>(
- kCmdNameGet, 2,
- kCmdFlagsRead | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsReadCache | kCmdFlagsSlow);
+ std::unique_ptr<Cmd> getptr =
+ std::make_unique<GetCmd>(kCmdNameGet, 2, kCmdFlagsRead | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsReadCache | kCmdFlagsSlow);
256-258: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::unique_ptr<Cmd> delptr = std::make_unique<DelCmd>(
- kCmdNameDel, -2,
- kCmdFlagsWrite | kCmdFlagsOperateKey | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
+ std::unique_ptr<Cmd> delptr =
+ std::make_unique<DelCmd>(kCmdNameDel, -2, kCmdFlagsWrite | kCmdFlagsOperateKey | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
264-265: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::unique_ptr<Cmd> incrptr = std::make_unique<IncrCmd>(
- kCmdNameIncr, 2, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
+ std::unique_ptr<Cmd> incrptr =
+ std::make_unique<IncrCmd>(kCmdNameIncr, 2, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
269-269: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- kCmdNameIncrby, 3, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
+ kCmdNameIncrby, 3,
+ kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
273-274: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::unique_ptr<Cmd> incrbyfloatptr = std::make_unique<IncrbyfloatCmd>(
- kCmdNameIncrbyfloat, 3,
+ std::unique_ptr<Cmd> incrbyfloatptr =
+ std::make_unique<IncrbyfloatCmd>(kCmdNameIncrbyfloat, 3,
277-278: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::unique_ptr<Cmd> decrptr = std::make_unique<DecrCmd>(
- kCmdNameDecr, 2, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
+ std::unique_ptr<Cmd> decrptr =
+ std::make_unique<DecrCmd>(kCmdNameDecr, 2, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
282-282: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- kCmdNameDecrby, 3, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
+ kCmdNameDecrby, 3,
+ kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
286-286: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- kCmdNameGetset, 3, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
+ kCmdNameGetset, 3,
+ kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
290-290: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- kCmdNameAppend, 3, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
+ kCmdNameAppend, 3,
+ kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsFast);
293-295: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::unique_ptr<Cmd> mgetptr = std::make_unique<MgetCmd>(
- kCmdNameMget, -2,
- kCmdFlagsRead | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsReadCache | kCmdFlagsFast);
+ std::unique_ptr<Cmd> mgetptr =
+ std::make_unique<MgetCmd>(kCmdNameMget, -2, kCmdFlagsRead | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsReadCache | kCmdFlagsFast);
303-303: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::make_unique<SetnxCmd>(kCmdNameSetnx, 3, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsFast);
+ std::make_unique<SetnxCmd>(
+ kCmdNameSetnx, 3, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsFast);
306-307: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::unique_ptr<Cmd> setexptr = std::make_unique<SetexCmd>(
- kCmdNameSetex, 4, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsSlow);
+ std::unique_ptr<Cmd> setexptr =
+ std::make_unique<SetexCmd>(kCmdNameSetex, 4, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsSlow);
310-311: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::unique_ptr<Cmd> psetexptr = std::make_unique<PsetexCmd>(
- kCmdNamePsetex, 4, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsSlow);
+ std::unique_ptr<Cmd> psetexptr =
+ std::make_unique<PsetexCmd>(kCmdNamePsetex, 4, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdateCache | kCmdFlagsSlow);
315-315: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::make_unique<DelvxCmd>(kCmdNameDelvx, 3, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsSlow);
+ std::make_unique<DelvxCmd>(
+ kCmdNameDelvx, 3, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsSlow);
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (20)
- CMakeLists.txt (1 hunks)
- include/acl.h (1 hunks)
- include/pika_command.h (5 hunks)
- include/pika_pkhash.h (1 hunks)
- src/pika_client_conn.cc (1 hunks)
- src/pika_command.cc (10 hunks)
- src/pika_pkhash.cc (1 hunks)
- src/storage/include/storage/storage.h (17 hunks)
- src/storage/include/storage/storage_define.h (2 hunks)
- src/storage/src/base_filter.h (6 hunks)
- src/storage/src/base_value_format.h (5 hunks)
- src/storage/src/pkhash_data_value_format.h (1 hunks)
- src/storage/src/redis.cc (12 hunks)
- src/storage/src/redis.h (14 hunks)
- src/storage/src/redis_hashes.cc (29 hunks)
- src/storage/src/redis_pkhashes.cc (1 hunks)
- src/storage/src/redis_strings.cc (35 hunks)
- src/storage/src/storage.cc (28 hunks)
- src/storage/tests/hashes_test.cc (3 hunks)
- src/storage/tests/pkhashes_test.cc (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (12)
- include/acl.h
- include/pika_command.h
- src/pika_client_conn.cc
- src/pika_pkhash.cc
- src/storage/include/storage/storage_define.h
- src/storage/src/base_filter.h
- src/storage/src/base_value_format.h
- src/storage/src/pkhash_data_value_format.h
- src/storage/src/redis.cc
- src/storage/src/redis_hashes.cc
- src/storage/src/redis_strings.cc
- src/storage/tests/hashes_test.cc
🧰 Additional context used
🪛 cppcheck
src/storage/src/redis_pkhashes.cc
[performance] 577-577: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 786-786: Searching before insertion is not necessary.
(stlFindInsert)
[performance] 869-869: Searching before insertion is not necessary.
(stlFindInsert)
🔇 Additional comments (40)
CMakeLists.txt (2)
38-39: Remove hardcoded Debug build type.
The previous review comment about removing hardcoded Debug build type and compiler flags is still valid. This change could significantly impact performance in production environments.
#!/bin/bash
# Check if there are any production deployment scripts that might be affected
rg -l "CMAKE_BUILD_TYPE.*Release"
Line range hint 46-55: LGTM: Proper build type handling logic.
The existing build type logic correctly handles different build types with appropriate fallback to RELEASE, but it's being overridden by the hardcoded Debug setting above.
src/storage/src/storage.cc (4)
305-309: LGTM: Proper validation for BitOpNot operation
The added validation correctly ensures that BitOpNot operation is only performed on a single operand, returning an invalid argument error otherwise.
489-596: LGTM: Well-structured implementation of PKHash commands
The implementation follows the established pattern of:
- Getting the correct database instance using the key
- Delegating the operation to that instance
- Proper error propagation
All the required commands from PR objectives (pkhexpire, pkhexpireat, pkhexpiretime, pkhpersist, pkhttl) are implemented.
2061-2067: LGTM: Proper integration of PKHash column families
The PKHash column families are correctly integrated into the auto-compaction handling, following the same pattern as other data types (strings, hashes, lists, sets, zsets).
1443-1444: Verify the intended usage of nullptr bounds in CreateIterator
The code consistently passes nullptr for both lower_bound and upper_bound parameters in CreateIterator calls. Please verify that this is intentional and doesn't limit the scanning functionality.
Also applies to: 1491-1492, 1545-1545
✅ Verification successful
Let me gather more context about the CreateIterator implementation to understand if nullptr bounds are part of the intended design.
Let me try a different approach to understand the CreateIterator implementation and its bounds usage.
Let me check one more thing to see if there are any instances where bounds are actually used with non-null values.
Based on the evidence gathered, I can now provide a final response:
Consistent nullptr usage for bounds is correct
The codebase shows that:
- CreateIterator is defined to accept optional Slice pointers for bounds in redis.h
- All iterator implementations (StringsIterator, HashesIterator, etc.) inherit from TypeIterator
- Every call to CreateIterator consistently uses nullptr for bounds
- There are no instances where non-null bounds are used, indicating this is the intended design pattern
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if CreateIterator is ever called with non-null bounds
# This helps verify if the nullptr usage is consistent with the intended design
# Search for CreateIterator calls with actual bounds
rg -U "CreateIterator.*lower_bound.*upper_bound" --type cpp
Length of output: 876
Script:
#!/bin/bash
# Search for CreateIterator definition and implementation
ast-grep --pattern 'CreateIterator($$$lower_bound, $$$upper_bound)'
# Also check if there are any comments or documentation about the bounds
rg -B 2 -A 2 "CreateIterator.*Iterator.*" --type cpp
Length of output: 124
Script:
#!/bin/bash
# Search for CreateIterator method definition
ast-grep --pattern 'CreateIterator($_, $_, $_, $_)'
# Also look for any documentation or comments about iterator bounds
rg -B 3 "CreateIterator" --type cpp
Length of output: 2292
Script:
#!/bin/bash
# Look for TypeIterator implementation to understand bounds usage
rg -B 3 "class.*Iterator" --type cpp
# Also check for any Slice usage in iterator context
rg "Slice.*bound" --type cpp
Length of output: 2457
src/storage/src/redis.h (16)
18-20: LGTM!
The new header file inclusions appear relevant and necessary for the added functionality.
87-88: LGTM!
The constructor initializer list is properly formatted and the member variables are initialized in the correct order.
253-254: LGTM!
The `GetRocksDBInfo` method signature has been updated for consistency with the rest of the codebase.
411-412: LGTM!
The `CreateIterator` method has been updated to take `DataType` as a parameter instead of a raw character. This improves code readability and type safety.
450-453: LGTM!
The `GetMetaValueType` method correctly retrieves the `DataType` from the first byte of the `meta_value` string.
455-462: LGTM!
The `ExpectedMetaValue` method properly compares the expected `DataType` with the actual type stored in the `meta_value` string.
Line range hint 463-489: LGTM!
The `ExpectedStale` method correctly handles the different `DataType` cases and checks for staleness based on the corresponding parsed meta values.
Line range hint 491-569: LGTM!
The new private methods and data members added to the `Redis` class seem relevant and necessary for the added functionality.
582-585: LGTM!
The `GetScanStartPoint` and `StoreScanNextPoint` methods have been updated to take `DataType` as a parameter instead of a raw character, improving code readability and type safety.
125-126: Verify the impact of the new `PKHashesExpire` method on the codebase.
The `PKHashesExpire` method has been added to the `Redis` class. Please ensure that:
- The method is properly implemented in the corresponding source file.
- The new functionality is covered by unit tests.
- The addition of this method does not introduce any breaking changes or inconsistencies in the codebase.
Run the following script to check for potential impact:
#!/bin/bash
# Search for usage of the new method
echo "Checking usage of PKHashesExpire:"
rg --type cpp 'PKHashesExpire'
# Search for related test files
echo "Checking for PKHash related test files:"
fd --type file --regex '.*pkhash.*_test\.cc$'
Line range hint 344-345: Verify the correctness of the `ZRangeWithTTL` method.
The `ZRangeWithTTL` method has been updated to include a `ttl_millsec` output parameter. Please ensure that:
- The `ttl_millsec` parameter is properly populated with the correct TTL value.
- The method handles the case when the key does not have an associated TTL.
- The behavior of the method remains consistent with its documentation and intended usage.
To verify the correctness, you can use the following script to check the implementation and usage of `ZRangeWithTTL`:
#!/bin/bash
# Check the implementation of ZRangeWithTTL
echo "Checking the implementation of ZRangeWithTTL..."
rg --type cpp 'Status\s+Redis::ZRangeWithTTL'
# Check the usage of ZRangeWithTTL
echo "Checking the usage of ZRangeWithTTL..."
rg --type cpp 'ZRangeWithTTL'
Review the implementation to ensure that the `ttl_millsec` parameter is properly handled and that the method returns the expected results. Also, verify that the usages of `ZRangeWithTTL` are consistent with the updated method signature.
255-295: Ensure comprehensive test coverage for the new PK Hash methods.
Multiple new methods have been added to support PK Hash operations. To maintain code quality and prevent regressions, please verify that:
- Each new method has corresponding unit tests that cover all key scenarios, edge cases, and error conditions.
- Integration tests are in place to validate the interaction between these methods and other parts of the system.
- Performance tests are considered to measure the efficiency and scalability of these operations, especially for methods like `PKHScan` that can potentially return large result sets.
Run the following script to check PK Hash related test files:
Ensure that there are test files corresponding to the new methods, and that they provide sufficient coverage. If any tests are missing, please create new test files or add test cases to existing ones.
✅ Verification successful
Let me search for test files related to PK Hash operations with a more comprehensive approach.
Let me check the content of one of these test cases to verify the test coverage quality.
Based on the gathered information, I can now provide the final response:
Test coverage for PK Hash methods is comprehensive and well-structured
The codebase shows thorough test coverage for PK Hash operations:
- All PK Hash methods have corresponding test cases in `src/storage/tests/pkhashes_test.cc`
- Tests cover the full range of operations including:
- Basic operations (Set, Get, Del)
- Bulk operations (MSet, MGet)
- TTL operations (Expire, TTL, Persist)
- Iteration operations (Scan)
- Meta operations (Exists, Len)
- The test implementation (as seen in PKHScanTest) includes:
- Edge cases and different data sizes
- Proper verification of return values
- Status code validation
- Data integrity checks
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
echo "Searching for PK Hash related test files..."
fd --type file --regex 'pkhash.*_test\.cc$'
Length of output: 1252
Script:
#!/bin/bash
# Search for test files containing PKH operations
echo "Searching for test files containing PKH operations..."
rg -l --type cpp "PKH" | grep "_test"
# Search for implementation files to cross-reference
echo -e "\nSearching for implementation files..."
rg -l --type cpp "^Status.*PKH"
# Look for test cases within files
echo -e "\nSearching for PKH test cases..."
rg --type cpp "TEST.*PKH"
Length of output: 2115
Script:
#!/bin/bash
# Check content of PKHScanTest as an example of test coverage
echo "Checking PKHScanTest implementation..."
rg -A 20 "TEST_F\(PKHashesTest, PKHScanTest\)" src/storage/tests/pkhashes_test.cc
# Check test fixture setup
echo -e "\nChecking test fixture setup..."
rg "class PKHashesTest" src/storage/tests/pkhashes_test.cc -A 10
Length of output: 1556
src/storage/src/redis_pkhashes.cc (8)
164-164: Remove unused variable `is_stale`.
The variable `is_stale` is declared but not used in this function. Removing it will clean up the code.
Apply this diff to remove the unused variable:
- bool is_stale = false;
int32_t version = 0;
Likely invalid or redundant comment.
221-224: Correct the error message regarding expiration timestamp.
The condition checks `timestamp <= 0`, but the error message says "must be >= 0", which includes zero. Since a timestamp of zero is invalid, the error message should state "must be > 0" to be accurate.
Apply this diff to correct the error message:
if (timestamp <= 0) {
rets->assign(numfields, 2);
- return Status::InvalidArgument("invalid expire time, must be >= 0");
+ return Status::InvalidArgument("invalid expire time, must be > 0");
}
Likely invalid or redundant comment.
1188-1190: Handle unexpected errors from `GetScanStartPoint`.
After retrieving the scan start point with `GetScanStartPoint`, the current code only checks for `IsNotFound()`. Other errors are not handled, which might lead to unintended behavior.
Adjust the error handling to catch all unexpected errors:
s = GetScanStartPoint(DataType::kPKHashes, key, pattern, cursor, &start_point);
- if (s.IsNotFound()) {
+ if (!s.ok()) {
+ if (s.IsNotFound()) {
cursor = 0;
if (isTailWildcard(pattern)) {
start_point = pattern.substr(0, pattern.size() - 1);
}
+ } else {
+ return s;
+ }
}
Likely invalid or redundant comment.
1265-1270: 🛠️ Refactor suggestion
Simplify condition when retrieving meta value.
The condition checking for a stale key is redundant because `ExpectedMetaValue` already handles stale cases. Simplify the conditional logic.
Apply this diff:
if (meta_value.empty()) {
s = db_->Get(default_read_options_, handles_[kMetaCF], base_meta_key.Encode(), &meta_value);
if (!s.ok()) {
return s;
}
- if (ExpectedStale(meta_value)) {
- s = Status::NotFound();
- }
}
Likely invalid or redundant comment.
1171-1175: Validate the `count` parameter to prevent unexpected behavior.
In the `PKHScan` function, the variables `rest` and `step_length` are initialized with `count`, but there is no validation that `count` is positive. A non-positive `count` could lead to unexpected behavior.
Add validation for the `count` parameter:
+if (count <= 0) {
+ *next_cursor = 0;
+ return Status::OK();
+}
Likely invalid or redundant comment.
692-777: Address the unused TTL parameter.
The `PKHIncrby` function is well-implemented, correctly handling the increment operation for hash fields, including various scenarios and overflow cases. However, there is one issue to address: the `ttl` parameter (line 669) is not used anywhere in the function. This might be an oversight, as the function doesn't apply any TTL to the incremented field.
Consider one of the following options:
- If TTL should be applied, modify the function to use the `ttl` parameter:
PKHashDataValue internal_value(value_buf);
+internal_value.SetRelativeTimestamp(ttl);
batch.Put(handles_[kPKHashDataCF], hashes_data_key.Encode(), internal_value.Encode());
Apply this change in all places where `PKHashDataValue` is created and stored.
- If TTL is not needed for this operation, remove the parameter:
-Status Redis::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int32_t ttl) {
+Status Redis::PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret) {
Make sure to update any calls to this function accordingly.
Do you want me to generate the code changes to address the unused TTL parameter?
Likely invalid or redundant comment.
716-721: 🛠️ Refactor suggestion
Simplify version and count updates.
In the `PKHIncrby` function, when the hash is stale or empty, the code updates the version and resets the count. However, resetting the count and expiration time is unnecessary because `UpdateVersion()` and `InitialMetaValue()` already handle these.
Apply this diff to remove redundant code:
if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.Count() == 0) {
version = parsed_hashes_meta_value.UpdateVersion();
- parsed_hashes_meta_value.SetCount(1);
- parsed_hashes_meta_value.SetEtime(0);
batch.Put(handles_[kMetaCF], base_meta_key.Encode(), meta_value);
HashesDataKey hashes_data_key(key, version, field);
Likely invalid or redundant comment.
732-736: Handle potential integer conversion error.
In the `PKHIncrby` function, when converting `old_value` to an integer, a return value of `0` from `StrToInt64` indicates a conversion failure, and the function then returns a `Corruption` status. It's more appropriate to return an `InvalidArgument` status to indicate that the value is not an integer.
Apply this diff to adjust the return status:
if (StrToInt64(old_value.data(), old_value.size(), &ival) == 0) {
- return Status::Corruption("hash value is not an integer");
+ return Status::InvalidArgument("hash value is not an integer");
}
Likely invalid or redundant comment.
src/storage/include/storage/storage.h (3)
461-461: 🛠️ Refactor suggestion
Consider adding a default value for the `ttl` parameter in `PKHIncrby`.
To maintain consistency with other methods that have optional TTL parameters, consider adding a default value of 0 for the `ttl` parameter in the `PKHIncrby` method.
Apply this diff:
-Status PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int32_t ttl);
+Status PKHIncrby(const Slice& key, const Slice& field, int64_t value, int64_t* ret, int32_t ttl = 0);
Likely invalid or redundant comment.
435-436: 🛠️ Refactor suggestion
Use `int64_t` for `ttl` parameters to support larger expiration times.
Consider changing the `ttl` parameter from `int32_t` to `int64_t` in `PKHExpire` and `PKHSetex` to support larger expiration times and maintain consistency with other TTL parameters.
Apply this diff:
// In PKHExpire
-Status PKHExpire(const Slice& key, int32_t ttl, int32_t numfields, const std::vector<std::string>& fields,
+Status PKHExpire(const Slice& key, int64_t ttl, int32_t numfields, const std::vector<std::string>& fields,
std::vector<int32_t>* rets);
// In PKHSetex
-Status PKHSetex(const Slice& key, const Slice& field, const Slice& value, int32_t ttl, int32_t* ret);
+Status PKHSetex(const Slice& key, const Slice& field, const Slice& value, int64_t ttl, int32_t* ret);
Likely invalid or redundant comment.
135-142: 🛠️ Refactor suggestion
Consider adding constructors to the `FieldValueTTL` struct.
The `FieldValueTTL` struct should have constructors to ensure proper initialization of all fields, especially `ttl_millsec`. Consider adding both copy and move constructors for better performance.
struct FieldValueTTL {
std::string field;
std::string value;
int64_t ttl_millsec;
+ FieldValueTTL() : field(""), value(""), ttl_millsec(0) {}
+ FieldValueTTL(const std::string& f, const std::string& v, int64_t ttl)
+ : field(f), value(v), ttl_millsec(ttl) {}
+ FieldValueTTL(std::string&& f, std::string&& v, int64_t ttl)
+ : field(std::move(f)), value(std::move(v)), ttl_millsec(ttl) {}
bool operator==(const FieldValueTTL& fv) const {
return (fv.field == field && fv.value == value && fv.ttl_millsec == ttl_millsec);
}
};
Likely invalid or redundant comment.
src/storage/tests/pkhashes_test.cc (5)
133-133: Optimize test durations by reducing sleep times
The tests use long sleep durations (e.g., over 3 seconds) to wait for key expiration. This can significantly slow down the test suite.
[performance]
Consider reducing TTL values and sleep durations to shorten test execution time. Alternatively, mock the time functions or use a time manipulation library to simulate the passage of time without actual delays.
Also applies to: 175-175, 222-222, 268-268, 310-310, 367-367, 386-386, 631-631
941-943: Ensure consistent data when inserting additional key-value pairs.
In `PKHKeys`, after inserting entries into `A_HKEYS_KEY`, you retrieve keys from `B_HKEYS_KEY`. Verify that the insertion into a different key does not affect the retrieved data from `B_HKEYS_KEY`.
1488-1499: Check for proper cleanup in the `main` function.
In the `main` function, after running the tests, ensure that all resources are properly released and there are no memory leaks or dangling pointers.
96-98: Correct error checking and remove the unused `type_status` variable in `make_expired`.
In the `make_expired` function, `type_status` is declared but never used properly. The condition `!type_status[storage::DataType::kHashes].ok()` is ineffective because `type_status` is empty and uninitialized.
Apply this diff to correct the error checking:
-static std::map<storage::DataType, rocksdb::Status> type_status;
int ret = db->Expire(key, 1);
-if ((ret == 0) || !type_status[storage::DataType::kHashes].ok()) {
+if (ret == 0) {
return false;
}
Alternatively, if `db->Expire` returns a `Status`, modify the code as follows:
-static std::map<storage::DataType, rocksdb::Status> type_status;
-int ret = db->Expire(key, 1);
+storage::Status s = db->Expire(key, 1);
-if ((ret == 0) || !type_status[storage::DataType::kHashes].ok()) {
+if (!s.ok()) {
return false;
}
Likely invalid or redundant comment.
627-635: Validate behavior when setting a hash after a string key expires.
In the `PKHSetTest`, you set a string key with `Setex` and then, after expiration, set a hash with the same key. Ensure that setting a hash after a string key expires behaves as expected and does not cause any issues in the database.
Run the following script to verify that the key type changes correctly after expiration:
✅ Verification successful
Test case correctly validates key type transition after expiration
The test case is well-designed and properly validates the behavior:
- It verifies that PKHSet succeeds after the string key expires (2100ms wait > 1ms TTL)
- The same test exists for both PKHSet and HSet operations, showing consistent behavior
- The assertions verify both the operation success (ASSERT_TRUE(s.ok())) and the expected return value (ASSERT_EQ(ret, 1))
- The return value of 1 confirms that a new hash field was created, indicating the key type transition worked correctly
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify key type after expiration.
# Wait for the key to expire
sleep 2
# Check the type of the key
echo "Type after expiration:"
pika-cli -c "TYPE GP4_HSET_KEY"
# Set hash after expiration
pika-cli -c "PKHSET GP4_HSET_KEY HSET_TEST_NEW_FIELD HSET_TEST_NEW_VALUE"
# Check the type of the key again
echo "Type after setting hash:"
pika-cli -c "TYPE GP4_HSET_KEY"
Length of output: 497
Script:
#!/bin/bash
# Let's analyze the test file and related code to understand the behavior
# First, let's check the implementation of Setex and PKHSet
rg -A 5 "Status Storage::Setex" src/
# Check PKHSet implementation
rg -A 5 "Status Storage::PKHSet" src/
# Look for similar test cases or related tests
rg -B 2 -A 2 "Setex.*PKH" src/
Length of output: 1626
Script:
#!/bin/bash
# Let's check the implementation details in the storage engine
# Look for the DB instance implementation of Setex and PKHSet
ast-grep --pattern 'Status $class::Setex(const Slice& $_, const Slice& $_, $_) {
$$$
}'
# Check for PKHSet implementation
ast-grep --pattern 'Status $class::PKHSet(const Slice& $_, const Slice& $_, const Slice& $_, $_) {
$$$
}'
# Look for other test cases that verify key type transitions
rg -B 3 -A 3 "GP4_HSET_KEY" src/
Length of output: 1693
src/pika_command.cc (2)
19-19: LGTM!
The new header file `include/pika_pkhash.h` is included correctly.
318-319: Improve formatting consistency.
Align the line continuation with the opening parenthesis for better readability:
- std::unique_ptr<Cmd> msetptr = std::make_unique<MsetCmd>(
- kCmdNameMset, -3, kCmdFlagsWrite | kCmdFlagsKv | kCmdFlagsDoThroughDB | kCmdFlagsUpdate
  PKHExpireatCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHExpireatCmd(*this); }

 private:
  std::string key_;
  int64_t timestamp_ = 0;
  int64_t numfields_ = 0;
  std::vector<std::string> fields_;

  rocksdb::Status s_;

  void DoInitial() override;
  void Clear() override {}
};

class PKHExpiretimeCmd : public Cmd {
 public:
  PKHExpiretimeCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHExpiretimeCmd(*this); }

 private:
  std::string key_;
  int64_t ttl_ = 0;
  int64_t numfields_ = 0;
  std::vector<std::string> fields_;

  rocksdb::Status s_;

  void DoInitial() override;
  void Clear() override {}
};

class PKHPersistCmd : public Cmd {
 public:
  PKHPersistCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHPersistCmd(*this); }

 private:
  std::string key_;
  int64_t ttl_ = 0;
  int64_t numfields_ = 0;
  std::vector<std::string> fields_;

  rocksdb::Status s_;

  void DoInitial() override;
  void Clear() override {}
};

class PKHTTLCmd : public Cmd {
 public:
  PKHTTLCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHTTLCmd(*this); }

 private:
  std::string key_;
  int64_t ttl_ = 0;
  int64_t numfields_ = 0;
  std::vector<std::string> fields_;

  rocksdb::Status s_;

  void DoInitial() override;
  void Clear() override {}
};
Improve initialization and reduce code duplication in expiration commands.
Several issues need attention:
- Empty `Clear()` methods
- Uninitialized member variables
- Duplicated code patterns across classes
Consider these improvements:
- Initialize member variables in
Clear()
:
// Apply to all expiration-related commands
void Clear() override {
key_.clear();
fields_.clear();
ttl_ = 0;
numfields_ = 0;
}
- Extract common functionality into a base class:
class PKHashExpirationCmd : public Cmd {
protected:
std::string key_;
int64_t ttl_ = 0;
int64_t numfields_ = 0;
std::vector<std::string> fields_;
rocksdb::Status s_;
void Clear() override {
key_.clear();
fields_.clear();
ttl_ = 0;
numfields_ = 0;
}
};
// Then inherit from this base class
class PKHExpireCmd : public PKHashExpirationCmd {
// Specific implementation
};
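The behavior these commands aim for — per-field TTLs inside one hash — can also be sketched as a minimal, self-contained model. This is an illustration only: the class and method names below are assumptions for the sketch, not Pika's RocksDB-backed implementation.

```cpp
#include <cassert>
#include <chrono>
#include <optional>
#include <string>
#include <unordered_map>

// Minimal in-memory model of a hash whose fields carry independent
// expirations, mirroring PKHSET / PKHEXPIRE / PKHPERSIST semantics.
class FieldTtlHash {
 public:
  using Clock = std::chrono::steady_clock;

  void Set(const std::string& field, const std::string& value) {
    fields_[field] = {value, std::nullopt};  // a plain set clears any TTL
  }

  // Attach a TTL to a live field; returns false if the field is absent/expired.
  bool Expire(const std::string& field, std::chrono::milliseconds ttl) {
    auto it = fields_.find(field);
    if (it == fields_.end() || Expired(it->second)) return false;
    it->second.deadline = Clock::now() + ttl;
    return true;
  }

  // Drop the TTL; returns true only if the field existed and had a TTL.
  bool Persist(const std::string& field) {
    auto it = fields_.find(field);
    if (it == fields_.end() || Expired(it->second)) return false;
    bool had_ttl = it->second.deadline.has_value();
    it->second.deadline.reset();
    return had_ttl;
  }

  std::optional<std::string> Get(const std::string& field) {
    auto it = fields_.find(field);
    if (it == fields_.end() || Expired(it->second)) return std::nullopt;
    return it->second.value;
  }

 private:
  struct Entry {
    std::string value;
    std::optional<Clock::time_point> deadline;  // empty => no expiration
  };
  static bool Expired(const Entry& e) {
    return e.deadline && Clock::now() >= *e.deadline;
  }
  std::unordered_map<std::string, Entry> fields_;
};
```

The sketch uses lazy expiration (checked on read), which is one common design; the real storage layer may reap expired fields during compaction instead.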
```cpp
class PKHGetCmd : public Cmd {
 public:
  PKHGetCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void ReadCache() override;
  void DoThroughDB() override;
  void DoUpdateCache() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHGetCmd(*this); }

 private:
  std::string key_, field_;
  void DoInitial() override;
  rocksdb::Status s_;
};

class PKHSetCmd : public Cmd {
 public:
  PKHSetCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void DoThroughDB() override;
  void DoUpdateCache() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHSetCmd(*this); }

 private:
  std::string key_, field_, value_;
  void DoInitial() override;
  rocksdb::Status s_;
};

class PKHSetexCmd : public Cmd {
 public:
  PKHSetexCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void DoThroughDB() override;
  void DoUpdateCache() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHSetexCmd(*this); }

 private:
  std::string key_, field_, value_;
  int64_t sec_;
  void DoInitial() override;
  rocksdb::Status s_;
};

class PKHExistsCmd : public Cmd {
 public:
  PKHExistsCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void DoThroughDB() override;
  void DoUpdateCache() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHExistsCmd(*this); }

 private:
  std::string key_, field_;
  void DoInitial() override;
  rocksdb::Status s_;
};
```
🛠️ Refactor suggestion

Add input validation and documentation for core hash operations.

The core hash operation classes need:

- Input parameter validation
- Error handling for cache operations
- Method documentation

Consider these improvements:

```cpp
class PKHSetCmd : public Cmd {
 public:
  /**
   * @brief Sets the value of a hash field
   * @throws std::invalid_argument if key or field is empty
   */
  void Do() override {
    if (key_.empty() || field_.empty()) {
      throw std::invalid_argument("Key and field cannot be empty");
    }
    // Existing implementation
  }

  /**
   * @brief Updates the cache after a successful DB operation,
   *        logging (rather than propagating) cache failures
   */
  void DoUpdateCache() override {
    try {
      // Existing implementation
    } catch (const std::exception& e) {
      LOG(WARNING) << "Cache update failed: " << e.what();
    }
  }
};
```
```cpp
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHMSetCmd(*this); }

 private:
  std::string key_;
  std::vector<storage::FieldValue> fvs_;
  void DoInitial() override;
  rocksdb::Status s_;
};

class PKHMGetCmd : public Cmd {
 public:
  PKHMGetCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void DoThroughDB() override;
  void DoUpdateCache() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHMGetCmd(*this); }

 private:
  std::string key_;
  std::vector<std::string> fields_;
  void DoInitial() override;
  rocksdb::Status s_;
};

class PKHKeysCmd : public Cmd {
 public:
  PKHKeysCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void DoThroughDB() override;
  void DoUpdateCache() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHKeysCmd(*this); }

 private:
  std::string key_;
  void DoInitial() override;
  rocksdb::Status s_;
};

class PKHValsCmd : public Cmd {
 public:
  PKHValsCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void DoThroughDB() override;
  void DoUpdateCache() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHValsCmd(*this); }

 private:
  std::string key_;
  void DoInitial() override;
  rocksdb::Status s_;
};

class PKHGetAllCmd : public Cmd {
 public:
  PKHGetAllCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void DoThroughDB() override;
  void DoUpdateCache() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHGetAllCmd(*this); }

 private:
  std::string key_;
  bool is_wt_;
  void DoInitial() override;
  rocksdb::Status s_;
};
```
Initialize boolean flags and add range validation.

Several classes have uninitialized boolean flags and lack parameter validation:

- `PKHLenCmd::is_force_`
- `PKHGetAllCmd::is_wt_`

Apply these fixes:

- Initialize the boolean flags:

```cpp
class PKHLenCmd : public Cmd {
 private:
  bool is_force_ = false;  // Initialize with default value
};

class PKHGetAllCmd : public Cmd {
 private:
  bool is_wt_ = false;  // Initialize with default value
};
```

- Add range validation for numeric parameters:

```cpp
class PKHIncrbyCmd : public Cmd {
 private:
  void DoInitial() override {
    if (sec_ < 0) {
      throw std::out_of_range("Expiration time cannot be negative");
    }
    // Existing implementation
  }
};
```
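The reset pattern behind these fixes — in-class defaults plus a `Clear()` that restores exactly those defaults, so a reused command object cannot leak state from a previous request — can be shown standalone. The `Cmd` base below is a mock for illustration; Pika's real interface is larger.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Mock base: real Cmd has many more virtuals.
struct Cmd {
  virtual ~Cmd() = default;
  virtual void Clear() = 0;
};

struct PKHExpireLikeCmd : Cmd {
  std::string key_;
  std::vector<std::string> fields_;
  int64_t ttl_ = 0;        // in-class defaults double as the reset values
  int64_t numfields_ = 0;
  bool is_wt_ = false;

  // Clear() restores every member to its declared default.
  void Clear() override {
    key_.clear();
    fields_.clear();
    ttl_ = 0;
    numfields_ = 0;
    is_wt_ = false;
  }
};
```

With this shape, forgetting a member in `Clear()` is caught by a one-line unit test that populates everything and asserts the defaults after reset.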
```cpp
class PKHScanCmd : public Cmd {
 public:
  PKHScanCmd(const std::string& name, int arity, uint32_t flag)
      : Cmd(name, arity, flag, static_cast<uint32_t>(AclCategory::PKHASH)) {}
  std::vector<std::string> current_key() const override {
    std::vector<std::string> res;
    res.push_back(key_);
    return res;
  }
  void Do() override;
  void DoThroughDB() override;
  void DoUpdateCache() override;
  void Split(const HintKeys& hint_keys) override {};
  void Merge() override {};
  Cmd* Clone() override { return new PKHScanCmd(*this); }

 private:
  std::string key_, pattern_;
  int64_t cursor_, count_;
  bool is_wt_;
  virtual void Clear() {
    pattern_ = "*";
    count_ = 10;
    is_wt_ = false;
  }

  void DoInitial() override;
  rocksdb::Status s_;
};
```
Fix initialization and inheritance issues in `PKHScanCmd`.

The `PKHScanCmd` class has several issues:

- Incomplete member initialization in `Clear()`
- Unnecessary `virtual` keyword on the `Clear()` method
- Missing initialization of `cursor_`

Apply these fixes:

```diff
 class PKHScanCmd : public Cmd {
  private:
-  virtual void Clear() {
+  void Clear() override {
+    key_.clear();
     pattern_ = "*";
+    cursor_ = 0;
     count_ = 10;
     is_wt_ = false;
   }
 };
```
```cpp
TypeIterator* CreateIterator(const char& type, const std::string& pattern, const Slice* lower_bound,
                             const Slice* upper_bound) {
```
🛠️ Refactor suggestion

Consider removing the redundant `CreateIterator` overload.

The `CreateIterator` method now has two overloads:

- `CreateIterator(const DataType& type, ...)`
- `CreateIterator(const char& type, ...)`

Since the first overload covers all the necessary cases with the `DataType` enum, the second overload taking a `char` type may be redundant and can be removed to simplify the code.

If the second overload is not used externally, or if it can easily be replaced by the `DataType` version, consider removing it:

```cpp
// Remove this overload
TypeIterator* CreateIterator(const char& type, const std::string& pattern, const Slice* lower_bound,
                             const Slice* upper_bound) { ... }
```

Alternatively, if backward compatibility is a concern, you can mark the `char` overload as deprecated and plan its removal in a future version:

```cpp
[[deprecated("Use CreateIterator(const DataType& type, ...) instead.")]]
TypeIterator* CreateIterator(const char& type, const std::string& pattern, const Slice* lower_bound,
                             const Slice* upper_bound) { ... }
```
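The deprecation route can be demonstrated in isolation. The names below are illustrative stand-ins, not Pika's actual API: callers of the old `char` overload keep compiling, but the compiler flags every call site with `-Wdeprecated-declarations` until they migrate.

```cpp
#include <cassert>

// Preferred overload: a typed enum instead of a raw char tag.
enum class DataType { kHashes, kStrings };

int Lookup(DataType type) {  // returns 1 for hashes, 0 otherwise (toy logic)
  return type == DataType::kHashes ? 1 : 0;
}

// Legacy overload kept for compatibility; every use emits a warning.
[[deprecated("Use Lookup(DataType) instead.")]]
int Lookup(char type) {
  return type == 'h' ? 1 : 0;
}
```

Overload resolution still prefers the exact-match overload for each argument type, so existing `char`-based callers are unaffected at runtime while the migration proceeds.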
```diff
@@ -962,7 +1025,7 @@ class Storage {
   Status XLen(const Slice& key, int32_t& len);
   Status XRead(const StreamReadGroupReadArgs& args, std::vector<std::vector<storage::IdMessage>>& results,
                std::vector<std::string>& reserved_keys);
-  Status XInfo(const Slice& key, StreamInfoResult &result);
+  Status XInfo(const Slice& key, StreamInfoResult& result);
```
Revert this change and use a pointer for the output parameter in `XInfo`.

Passing `result` by reference goes against the existing comment and the general practice in this codebase. It's better to use a pointer for output parameters, for consistency and efficiency.

Please revert this change and use a pointer, as suggested:

```cpp
Status XInfo(const Slice& key, StreamInfoResult* result);
```
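The pointer-out-parameter convention (familiar from the Google C++ style) makes mutation visible at the call site via `&result` and lets the function reject a null output. A minimal sketch, using a hypothetical stand-in for `StreamInfoResult` with illustrative fields:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Hypothetical stand-in; Pika's real StreamInfoResult has more fields.
struct StreamInfoResult {
  int64_t length = 0;
  std::string last_id;
};

enum class Status { kOk, kInvalidArgument };

// Output goes through a pointer: call sites write &info, and a null
// output can be rejected explicitly instead of crashing.
Status XInfoSketch(const std::string& key, StreamInfoResult* result) {
  if (key.empty() || result == nullptr) return Status::kInvalidArgument;
  result->length = 42;   // illustrative values only
  result->last_id = "1-0";
  return Status::kOk;
}
```

A reference parameter would compile to the same code, but `XInfoSketch(key, info)` at the call site hides that `info` is being written; `XInfoSketch(key, &info)` does not.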
```cpp
Status EnableDymayticOptions(const OptionType& option_type, const std::string& db_type,
                             const std::unordered_map<std::string, std::string>& options);
Status EnableAutoCompaction(const OptionType& option_type, const std::string& db_type,
                            const std::unordered_map<std::string, std::string>& options);
```
Correct the typo in the `EnableDynamicOptions` method name.

As previously pointed out, there is a typo in the method name: it should be "Dynamic", not "Dymaytic".

Please correct the method name:

```cpp
Status EnableDynamicOptions(const OptionType& option_type, const std::string& db_type,
                            const std::unordered_map<std::string, std::string>& options);
```
```cpp
TEST_F(PKHashesTest, PKHExpireTest) {  // NOLINT
  int32_t ret = 0;
  std::vector<int32_t> rets;

  std::string value;

  // ***************** Group 1 Test *****************
  // If a field in the hash has an expire time (sec), the field should expire after ttl sec.
  s = db.PKHSet("GP1_HSET_KEY", "HSET_TEST_FIELD", "HSET_TEST_VALUE", &ret);
  ASSERT_TRUE(s.ok());
  ASSERT_EQ(ret, 1);
  s = db.PKHLen("GP1_HSET_KEY", &ret);
  ASSERT_TRUE(s.ok());
  ASSERT_EQ(ret, 1);
  uint64_t ttl = 2;

  s = db.PKHGet("GP1_HSET_KEY", "HSET_TEST_FIELD", &value);
  ASSERT_TRUE(s.ok());
  ASSERT_EQ(value, "HSET_TEST_VALUE");

  s = db.PKHExpire("GP1_HSET_KEY", ttl * 1000, 1, {"HSET_TEST_FIELD"}, &rets);
  s = db.PKHGet("GP1_HSET_KEY", "HSET_TEST_FIELD", &value);

  // ASSERT_TRUE(s.ok());
  // ASSERT_EQ(value, "HSET_TEST_VALUE");

  std::this_thread::sleep_for(std::chrono::milliseconds(3100));

  s = db.PKHGet("GP1_HSET_KEY", "HSET_TEST_FIELD", &value);
  ASSERT_FALSE(s.ok());

  // If [field:value] has already expired, then [field:value] should be updated.
  s = db.PKHSet("GP1_HSET_KEY", "HSET_TEST_FIELD", "HSET_TEST_VALUE", &ret);
  ASSERT_TRUE(s.ok());
  ASSERT_EQ(ret, 1);

  // ***************** Group 2 Test *****************
  // If multiple fields in the hash have expire timestamps (sec), the fields should expire after the timestamp.
  // TODO(DDD): cmd basic test cases first.
}

// PKHExpireatTest
TEST_F(PKHashesTest, PKHExpireatTest) {  // NOLINT
  int32_t ret = 0;
  std::vector<int32_t> rets;

  std::string value;

  // ***************** Group 1 Test *****************
  // If a field in the hash has an expire time (sec), the field should expire after ttl sec.
  s = db.PKHSet("GP1_HSET_KEY", "HSET_TEST_FIELD", "HSET_TEST_VALUE", &ret);
  ASSERT_TRUE(s.ok());
  ASSERT_EQ(ret, 1);
  s = db.PKHLen("GP1_HSET_KEY", &ret);
  ASSERT_TRUE(s.ok());
  ASSERT_EQ(ret, 1);

  int64_t unix_time;
  rocksdb::Env::Default()->GetCurrentTime(&unix_time);
  int64_t timestamp = unix_time + 2;

  // It will expire in 2 seconds.
  s = db.PKHExpireat("GP1_HSET_KEY", timestamp * 1000, 1, {"HSET_TEST_FIELD"}, &rets);
  s = db.PKHGet("GP1_HSET_KEY", "HSET_TEST_FIELD", &value);
  ASSERT_TRUE(s.ok());
  ASSERT_EQ(value, "HSET_TEST_VALUE");

  std::this_thread::sleep_for(std::chrono::milliseconds(3100));

  s = db.PKHGet("GP1_HSET_KEY", "HSET_TEST_FIELD", &value);
  ASSERT_FALSE(s.ok());  // the field has expired

  // If [field:value] has already expired, then [field:value] should be updated.
  s = db.PKHSet("GP1_HSET_KEY", "HSET_TEST_FIELD", "HSET_TEST_VALUE", &ret);
  ASSERT_TRUE(s.ok());
  ASSERT_EQ(ret, 1);
}
```
🛠️ Refactor suggestion

Refactor the expiration tests to reduce code duplication.

The tests `PKHExpireTest`, `PKHExpireatTest`, and `PKHExpiretimeTest` share similar structure and setup. Consider extracting the common code into helper functions, or using parameterized tests, to improve maintainability.
```cpp
  pre_fvs_in.push_back({"PRE_TEST_FIELD1", "PRE_TEST_VALUE1"});
  pre_fvs_in.push_back({"PRE_TEST_FIELD2", "PRE_TEST_VALUE2"});
  pre_fvs_in.push_back({"PRE_TEST_FIELD3", "PRE_TEST_VALUE3"});
  s = db.PKHMSet("A_HVALS_KEY", pre_fvs_in);
  ASSERT_TRUE(s.ok());
```
Potential confusion between keys during test setup.

In `PKHVals`, key-value pairs are inserted into `A_HVALS_KEY`, but values are then retrieved from `B_HVALS_KEY`. Make sure insertion and retrieval use the same key to prevent test inaccuracies, or adjust the test to reflect the intended behavior.
```cpp
  s = db.Open(storage_options, path);
}
```
Check the return status of `db.Open` in `SetUp()`.

The return status of `db.Open(storage_options, path)` is assigned to `s`, but there is no check that the operation succeeded. Ignoring the status may lead to unexpected behavior if the database fails to open.

Apply this diff to add an assertion:

```diff
 s = db.Open(storage_options, path);
+ASSERT_TRUE(s.ok());
```
Please add the TCL tests.

Okay, I'll add them this week.
Summary by CodeRabbit

New Features

- `PKHash` functionality with multiple commands for managing hash data structures, including setting, getting, deleting, and expiration of fields.

Tests

- Test suite for `PKHashes`, covering various operations including expiration, CRUD operations, batch handling, and scanning.

Documentation