merge from dev #1110
Conversation
… resolution scopes (#992)
…e but no model-level policy (#1052)
Co-authored-by: Yiming <[email protected]>
Walkthrough

The updates across various packages focus on enhancing data handling, validation, and policy enforcement mechanisms. Key modifications include iterating over data in reverse order for nested writes, refining policy management with new methods, improving validation for data model fields and expressions, and extending support for TypeScript expression transformations. These changes aim to optimize performance, clarify logic, and ensure data integrity and consistency across operations.
Review Status
Actionable comments generated: 5
Configuration used: CodeRabbit UI
Files selected for processing (14)
- packages/runtime/src/cross/nested-write-visitor.ts (11 hunks)
- packages/runtime/src/enhancements/policy/handler.ts (4 hunks)
- packages/schema/src/language-server/validator/attribute-application-validator.ts (2 hunks)
- packages/schema/src/language-server/validator/expression-validator.ts (4 hunks)
- packages/schema/src/plugins/zod/utils/schema-gen.ts (2 hunks)
- packages/schema/src/utils/ast-utils.ts (1 hunks)
- packages/schema/src/utils/pkg-utils.ts (3 hunks)
- packages/schema/tests/generator/expression-writer.test.ts (9 hunks)
- packages/schema/tests/schema/validation/attribute-validation.test.ts (3 hunks)
- packages/schema/tests/schema/validation/datamodel-validation.test.ts (9 hunks)
- packages/sdk/src/typescript-expression-transformer.ts (9 hunks)
- tests/integration/tests/enhancements/with-policy/field-validation.test.ts (2 hunks)
- tests/integration/tests/regression/issue-1078.test.ts (1 hunks)
- tests/integration/tests/regression/issue-1080.test.ts (1 hunks)
Additional comments: 37
tests/integration/tests/regression/issue-1078.test.ts (1)
- 36-54: The test case for reading a `Post` model correctly sets up a schema, enhances it, and performs a create operation followed by two findUnique operations with different select conditions. The assertions check for null and for specific field values, which aligns with the expected behavior given the permissions set in the schema. The test logic and assertions are correctly implemented for the `Post` model's read operations.

tests/integration/tests/regression/issue-1080.test.ts (1)
- 3-131: The regression test for issue 1080 thoroughly exercises create and update operations on a `Project` model with related `Field` models. The use of upsert and update operations to modify the `Field`s of a `Project` is covered through multiple scenarios, including updating existing fields, adding new fields, and modifying fields in various ways. The test case is comprehensive, and its assertions correctly validate the expected outcomes.

packages/schema/src/utils/pkg-utils.ts (2)
- 9-16: The refinement of the `FindUp` type to handle boolean extensions more explicitly is a good improvement. It enhances type safety and clarity by distinguishing between single and multiple path results based on the boolean parameter. The changes to the `FindUp` type are well implemented and improve the clarity and usability of the type.
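The described shape, where a boolean flag selects between returning a single path and returning all matching paths, can be sketched with function overloads. This is a hypothetical, simplified reconstruction (the real `findUp` walks directory trees); only the early-return guard is taken verbatim from the review.

```typescript
// Hypothetical sketch: a boolean flag selects the result type, mirroring
// the described FindUp refinement (single path vs. multiple paths).
function findUpSketch(names: string[], multiple: true): string[] | undefined;
function findUpSketch(names: string[], multiple: false): string | undefined;
function findUpSketch(names: string[], multiple: boolean): string | string[] | undefined {
    // Early return when no usable names are provided (as noted in the review).
    if (!names.some((name) => !!name)) return undefined;
    const found = names.filter((name) => !!name);
    return multiple ? found : found[0];
}

console.log(findUpSketch(['a.json', 'b.json'], true));  // [ 'a.json', 'b.json' ]
console.log(findUpSketch(['a.json', 'b.json'], false)); // a.json
console.log(findUpSketch([], false));                   // undefined
```

The overloads let callers get a precise return type from the same implementation, which is what the refined `FindUp` type achieves at the type level.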
- 6-23: > 📝 NOTE
This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [18-31]

The improvements to the `findUp` function's parameter handling and logic are well thought out. The function now more clearly distinguishes between searching for a single path and searching for multiple paths, and it correctly returns `undefined` when no paths are found. The early return when no names are provided (`if (!names.some((name) => !!name)) return undefined;`) is a good practice for avoiding unnecessary work. These enhancements improve the function's clarity, consistency, and performance.

packages/schema/src/utils/ast-utils.ts (1)
- 234-245: The addition of the `findUpAst` function is a valuable utility for AST manipulation and analysis. It provides a generic way to traverse upward from any AST node until a node satisfying the given predicate is found, which is useful in any AST processing scenario where context or parent nodes need to be accessed dynamically. The implementation is correct and follows best practices for tree traversal.

packages/schema/src/plugins/zod/utils/schema-gen.ts (1)
- 207-217: The update to the `makeValidationRefinements` function to include a check for simple field references using `isDataModelFieldReference` is a thoughtful improvement. It ensures that in the validation context, where all fields are optional, comparing any field against `null` is allowed and handled correctly. The fallback to `true` for `undefined` values aligns with the expected behavior in validation scenarios, and the change enhances the function's flexibility and correctness.

packages/schema/src/language-server/validator/expression-validator.ts (1)
- 227-229: The addition of the `isInValidationContext` method is a significant enhancement to the expression validation logic. By using the `findUpAst` utility, the method efficiently determines whether a given node sits inside a validation context, specifically by looking for the `@@validate` attribute. This improves the validator's ability to contextually analyze expressions and apply the appropriate validation rules.

packages/schema/src/language-server/validator/attribute-application-validator.ts (2)
- 18-18: The import of `isDataModelFieldReference` is correctly added to support the new validation logic in the `_checkValidate` method, which uses this utility to check whether a node is a reference to a data model field.
- 154-165: The `_checkValidate` method correctly implements validation for `@@validate` conditions involving relation fields. It uses `streamAst` to traverse the condition's AST and `isDataModelFieldReference` to find references to data model fields, which are then checked with `isDataModel` to determine whether they are relation fields. If such a node is found, an error is reported, since `@@validate` conditions cannot use relation fields. This enhances the `AttributeApplicationValidator` class by enforcing constraints on the use of relation fields in validation conditions. The current message, "`@@validate` condition cannot use relation fields", communicates the restriction, but consider enhancing it with suggested alternative approaches or a reference to documentation on handling validation conditions that involve relation fields.
packages/runtime/src/cross/nested-write-visitor.ts (2)
- 219-233: > 📝 NOTE
This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [158-293]

The replacement of `enumerate` with `enumerateReverse` in the various actions (`create`, `connectOrCreate`, `connect`, `disconnect`, `set`, `update`, `updateMany`, `upsert`, `delete`, and `deleteMany`) within the `NestedWriteVisitor` class is correctly implemented. Iterating the data in reverse order is particularly useful for operations that may modify the collection being iterated, such as deletions, because it prevents issues caused by shifting indices after a modification. This makes data handling in `NestedWriteVisitor` more robust in scenarios where the order of operations is critical or where the data structure changes during iteration.
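The index-shifting hazard that reverse iteration avoids can be demonstrated in isolation. This is a generic sketch, not the `NestedWriteVisitor` code:

```typescript
// Removing items while iterating forward skips elements, because indices
// shift left after each splice. Iterating in reverse avoids the problem.
function removeEvensForward(items: number[]): number[] {
    for (let i = 0; i < items.length; i++) {
        // After splicing, the next element shifts into slot i and is skipped.
        if (items[i] % 2 === 0) items.splice(i, 1);
    }
    return items;
}

function removeEvensReverse(items: number[]): number[] {
    for (let i = items.length - 1; i >= 0; i--) {
        // Splicing at i only shifts elements we have already visited.
        if (items[i] % 2 === 0) items.splice(i, 1);
    }
    return items;
}

console.log(removeEvensForward([2, 2, 3])); // [ 2, 3 ]  (buggy: an even value survives)
console.log(removeEvensReverse([2, 2, 3])); // [ 3 ]     (correct)
```

The same reasoning applies to visitor callbacks that delete or reorder entries of the payload while it is being traversed.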
- 344-352: The implementation of the `enumerateReverse` method is correctly done: it checks whether the input `data` is an array and, if so, iterates over it in reverse order with a for loop; otherwise it yields the data directly. This efficiently handles both array and non-array inputs, applying the reverse-iteration logic only when applicable. The method is the key utility enabling the reverse-order iteration that the `NestedWriteVisitor` changes rely on.

packages/sdk/src/typescript-expression-transformer.ts (4)
- 16-21: The addition of the utility methods `isArrayExpr`, `isLiteralExpr`, `isNullExpr`, and `isThisExpr` to the imports from `@zenstackhq/language/ast` is correct. These methods enable the more precise type checks on AST nodes that the new `TypeScriptExpressionTransformer` logic relies on, improving the transformer's ability to accurately process and transform expressions.
- 174-184: The adjustments to the `_length` function handler, specifically the use of `ensureBooleanTernary` to handle the different cases depending on the presence of the `min` and `max` arguments, are correctly implemented. They ensure that length checks are applied correctly and that the result is coerced into a boolean in the context of validation rules, including the case where the field may be undefined, keeping the validation logic robust and accurate.
- 218-224: The use of `ensureBooleanTernary` in the `_regex` and `_email` function handlers to coerce the regex and email validation results into boolean values is good practice. It ensures validation rules are correctly applied and their results accurately represented as booleans, contributing to the robustness of the expression transformation logic.
- 340-351: The adjustments to the `unary` method for validation context scenarios, specifically treating unary operations involving undefined fields as boolean `true` in validation rules, are correctly implemented. The use of `ensureBooleanTernary` here keeps the transformation logic accurate when fields may be undefined.

packages/schema/tests/schema/validation/datamodel-validation.test.ts (8)
- 163-163: The trailing semicolon has been removed here, which aligns with the summary mentioning the removal of trailing semicolons after function calls. This change is consistent with JavaScript's automatic semicolon insertion (ASI) mechanism, which allows for omitting semicolons at the end of statements. However, it's important to maintain consistency throughout the project. If the project's style guide or linter configuration enforces semicolons, this change might not be aligned with the project's coding standards.
Ensure that this change aligns with the project's coding standards regarding semicolon usage.
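The ASI pitfall worth keeping in mind when omitting semicolons can be shown in a tiny standalone snippet (generic code, unrelated to the project):

```typescript
// ASI inserts a semicolon after the bare `return`, so the function
// returns undefined and the expression on the next line is dead code.
function total() {
    return
        1 + 2;
}

console.log(total()); // undefined
```

Statements ending in a line break like this are the main case where relying on ASI bites; a linter rule (or consistently required semicolons) guards against it.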
- 167-167: The spacing around the template literal here has been adjusted, which is mentioned in the summary. This change improves readability and consistency in how template literals are used throughout the codebase. It's a good practice to maintain consistent spacing around operators and template literals for better readability and to adhere to coding standards.
- 197-197: The spacing around the template literal has been adjusted here as well. This change enhances readability and maintains consistency in the codebase. It's important to follow consistent coding practices, especially in a collaborative environment, to ensure that the code is easily understandable by all team members.
- 300-302: The error message has been updated for the case when `@id` is used on a Json field. This change is part of the updates to error messages for specific field-attribute cases, as mentioned in the summary. Making error messages more specific and informative is good practice, as it helps developers understand the nature of the error and how to resolve it more quickly. Ensure that all updated error messages are tested to verify their accuracy and helpfulness.
- 314-315: Another instance of an updated error message, this time for when `@id` is used on a reference field. This improves the specificity and clarity of the message, guiding developers toward resolving the validation error. Ensure that all updated error messages are tested to verify their accuracy and helpfulness.
- 328-329: Yet another error message update, specifically for when `@@id` is used on a reference field. As with the previous cases, making the message more specific and informative enhances the developer experience. Ensure that all updated error messages are tested to verify their accuracy and helpfulness.
- 675-675: The error message here has been updated to indicate that a model cannot be extended because it's not abstract. This is part of the broader effort to refine error messages for specific validation cases. Such updates are crucial for improving the developer experience by providing clear and actionable feedback on schema validation errors.
Ensure that all updated error messages are tested to verify their accuracy and helpfulness.
- 699-699: This updated error message relates to a missing opposite relation field on a model. It's part of the ongoing improvements to error messages within the schema validation context. Clear and specific error messages are essential for efficient debugging and code maintenance, especially in complex schema validation scenarios.
Ensure that all updated error messages are tested to verify their accuracy and helpfulness.
tests/integration/tests/enhancements/with-policy/field-validation.test.ts (3)
- 1-2: The imports from `@zenstackhq/runtime` and `@zenstackhq/testtools` are correctly used to set up the integration tests. Keeping these imports clean and organized aids readability and maintenance.
- 1-5: > 📝 NOTE
This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [3-11]

The `beforeAll` setup function is well structured, loading a schema and enhancing the database client for use in the tests. This setup is crucial for ensuring that the database schema is applied correctly before the tests run. However, it's important that the schema loaded here matches the actual application schema, to avoid discrepancies in the test environment.

Verification successful: both the application and the test environments use the same schema file, located at `./packages/plugins/trpc/tests/projects/t3-trpc-v10/prisma/schema.prisma`. Because they are inherently the same file, there is no discrepancy between the test environment and the application schema, and the initial concern is addressed.

- 1-5: > 📝 NOTE
This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [12-20]

Scripts Executed
The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify that the schema loaded in tests matches the actual application schema.
diff tests/integration/tests/enhancements/with-policy/schema.prisma src/schema.prisma

Length of output: 234

Script:

#!/bin/bash
# Locate the application schema file
fd schema.prisma --exec echo Application schema: {}
# Locate the test schema file
fd schema.prisma --exec echo Test schema: {}

Length of output: 281
The use of `beforeEach` to reset and push the database schema before each test ensures a clean state for each test case, which is good practice for test isolation. However, consider the performance implications of resetting the database frequently in a large test suite; for larger projects, it might be beneficial to group tests that can share the same initial state to reduce setup time. Consider optimizing the database setup and teardown processes to improve test suite performance, especially for larger projects.
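The cost difference between per-test resets and grouped shared state can be sketched abstractly. This is a generic illustration with a hypothetical `expensiveSetup` standing in for a database reset, not the project's actual test harness:

```typescript
// Hypothetical stand-in for an expensive per-test database reset.
let setupCalls = 0;
function expensiveSetup(): string[] {
    setupCalls++;
    return ['alice'];
}

// Style 1: reset before each of N read-only tests -> N setups.
function runWithPerTestReset(tests: Array<(db: string[]) => void>) {
    for (const t of tests) t(expensiveSetup());
}

// Style 2: read-only tests grouped to share one seeded state -> 1 setup.
function runWithSharedState(tests: Array<(db: string[]) => void>) {
    const db = expensiveSetup();
    for (const t of tests) t(db);
}

const readOnlyTests = [
    (db: string[]) => { if (db[0] !== 'alice') throw new Error('unexpected seed'); },
    (db: string[]) => { if (db.length !== 1) throw new Error('unexpected size'); },
];

runWithPerTestReset(readOnlyTests);
console.log(setupCalls); // 2
setupCalls = 0;
runWithSharedState(readOnlyTests);
console.log(setupCalls); // 1
```

Only tests that do not mutate the shared state can be grouped this way; mutating tests still need the per-test reset (`beforeEach`) for isolation.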
packages/schema/tests/schema/validation/attribute-validation.test.ts (1)
- 1016-1027: The test case for `@@validate(n.value > 0)` attempts to use a relation field (`n`) in a validation condition, which is not supported, as indicated by the error message "`@@validate` condition cannot use relation fields". This test correctly pins down a limitation of the schema validation logic, ensuring developers are aware of the constraint around relation fields in validation conditions, which matters for the integrity and reliability of the schema validation process.
packages/schema/tests/generator/expression-writer.test.ts (9)
- 1181-1181: The logic for checking whether a role is included in the user's roles using `includes` is correct. However, consider adding a comment explaining the ternary operation's purpose, especially for readers unfamiliar with this pattern; it will improve readability and maintainability.
- 1208-1208: The use of `includes` for checking substring presence in `auth().email` is correct. However, ensure that the user object and its `email` property are always defined to avoid potential runtime errors; adding null checks or using optional chaining (`?.`) would enhance robustness.

Verification successful: the code uses the optional chaining (`?.`) and nullish coalescing (`??`) operators when accessing the `user` object and its `email` property, handling the case where `user` or `user.email` might not be defined. This aligns with the suggestion above, so the comment is consistent with the codebase.

- 1221-1221: The implementation of case-insensitive email content checking using `toLowerCase()` is appropriate. Handling case sensitivity in user input comparisons is good practice and keeps the logic inclusive of various email formats.
- 1234-1234: The use of `startsWith` for checking the prefix of `auth().email` is correct, straightforward, and effective for the intended validation. Ensure that the `user` object and its `email` property are consistently available to prevent runtime issues.
- 1247-1247: The logic for checking whether `auth().email` ends with a specific substring using `endsWith` is correctly implemented and efficient for validating email suffixes. As with the other string operations, verify the existence of `user` and `email` to avoid errors.

Scripts Executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify that user object and its email property are always defined before this check.

Length of output: 0

Script:

#!/bin/bash
# Search for references to the `user` object and its `email` property before the check.
rg "user" --before-context=10 packages/schema/tests/generator/expression-writer.test.ts

Length of output: 12726
Verification successful: the implementation for checking whether `auth().email` ends with 'test' is correctly done, including the necessary checks for the existence of `user` and `email` before the `endsWith` operation.

- 1260-1260: The repetition of role-inclusion checks using `includes` is consistent with the earlier implementations. As mentioned above, adding explanatory comments would help future maintainers.
- 1273-1273: The logic for checking whether all specified roles are included in `auth().roles` using `every` is correctly implemented and suitable for ensuring that a user has all required roles. Consider edge cases where `user.roles` might be undefined or empty.
- 1286-1286: Using `some` to check whether any of the specified roles are included in `auth().roles` is appropriate and effective for validating user roles against a list of acceptable roles. Ensure robustness by handling cases where `user.roles` could be undefined.

Scripts Executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Ensure the user object and its email property are defined before performing this operation.

Length of output: 0

Script:

#!/bin/bash
# Search for the specific code snippet mentioned in the review comment
rg --context 5 "\(\(user\?\..*endsWith\('test'\)\)\?\?false\)\?{AND:\[\]}:{OR:\[\]}" packages/schema/tests/generator/expression-writer.test.ts

Length of output: 37930

Verification successful: the code handles the case where `user.roles` could be undefined by using the optional chaining operator (`?.`) and a ternary check before calling `some`, so `some` is never invoked on an undefined value. The validation script produced no output for the empty-array case, so that specific scenario could not be verified further, but based on the snippet and the behavior of the JavaScript operations involved, the implementation appears effective for its intended purpose.

- 1299-1299: The implementation for checking whether `auth().roles` is empty using a combination of checks is correct and thorough, covering both null and empty arrays. This is good practice when validating role assignments.

Scripts Executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Validate handling of undefined or empty user.roles array.

Length of output: 0
packages/runtime/src/enhancements/policy/handler.ts (1)
- 1394-1406: The `removeFromParent` method is well implemented, correctly handling the removal of data from both scalar and array values associated with a key in the parent object. The cleanup of the key when the array becomes empty is a thoughtful touch.
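Based on the description above, the behavior can be sketched as follows. This is a hypothetical reconstruction for illustration, not the actual handler code:

```typescript
// Hypothetical sketch: remove `data` from parent[key], which may hold either
// a scalar value or an array; delete the key when nothing remains.
function removeFromParent(parent: any, key: string, data: unknown) {
    if (Array.isArray(parent[key])) {
        parent[key] = parent[key].filter((item: unknown) => item !== data);
        // Clean up the key once the array becomes empty.
        if (parent[key].length === 0) {
            delete parent[key];
        }
    } else if (parent[key] === data) {
        delete parent[key];
    }
}

const parent: any = { tags: ['a', 'b'], owner: 'alice' };
removeFromParent(parent, 'tags', 'a');
console.log(parent.tags);      // [ 'b' ]
removeFromParent(parent, 'tags', 'b');
console.log('tags' in parent); // false
removeFromParent(parent, 'owner', 'alice');
console.log(parent);           // {}
```

The key deletion in the empty-array case is what keeps the parent payload from accumulating empty containers after nested removals.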
    x Int
    @@validate(has(x, 1))
}
`)
-).toContain('left operand of "in" must be of scalar type');
+).toContain('argument is not assignable to parameter');

expect(
    await loadModelWithError(`
    ${prelude}
    model M {
        id String @id
-       x Int
-       @@validate(has(x, 1))
+       x Int[]
+       @@validate(hasSome(x, 1))
The use of `@@validate` with the `has` and `hasSome` functions is shown in the tests. However, there is a potential issue with the error message expectation for the `hasSome(x, 1)` test case: the message 'argument is not assignable to parameter' may not accurately reflect the issue if `x` is an array of integers and `1` is an integer, since that should be a valid scenario for `hasSome`. It's important that error messages are accurate and helpful for debugging.

Consider reviewing the implementation of the `hasSome` function or the test case to ensure that the error message accurately reflects the validation logic and clearly describes the nature of the issue.

In the test cases for `@@validate(has(x, 1))` and `@@validate(hasSome(x, 1))`, there seems to be a misunderstanding or potential misuse of the `has` and `hasSome` functions. These functions are typically used to check for the presence of elements in an array, and the error message 'argument is not assignable to parameter' suggests a type mismatch or incorrect usage of the function parameters.

Ensure that `has` and `hasSome` are used correctly in the context of array fields. If `x` is intended to be an array, the test cases should define `x` as an array type, and the error messages should be clear and informative enough for developers to understand the nature of the validation error.

- @@validate(has(x, 1))
+ @@validate(has(x, [1])) // Assuming x is an array field
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.
    x Int
    @@validate(has(x, 1))
}
`)
).toContain('argument is not assignable to parameter');

expect(
    await loadModelWithError(`
    ${prelude}
    model M {
        id String @id
        x Int
        @@validate(has(x, [1])) // Assuming x is an array field
    }
`)
).toContain('argument is not assignable to parameter');

expect(
    await loadModelWithError(`
    ${prelude}
    model M {
        id String @id
        x Int[]
        @@validate(hasSome(x, 1))
private mergeToParent(parent: any, key: string, value: any) {
    if (parent[key]) {
        if (Array.isArray(parent[key])) {
            parent[key].push(value);
        } else {
            parent[key] = [parent[key], value];
        }
    } else {
        parent[key] = value;
    }
The `mergeToParent` method efficiently handles merging values into the parent object. However, it might be beneficial to explicitly check for non-array values and convert them into an array before pushing the new value. This ensures that all values under a key are consistently stored as arrays, simplifying downstream processing.

- parent[key] = [parent[key], value];
+ // Ensure consistent array storage for multiple values
+ parent[key] = [].concat(parent[key], value);
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.
private mergeToParent(parent: any, key: string, value: any) {
    if (parent[key]) {
        if (Array.isArray(parent[key])) {
            parent[key].push(value);
        } else {
            // Ensure consistent array storage for multiple values
            parent[key] = [].concat(parent[key], value);
        }
    } else {
        parent[key] = value;
    }
}
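A quick standalone check of the merge behavior discussed in this thread (the function body is copied from the bot's suggestion; the surrounding driver code is illustrative only):

```typescript
// Standalone check: the first value under a key is stored as a scalar,
// and subsequent values promote it into a flat array.
function mergeToParent(parent: any, key: string, value: any) {
    if (parent[key]) {
        if (Array.isArray(parent[key])) {
            parent[key].push(value);
        } else {
            // [].concat flattens the existing scalar and the new value
            parent[key] = [].concat(parent[key], value);
        }
    } else {
        parent[key] = value;
    }
}

const parent: any = {};
mergeToParent(parent, 'k', 1);
console.log(parent.k); // 1
mergeToParent(parent, 'k', 2);
console.log(parent.k); // [ 1, 2 ]
mergeToParent(parent, 'k', 3);
console.log(parent.k); // [ 1, 2, 3 ]
```

Note that a falsy first value (such as `0` or `''`) would fail the `if (parent[key])` truthiness check and be overwritten; an `in` check would be stricter if that matters for the payloads involved.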
|
||
await expect( | ||
db.model.create({ | ||
data: { | ||
id: 1, | ||
s: 'a1b2c', | ||
e: '[email protected]', | ||
u: 'https://www.zenstack.dev', | ||
d: '2024-01-01T00:00:00.000Z', | ||
}, | ||
}) | ||
).toResolveTruthy(); | ||
|
||
await expect(db.model.update({ where: { id: 1 }, data: {} })).toResolveTruthy(); | ||
|
||
await expect(db.model.update({ where: { id: 1 }, data: { s: 'a2b3c' } })).toResolveTruthy(); | ||
await expect(db.model.update({ where: { id: 1 }, data: { s: 'c2b3c' } })).toBeRejectedByPolicy(); | ||
await expect(db.model.update({ where: { id: 1 }, data: { s: 'a1b2c3' } })).toBeRejectedByPolicy(); | ||
await expect(db.model.update({ where: { id: 1 }, data: { s: 'aaccc' } })).toBeRejectedByPolicy(); | ||
await expect(db.model.update({ where: { id: 1 }, data: { s: 'a1b2d' } })).toBeRejectedByPolicy(); | ||
await expect(db.model.update({ where: { id: 1 }, data: { s: 'a1-3c' } })).toBeRejectedByPolicy(); | ||
|
||
await expect(db.model.update({ where: { id: 1 }, data: { e: '[email protected]' } })).toResolveTruthy(); | ||
await expect(db.model.update({ where: { id: 1 }, data: { e: 'xyz' } })).toBeRejectedByPolicy(); | ||
|
||
await expect(db.model.update({ where: { id: 1 }, data: { u: 'https://zenstack.dev/docs' } })).toResolveTruthy(); | ||
await expect(db.model.update({ where: { id: 1 }, data: { u: 'xyz' } })).toBeRejectedByPolicy(); | ||
|
||
await expect(db.model.update({ where: { id: 1 }, data: { d: '2025-01-01T00:00:00.000Z' } })).toResolveTruthy(); | ||
await expect(db.model.update({ where: { id: 1 }, data: { d: 'xyz' } })).toBeRejectedByPolicy(); | ||
}); | ||
|
||
    it('optionality with array functions', async () => {
        let prisma;
        try {
            const dbUrl = await createPostgresDb('field-validation-array-funcs');
            const r = await loadSchema(
                `
                model Model {
                    id Int @id @default(autoincrement())
                    x String[]
                    y Int[]

                    @@validate(
                        has(x, 'a') &&
                        hasEvery(x, ['a', 'b']) &&
                        hasSome(x, ['x', 'y']) &&
                        (y == null || !isEmpty(y))
                    )

                    @@allow('all', true)
                }
                `,
                {
                    provider: 'postgresql',
                    dbUrl,
                }
            );

            const db = r.enhance();
            prisma = r.prisma;

            await expect(db.model.create({ data: { id: 1, x: ['a', 'b', 'x'], y: [1] } })).toResolveTruthy();
            await expect(db.model.update({ where: { id: 1 }, data: {} })).toResolveTruthy();
            await expect(db.model.update({ where: { id: 1 }, data: { x: ['b', 'x'] } })).toBeRejectedByPolicy();
            await expect(db.model.update({ where: { id: 1 }, data: { x: ['a', 'b'] } })).toBeRejectedByPolicy();
            await expect(db.model.update({ where: { id: 1 }, data: { y: [] } })).toBeRejectedByPolicy();
        } finally {
            // optional chaining: if setup failed before `prisma` was assigned,
            // don't mask the original error with a TypeError in the finally block
            await prisma?.$disconnect();
            await dropPostgresDb('field-validation-array-funcs');
        }
    });

    it('null comparison', async () => {
        const { enhance } = await loadSchema(`
            model Model {
                id Int @id @default(autoincrement())
                x Int
                y Int

                @@validate(x == null || !(x <= 0))
                @@validate(y != null && !(y > 1))

                @@allow('all', true)
            }
        `);

        const db = enhance();

        await expect(db.model.create({ data: { id: 1, x: 1 } })).toBeRejectedByPolicy();
        await expect(db.model.create({ data: { id: 1, x: 1, y: 2 } })).toBeRejectedByPolicy();
        await expect(db.model.create({ data: { id: 1, x: 0, y: 0 } })).toBeRejectedByPolicy();
        await expect(db.model.create({ data: { id: 1, x: 1, y: 0 } })).toResolveTruthy();

        await expect(db.model.update({ where: { id: 1 }, data: {} })).toBeRejectedByPolicy();
        await expect(db.model.update({ where: { id: 1 }, data: { y: 2 } })).toBeRejectedByPolicy();
        await expect(db.model.update({ where: { id: 1 }, data: { y: 1 } })).toResolveTruthy();
        await expect(db.model.update({ where: { id: 1 }, data: { x: 2, y: 1 } })).toResolveTruthy();
    });
});
📝 NOTE
This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [21-775]
The test cases cover a wide range of validation scenarios, including simple direct writes, nested creates and updates, string transformations, and more complex validation logic. Each test case is well structured, with clear expectations for policy rejections and successful operations, and it's commendable that both positive and negative paths are exercised.
A few areas could be refined further:
- Duplication: the setup for similar test cases is repeated. Consider helper functions or before hooks to reduce redundancy.
- Error handling: the tests expect policy rejections but don't verify the specific error messages or codes, so a rejection for the wrong reason would still pass.
- Coverage: ensure all edge cases and boundary conditions are covered, especially for the more complex validation logic.
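The deduplication suggestion above can be sketched as a table-driven helper. This is a framework-agnostic illustration, not ZenStack's API: `update` stands in for a call like `db.model.update`, and the case shapes are hypothetical.

```typescript
// Each case pairs an input with whether the write is expected to succeed;
// the helper returns a description of every mismatch instead of repeating
// the expect(...) boilerplate per case.
type WriteCase<T> = { data: T; ok: boolean };

async function runWriteCases<T>(
    update: (data: T) => Promise<unknown>,
    cases: WriteCase<T>[]
): Promise<string[]> {
    const failures: string[] = [];
    for (const { data, ok } of cases) {
        let resolved = true;
        try {
            await update(data);
        } catch {
            resolved = false; // treat any rejection as a policy/validation failure
        }
        if (resolved !== ok) {
            failures.push(`unexpected ${resolved ? 'success' : 'rejection'} for ${JSON.stringify(data)}`);
        }
    }
    return failures;
}
```

In a jest test, the collected mismatches could then be surfaced in one place via something like `expect(failures).toEqual([])`, which also reports every failing case at once instead of stopping at the first.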
describe('issue 1078', () => {
    it('regression', async () => {
        const { enhance } = await loadSchema(
            `
            model Counter {
                id String @id

                name String
                value Int

                @@validate(value >= 0)
                @@allow('all', true)
            }
            `
        );

        const db = enhance();

        await expect(
            db.counter.create({
                data: { id: '1', name: 'It should create', value: 1 },
            })
        ).toResolveTruthy();

        //! This query fails validation
        await expect(
            db.counter.update({
                where: { id: '1' },
                data: { name: 'It should update' },
            })
        ).toResolveTruthy();
    });
The regression test for issue 1078 correctly sets up a schema, enhances it, and performs create and update operations on the `Counter` model. However, the comment `//! This query fails validation` contradicts the assertion next to it, which expects the update to resolve, so the intended behavior is unclear. If the update is genuinely expected to fail, the assertion should explicitly expect a rejection, e.g. via `.rejects.toThrow()` or a similar matcher; if the comment merely documents the pre-fix behavior that this regression test guards against, consider rewording it to say so.
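One way to make the expected outcome explicit without committing to a particular matcher is a small settle helper; this is a sketch, and the names are illustrative rather than part of any test framework.

```typescript
// Resolves with whether the promise rejected (and with the error if so),
// so a test can assert the exact outcome instead of relying on a comment,
// and can inspect the rejection reason to confirm it was a validation
// failure rather than some unrelated error.
async function settle(p: Promise<unknown>): Promise<{ rejected: boolean; error?: unknown }> {
    try {
        await p;
        return { rejected: false };
    } catch (error) {
        return { rejected: true, error };
    }
}
```

In jest this corresponds to `await expect(promise).rejects.toThrow(...)`; the helper form additionally exposes the error object for finer-grained checks on the rejection reason.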
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- packages/schema/tests/schema/validation/datamodel-validation.test.ts (3 hunks)
Files skipped from review as they are similar to previous changes (1)
- packages/schema/tests/schema/validation/datamodel-validation.test.ts