Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

merge from dev #1110

Merged
merged 33 commits into from
Mar 9, 2024
Merged

merge from dev #1110

merged 33 commits into from
Mar 9, 2024

Conversation

ymc9
Copy link
Member

@ymc9 ymc9 commented Mar 9, 2024

Summary by CodeRabbit

  • New Features
    • Added reverse iteration functionality in data handling for improved operation sequence.
    • Introduced enhanced policy handling methods for data connections and updates.
    • Implemented new validation checks and methods for attribute and data model integrity.
    • Expanded TypeScript expression handling capabilities for better validation and transformation logic.
  • Enhancements
    • Improved AST node traversal and utility functions for more precise code analysis and manipulation.
  • Bug Fixes
    • Refined conditional expression logic in tests to accurately reflect role and email validation scenarios.
    • Updated validation logic and error messages in schema tests for clearer feedback on failures.
  • Tests
    • Added integration tests for regression scenarios and database operation validations to ensure reliability and correctness.

ymc9 and others added 30 commits February 11, 2024 22:05
Copy link
Contributor

coderabbitai bot commented Mar 9, 2024

Walkthrough

Walkthrough

The updates across various packages focus on enhancing data handling, validation, and policy enforcement mechanisms. Key modifications include iterating over data in reverse order for nested writes, refining policy management with new methods, improving validation for data model fields and expressions, and extending support for TypeScript expression transformations. These changes aim to optimize performance, clarify logic, and ensure data integrity and consistency across operations.

Changes

Files Summary
.../nested-write-visitor.ts Replaced enumerate with enumerateReverse for iterating over data in reverse order.
.../policy/handler.ts Introduced mergeToParent and removeFromParent methods in PolicyProxyHandler for efficient policy handling.
.../attribute-application-validator.ts, .../expression-validator.ts Added validation checks and methods for handling @@validate conditions and data model attributes.
.../schema-gen.ts Included a check using isDataModelFieldReference in makeValidationRefinements function.
.../ast-utils.ts, .../pkg-utils.ts Added findUpAst function and refined FindUp type for improved AST traversal and path searching logic.
.../expression-writer.test.ts Updated conditional expressions in tests for better evaluation related to user roles and email properties.
.../attribute-validation.test.ts, .../datamodel-validation.test.ts Modified validation logic and updated error messages for model attributes and field attributes.
.../typescript-expression-transformer.ts Enhanced expression handling with new methods and adjustments to existing ones for better validation context handling.
.../field-validation.test.ts Added model-level validation scenarios for field validation covering a range of conditions.
.../issue-1078.test.ts, .../issue-1080.test.ts Introduced tests for regression scenarios and database operations related to issues 1078 and 1080.

Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

Share

Tips

Chat

There are 3 ways to chat with CodeRabbit:

Note: Auto-reply has been disabled for this repository by the repository owner. The CodeRabbit bot will not respond to your replies unless it is explicitly tagged.

  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit tests for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit tests.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger a review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • The JSON schema for the configuration file is available here.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/coderabbit-overrides.v2.json

CodeRabbit Discord Community

Join our Discord Community to get help, request features, and share feedback.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Review Status

Actionable comments generated: 5

Configuration used: CodeRabbit UI

Commits Files that changed from the base of the PR and between 9f9d277 and 841989d.
Files selected for processing (14)
  • packages/runtime/src/cross/nested-write-visitor.ts (11 hunks)
  • packages/runtime/src/enhancements/policy/handler.ts (4 hunks)
  • packages/schema/src/language-server/validator/attribute-application-validator.ts (2 hunks)
  • packages/schema/src/language-server/validator/expression-validator.ts (4 hunks)
  • packages/schema/src/plugins/zod/utils/schema-gen.ts (2 hunks)
  • packages/schema/src/utils/ast-utils.ts (1 hunks)
  • packages/schema/src/utils/pkg-utils.ts (3 hunks)
  • packages/schema/tests/generator/expression-writer.test.ts (9 hunks)
  • packages/schema/tests/schema/validation/attribute-validation.test.ts (3 hunks)
  • packages/schema/tests/schema/validation/datamodel-validation.test.ts (9 hunks)
  • packages/sdk/src/typescript-expression-transformer.ts (9 hunks)
  • tests/integration/tests/enhancements/with-policy/field-validation.test.ts (2 hunks)
  • tests/integration/tests/regression/issue-1078.test.ts (1 hunks)
  • tests/integration/tests/regression/issue-1080.test.ts (1 hunks)
Additional comments: 37
tests/integration/tests/regression/issue-1078.test.ts (1)
  • 36-54: The test case for reading a Post model correctly sets up a schema, enhances it, and performs a create operation followed by two findUnique operations with different select conditions. The assertions check for null and specific field values, which aligns with the expected behavior based on the permissions set in the schema.

The test logic and assertions are correctly implemented and align with the expected behavior for the Post model's read operations.

tests/integration/tests/regression/issue-1080.test.ts (1)
  • 3-131: The test case for regression testing of issue 1080 thoroughly tests the create and update operations on a Project model with related Field models. The use of upsert and update operations to modify the Fields of a Project is well-tested through multiple scenarios, including updating existing fields, adding new fields, and modifying fields in various ways.

The test case is comprehensive and covers a wide range of scenarios for the Project model's create and update operations with related Field models. The assertions correctly validate the expected outcomes.

packages/schema/src/utils/pkg-utils.ts (2)
  • 9-16: The refinement of the FindUp type to handle boolean extensions more explicitly is a good improvement. It enhances type safety and clarity by distinguishing between single and multiple path results based on the boolean parameter.

The changes to the FindUp type are well-implemented and improve the clarity and usability of the type.

  • 6-23: > 📝 NOTE

This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [18-31]

The improvements to the findUp function's parameter handling and logic are well thought out. The function now more clearly handles the distinction between searching for a single path versus multiple paths, and it correctly returns undefined when no paths are found. The early return when no names are provided (if (!names.some((name) => !!name)) return undefined;) is a good practice for avoiding unnecessary work.

The enhancements to the findUp function improve its clarity, consistency, and performance. The changes are correctly implemented.

packages/schema/src/utils/ast-utils.ts (1)
  • 234-245: The addition of the findUpAst function is a valuable utility for AST manipulation and analysis. It provides a generic way to traverse upward from any AST node until a node that satisfies the given predicate is found. This function can be widely used in various AST processing scenarios where context or parent nodes need to be accessed dynamically.

The implementation of the findUpAst function is correct and follows best practices for tree traversal. It is a useful addition to the AST utilities.

packages/schema/src/plugins/zod/utils/schema-gen.ts (1)
  • 207-217: The update to the makeValidationRefinements function to include a check for simple field references using isDataModelFieldReference is a thoughtful improvement. This change ensures that in the validation context, where all fields are optional, comparing any field against null is allowed and treated correctly. The fallback to true for undefined values aligns with the expected behavior in validation scenarios.

The refinement to handle simple field references and treat undefined as true in the validation context is correctly implemented and enhances the function's flexibility and correctness.

packages/schema/src/language-server/validator/expression-validator.ts (1)
  • 227-229: The addition of the isInValidationContext method is a significant enhancement to the expression validation logic. By utilizing the findUpAst utility, this method efficiently determines if a given node is within a validation context, specifically looking for the @@validate attribute. This approach improves the validator's ability to contextually analyze expressions and apply appropriate validation rules.

The implementation of the isInValidationContext method is correct and effectively leverages the findUpAst utility for context determination. It is a valuable addition to the expression validator's capabilities.

packages/schema/src/language-server/validator/attribute-application-validator.ts (2)
  • 18-18: The import of isDataModelFieldReference is correctly added to support the new validation logic in the _checkValidate method. This ensures that the method can utilize this utility function to check if a node is a reference to a data model field.
  • 154-165: The _checkValidate method correctly implements validation logic for the @@validate condition involving relation fields. It uses streamAst to traverse the condition's AST and isDataModelFieldReference to check if any node is a reference to a data model field, which is then further validated with isDataModel to ensure it's a relation field. If such a node is found, an error is accepted, indicating that @@validate conditions cannot use relation fields. This addition enhances the validation capabilities of the AttributeApplicationValidator class by enforcing constraints on the use of relation fields in validation conditions.

However, it's important to ensure that the error message provided is clear and actionable for the developer. The current message, "@@validate condition cannot use relation fields", effectively communicates the restriction but could be enhanced by suggesting alternative approaches or referencing documentation for more details.

Consider enhancing the error message with suggestions for alternative approaches or a reference to documentation for more details on handling validation conditions involving relation fields.

packages/runtime/src/cross/nested-write-visitor.ts (2)
  • 219-233: > 📝 NOTE

This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [158-293]

The replacement of enumerate with enumerateReverse in various actions (create, connectOrCreate, connect, disconnect, set, update, updateMany, upsert, delete, and deleteMany) within the NestedWriteVisitor class is correctly implemented. This change ensures that data is iterated in reverse order, which can be particularly useful for operations that might modify the collection being iterated over, such as deletions or updates that depend on the order of elements. Iterating in reverse order helps prevent issues related to shifting indices after modifications.

This strategic modification enhances the robustness of data handling within the NestedWriteVisitor class, especially in scenarios where the order of operations is critical or where modifications to the data structure during iteration could lead to unexpected behavior.

  • 344-352: The implementation of the enumerateReverse method is correctly done, providing a way to iterate over a collection in reverse order. This method checks if the input data is an array and iterates over it in reverse order using a for loop. If data is not an array, it yields the data directly. This implementation is efficient and correctly handles both array and non-array inputs, ensuring that the reverse iteration logic is applied only when applicable.

This utility method is a key part of the changes made to the NestedWriteVisitor class, enabling the reverse order iteration functionality that supports the modifications in data handling logic.

packages/sdk/src/typescript-expression-transformer.ts (4)
  • 16-21: The addition of utility methods isArrayExpr, isLiteralExpr, isNullExpr, and isThisExpr to the imports from @zenstackhq/language/ast is correctly done. These methods are essential for the new logic introduced in the TypeScriptExpressionTransformer class, allowing for more precise type checks and handling of AST nodes. This enhancement improves the transformer's ability to accurately process and transform expressions by enabling more granular type checking and handling specific to the nature of the expression being transformed.
  • 174-184: The adjustments made to the _length function handler, specifically the use of ensureBooleanTernary to handle different cases based on the presence of min and max arguments, are correctly implemented. This change enhances the logic around boolean transformations by ensuring that the length checks are correctly applied and that the result is appropriately coerced into a boolean value in the context of validation rules. The use of ensureBooleanTernary helps in handling cases where the field might be undefined, ensuring that the validation logic remains robust and accurate.
  • 218-224: The use of ensureBooleanTernary in the _regex and _email function handlers to ensure the result of the regex and email validation is correctly coerced into a boolean value is a good practice. This adjustment improves the handling of boolean transformations in validation context scenarios, ensuring that the validation rules are correctly applied and that the results are accurately represented as boolean values. This change contributes to the overall robustness and reliability of the expression transformation logic within the TypeScriptExpressionTransformer class.
  • 340-351: The adjustments made to the unary method to handle validation context scenarios, specifically the treatment of unary operations involving undefined fields as boolean true in validation rules, are correctly implemented. This change enhances the logic around boolean transformations by ensuring that unary operations in validation contexts are handled appropriately, taking into account the possibility of undefined fields. The use of ensureBooleanTernary in this context ensures that the transformation logic remains accurate and robust, contributing to the overall effectiveness of the expression transformer in handling validation scenarios.
packages/schema/tests/schema/validation/datamodel-validation.test.ts (8)
  • 163-163: The trailing semicolon has been removed here, which aligns with the summary mentioning the removal of trailing semicolons after function calls. This change is consistent with JavaScript's automatic semicolon insertion (ASI) mechanism, which allows for omitting semicolons at the end of statements. However, it's important to maintain consistency throughout the project. If the project's style guide or linter configuration enforces semicolons, this change might not be aligned with the project's coding standards.

Ensure that this change aligns with the project's coding standards regarding semicolon usage.

  • 167-167: The spacing around the template literal here has been adjusted, which is mentioned in the summary. This change improves readability and consistency in how template literals are used throughout the codebase. It's a good practice to maintain consistent spacing around operators and template literals for better readability and to adhere to coding standards.
  • 197-197: The spacing around the template literal has been adjusted here as well. This change enhances readability and maintains consistency in the codebase. It's important to follow consistent coding practices, especially in a collaborative environment, to ensure that the code is easily understandable by all team members.
  • 300-302: The error message has been updated for the case when @id is used on a Json field. This change is part of the updates to error messages for specific cases related to field attributes, as mentioned in the summary. Updating error messages to be more specific and informative is a good practice, as it helps developers understand the nature of the error more quickly and how to resolve it.

Ensure that all updated error messages are tested to verify their accuracy and helpfulness.

  • 314-315: Another instance of an updated error message, this time for when @id is used on a reference field. This change improves the specificity and clarity of the error message, which is beneficial for developers encountering this validation error. It's important to ensure that error messages are both accurate and helpful, guiding the developer towards resolving the issue.

Ensure that all updated error messages are tested to verify their accuracy and helpfulness.

  • 328-329: This is yet another update to an error message, specifically for when @@id is used on a reference field. Similar to previous comments, updating error messages to be more specific and informative is a positive change. It enhances the developer experience by providing clearer guidance on how to address validation errors.

Ensure that all updated error messages are tested to verify their accuracy and helpfulness.

  • 675-675: The error message here has been updated to indicate that a model cannot be extended because it's not abstract. This is part of the broader effort to refine error messages for specific validation cases. Such updates are crucial for improving the developer experience by providing clear and actionable feedback on schema validation errors.

Ensure that all updated error messages are tested to verify their accuracy and helpfulness.

  • 699-699: This updated error message relates to a missing opposite relation field on a model. It's part of the ongoing improvements to error messages within the schema validation context. Clear and specific error messages are essential for efficient debugging and code maintenance, especially in complex schema validation scenarios.

Ensure that all updated error messages are tested to verify their accuracy and helpfulness.

tests/integration/tests/enhancements/with-policy/field-validation.test.ts (3)
  • 1-2: The imports from @zenstackhq/runtime and @zenstackhq/testtools are correctly used to set up the integration tests. It's good practice to keep these imports clean and organized for better readability and maintenance.
  • 1-5: > 📝 NOTE

This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [3-11]

The beforeAll setup function is well-structured, loading a schema and enhancing the database client for use in the tests. This setup is crucial for ensuring that the database schema is correctly applied before running the tests. However, it's important to ensure that the schema loaded here matches the actual application schema to avoid discrepancies in the test environment.

Verification successful

The verification process has confirmed that both the application and the test environments use the same schema file located at ./packages/plugins/trpc/tests/projects/t3-trpc-v10/prisma/schema.prisma. This ensures that there are no discrepancies between the test environment and the actual application schema, as they are inherently the same. Therefore, the initial review comment concern is addressed.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that the schema loaded in tests matches the actual application schema.
diff tests/integration/tests/enhancements/with-policy/schema.prisma src/schema.prisma

Length of output: 234



Script:

#!/bin/bash
# Locate the application schema file
fd schema.prisma --exec echo Application schema: {}

# Locate the test schema file
fd schema.prisma --exec echo Test schema: {}

Length of output: 281

* 1-5: > 📝 **NOTE** > This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [12-20]

The use of beforeEach to reset and push the database schema before each test ensures a clean state for each test case. This is a good practice for isolation between tests. However, consider the performance implications of resetting the database frequently in a large test suite. For larger projects, it might be beneficial to group tests that can share the same initial state to reduce setup time.

Consider optimizing database setup and teardown processes to improve test suite performance, especially for larger projects.

packages/schema/tests/schema/validation/attribute-validation.test.ts (1)
  • 1016-1027: The test case for @@validate(n.value > 0) attempts to use a relation field (n) in a validation condition, which is not supported as indicated by the error message @@validate condition cannot use relation fields. This is an important limitation to be aware of when designing validation logic in schemas.

This test case correctly identifies a limitation in the schema validation logic, ensuring that developers are aware of the constraints around using relation fields in validation conditions. It's crucial for maintaining the integrity and reliability of the schema validation process.

packages/schema/tests/generator/expression-writer.test.ts (9)
  • 1181-1181: The logic for checking if a role is included in the user's roles using includes is correct. However, consider adding a comment to explain the ternary operation's purpose, especially for readers unfamiliar with this pattern. This will improve code readability and maintainability.
  • 1208-1208: The use of includes for checking substring presence in auth().email is correct. However, ensure that the user object and its email property are always defined to avoid potential runtime errors. Adding null checks or using optional chaining (?.) could enhance robustness.
Verification successful

Given the lack of output from the initial script, we need to gather more information to verify the handling of the user object and its email property before the check. Let's execute a new script to search the codebase for references to the user object and its email property, focusing on their definitions, assignments, and any null checks or optional chaining applied before the includes check.


The verification process confirms that the code snippet correctly uses optional chaining (?.) and nullish coalescing (??) operators when accessing the user object and its email property. This ensures robust handling of cases where user or user.email might not be defined, aligning with the suggestion in the review comment. Therefore, the review comment is consistent with the codebase.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that user object and its email property are always defined before this check.

Length of output: 0



Script:

#!/bin/bash
# Search for references to the `user` object and its `email` property before the check.
rg "user" --before-context=10 packages/schema/tests/generator/expression-writer.test.ts

Length of output: 12726

* 1221-1221: The implementation for case-insensitive email content checking using `toLowerCase()` is appropriate. It's a good practice to handle case sensitivity in user input comparisons. This approach ensures that the logic is inclusive of various email formats. * 1234-1234: The use of `startsWith` for checking the prefix of `auth().email` is correct. This approach is straightforward and effective for the intended validation. Ensure that the `user` object and its `email` property are consistently available to prevent runtime issues. * 1247-1247: The logic for checking if `auth().email` ends with a specific substring using `endsWith` is correctly implemented. This method is efficient for validating email suffixes. Similar to other string operations, verify the existence of `user` and `email` to avoid errors.
Verification successful

Given the lack of output from the initial script and the need to verify the implementation directly from the codebase, I will generate new shell scripts to examine the specific code snippet mentioned in the review comment. This will help confirm the existence of checks for user and email before performing the endsWith operation.


The implementation for checking if auth().email ends with 'test' is correctly done, including the necessary checks for the existence of user and email to avoid errors. The logic uses optional chaining and nullish coalescing to safely perform the operation, which aligns with the review comment.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Ensure the user object and its email property are defined before performing this operation.

Length of output: 0



Script:

#!/bin/bash
# Search for the specific code snippet mentioned in the review comment
rg --context 5 "`\(\(user\?\..*endsWith\('test'\)\)\?\?false\)\?{AND:\[\]}:{OR:\[\]}`" packages/schema/tests/generator/expression-writer.test.ts

Length of output: 37930

* 1260-1260: The repetition of role inclusion checks using `includes` is observed here as well. It's consistent with previous implementations. As mentioned earlier, adding explanatory comments could enhance understanding for future maintainers. * 1273-1273: The logic for checking if all specified roles are included in `auth().roles` using `every` is correctly implemented. This method is suitable for ensuring that a user has all required roles. Consider edge cases where `user.roles` might be undefined or empty. * 1286-1286: Using `some` to check if any of the specified roles are included in `auth().roles` is appropriate. This implementation is effective for validating user roles against a list of acceptable roles. Ensure robustness by handling cases where `user.roles` could be undefined.
Verification successful

The code snippet provided explicitly handles the case where user.roles could be undefined by using the optional chaining operator ?. and a ternary operator to check for undefined before proceeding with the some method. This approach ensures that the code does not attempt to call some on an undefined value, which aligns with the comment's suggestion for handling undefined user.roles. However, the script intended for validation did not produce any output, and without additional context or a more detailed script, further verification specific to handling an empty user.roles array cannot be performed. Based on the analysis of the provided code snippet and the logical behavior of JavaScript operations involved, the implementation appears to be effective for the intended purpose.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Validate handling of undefined or empty user.roles array.

Length of output: 0

* 1299-1299: The implementation for checking if `auth().roles` is empty using a combination of checks is correct. This approach is thorough and ensures that both null and empty arrays are considered. Good practice in validating role assignments.
packages/runtime/src/enhancements/policy/handler.ts (1)
  • 1394-1406: The removeFromParent method is well-implemented, correctly handling the removal of data from both scalar and array values associated with a key in the parent object. The cleanup of the key when the array becomes empty is a thoughtful touch.

Comment on lines +994 to +1006
x Int
@@validate(has(x, 1))
}
`)
).toContain('left operand of "in" must be of scalar type');
).toContain('argument is not assignable to parameter');

expect(
await loadModelWithError(`
${prelude}
model M {
id String @id
x Int
@@validate(has(x, 1))
x Int[]
@@validate(hasSome(x, 1))
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The use of @@validate with has and hasSome functions is shown in the tests. However, there's a potential issue with the error message expectation for the hasSome(x, 1) test case. The error message argument is not assignable to parameter might not accurately reflect the issue if x is an array of integers and 1 is an integer, as this should be a valid scenario for hasSome. It's important to ensure that the error messages are accurate and helpful for debugging.

Consider reviewing the implementation of the hasSome function or the test case to ensure that the error message accurately reflects the validation logic and provides clear guidance on the nature of the issue.


In the test cases for @@validate(has(x, 1)) and @@validate(hasSome(x, 1)), there seems to be a misunderstanding or potential misuse of the has and hasSome functions. These functions are typically used to check for the presence of an element in an array. The error message argument is not assignable to parameter suggests that there's a type mismatch or incorrect usage of the function parameters.

Ensure that the has and hasSome functions are used correctly in the context of array fields. If x is intended to be an array, the test cases should reflect that by defining x as an array type. Additionally, the error messages should be clear and informative to help developers understand the nature of the validation error.

- @@validate(has(x, 1))
+ @@validate(has(x, [1])) // Assuming x is an array field

Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.

Suggested change
x Int
@@validate(has(x, 1))
}
`)
).toContain('left operand of "in" must be of scalar type');
).toContain('argument is not assignable to parameter');
expect(
await loadModelWithError(`
${prelude}
model M {
id String @id
x Int
@@validate(has(x, 1))
x Int[]
@@validate(hasSome(x, 1))
x Int
@@validate(has(x, [1])) // Assuming x is an array field
}
`)
).toContain('argument is not assignable to parameter');
expect(
await loadModelWithError(`
${prelude}
model M {
id String @id
x Int[]
@@validate(hasSome(x, 1))

Comment on lines +1382 to +1391
private mergeToParent(parent: any, key: string, value: any) {
if (parent[key]) {
if (Array.isArray(parent[key])) {
parent[key].push(value);
} else {
parent[key] = [parent[key], value];
}
} else {
parent[key] = value;
}
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The mergeToParent method efficiently handles merging values into the parent object. However, it might be beneficial to explicitly check for non-array values and convert them into an array before pushing the new value. This ensures that all values under a key are consistently stored as arrays, simplifying downstream processing.

-                parent[key] = [parent[key], value];
+                // Ensure consistent array storage for multiple values
+                parent[key] = [].concat(parent[key], value);

Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.

Suggested change
private mergeToParent(parent: any, key: string, value: any) {
if (parent[key]) {
if (Array.isArray(parent[key])) {
parent[key].push(value);
} else {
parent[key] = [parent[key], value];
}
} else {
parent[key] = value;
}
private mergeToParent(parent: any, key: string, value: any) {
if (parent[key]) {
if (Array.isArray(parent[key])) {
parent[key].push(value);
} else {
// Ensure consistent array storage for multiple values
parent[key] = [].concat(parent[key], value);
}
} else {
parent[key] = value;
}

Comment on lines 609 to +940

await expect(
db.model.create({
data: {
id: 1,
s: 'a1b2c',
e: '[email protected]',
u: 'https://www.zenstack.dev',
d: '2024-01-01T00:00:00.000Z',
},
})
).toResolveTruthy();

await expect(db.model.update({ where: { id: 1 }, data: {} })).toResolveTruthy();

await expect(db.model.update({ where: { id: 1 }, data: { s: 'a2b3c' } })).toResolveTruthy();
await expect(db.model.update({ where: { id: 1 }, data: { s: 'c2b3c' } })).toBeRejectedByPolicy();
await expect(db.model.update({ where: { id: 1 }, data: { s: 'a1b2c3' } })).toBeRejectedByPolicy();
await expect(db.model.update({ where: { id: 1 }, data: { s: 'aaccc' } })).toBeRejectedByPolicy();
await expect(db.model.update({ where: { id: 1 }, data: { s: 'a1b2d' } })).toBeRejectedByPolicy();
await expect(db.model.update({ where: { id: 1 }, data: { s: 'a1-3c' } })).toBeRejectedByPolicy();

await expect(db.model.update({ where: { id: 1 }, data: { e: '[email protected]' } })).toResolveTruthy();
await expect(db.model.update({ where: { id: 1 }, data: { e: 'xyz' } })).toBeRejectedByPolicy();

await expect(db.model.update({ where: { id: 1 }, data: { u: 'https://zenstack.dev/docs' } })).toResolveTruthy();
await expect(db.model.update({ where: { id: 1 }, data: { u: 'xyz' } })).toBeRejectedByPolicy();

await expect(db.model.update({ where: { id: 1 }, data: { d: '2025-01-01T00:00:00.000Z' } })).toResolveTruthy();
await expect(db.model.update({ where: { id: 1 }, data: { d: 'xyz' } })).toBeRejectedByPolicy();
});

it('optionality with array functions', async () => {
let prisma;
try {
const dbUrl = await createPostgresDb('field-validation-array-funcs');
const r = await loadSchema(
`
model Model {
id Int @id @default(autoincrement())
x String[]
y Int[]

@@validate(
has(x, 'a') &&
hasEvery(x, ['a', 'b']) &&
hasSome(x, ['x', 'y']) &&
(y == null || !isEmpty(y))
)

@@allow('all', true)
}
`,
{
provider: 'postgresql',
dbUrl,
}
);

const db = r.enhance();
prisma = r.prisma;

await expect(db.model.create({ data: { id: 1, x: ['a', 'b', 'x'], y: [1] } })).toResolveTruthy();
await expect(db.model.update({ where: { id: 1 }, data: {} })).toResolveTruthy();
await expect(db.model.update({ where: { id: 1 }, data: { x: ['b', 'x'] } })).toBeRejectedByPolicy();
await expect(db.model.update({ where: { id: 1 }, data: { x: ['a', 'b'] } })).toBeRejectedByPolicy();
await expect(db.model.update({ where: { id: 1 }, data: { y: [] } })).toBeRejectedByPolicy();
} finally {
await prisma.$disconnect();
await dropPostgresDb('field-validation-array-funcs');
}
});

it('null comparison', async () => {
const { enhance } = await loadSchema(`
model Model {
id Int @id @default(autoincrement())
x Int
y Int

@@validate(x == null || !(x <= 0))
@@validate(y != null && !(y > 1))

@@allow('all', true)
}
`);

const db = enhance();

await expect(db.model.create({ data: { id: 1, x: 1 } })).toBeRejectedByPolicy();
await expect(db.model.create({ data: { id: 1, x: 1, y: 2 } })).toBeRejectedByPolicy();
await expect(db.model.create({ data: { id: 1, x: 0, y: 0 } })).toBeRejectedByPolicy();
await expect(db.model.create({ data: { id: 1, x: 1, y: 0 } })).toResolveTruthy();

await expect(db.model.update({ where: { id: 1 }, data: {} })).toBeRejectedByPolicy();
await expect(db.model.update({ where: { id: 1 }, data: { y: 2 } })).toBeRejectedByPolicy();
await expect(db.model.update({ where: { id: 1 }, data: { y: 1 } })).toResolveTruthy();
await expect(db.model.update({ where: { id: 1 }, data: { x: 2, y: 1 } })).toResolveTruthy();
});
});
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

📝 NOTE
This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [21-775]

The test cases cover a wide range of validation scenarios, including simple direct writes, nested creates and updates, string transformations, and more complex validation logic. Each test case is well-structured, with clear expectations for policy rejections and successful operations. It's commendable that the tests cover both positive and negative scenarios, ensuring comprehensive coverage of the validation logic.

However, there are a few areas where the tests could be further refined:

  1. Duplication: There's some repetition in the setup for similar test cases. Consider using helper functions or before hooks to reduce redundancy.
  2. Error Handling: While the tests correctly expect policy rejections, it might be beneficial to also verify the specific error messages or codes to ensure that the rejections are for the expected reasons.
  3. Coverage: Ensure that all edge cases and boundary conditions are covered, especially for complex validation logic.

Consider refactoring the tests to reduce duplication, enhance error handling specificity, and ensure comprehensive coverage of all validation scenarios.

Comment on lines +3 to +34
describe('issue 1078', () => {
it('regression', async () => {
const { enhance } = await loadSchema(
`
model Counter {
id String @id

name String
value Int

@@validate(value >= 0)
@@allow('all', true)
}
`
);

const db = enhance();

await expect(
db.counter.create({
data: { id: '1', name: 'It should create', value: 1 },
})
).toResolveTruthy();

//! This query fails validation
await expect(
db.counter.update({
where: { id: '1' },
data: { name: 'It should update' },
})
).toResolveTruthy();
});
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The test case for regression testing of issue 1078 correctly sets up a schema, enhances it, and performs create and update operations on the Counter model. However, there's a comment indicating a query fails validation (//! This query fails validation), but it's not clear from the test what the expected behavior is. If the test is meant to ensure the query fails, the assertion should explicitly expect a rejection or an error.

Consider clarifying the expected outcome of the failing query. If the query is expected to fail, use .rejects.toThrow() or a similar assertion to explicitly test for failure.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Commits Files that changed from the base of the PR and between 841989d and 8441f7a.
Files selected for processing (1)
  • packages/schema/tests/schema/validation/datamodel-validation.test.ts (3 hunks)
Files skipped from review as they are similar to previous changes (1)
  • packages/schema/tests/schema/validation/datamodel-validation.test.ts

@ymc9 ymc9 merged commit df07830 into v2 Mar 9, 2024
13 checks passed
@ymc9 ymc9 deleted the chore/merge-dev-0308 branch March 9, 2024 06:11
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

Successfully merging this pull request may close these issues.

5 participants