merge dev to main (v2.7.0) #1782
Conversation
…ps with zenstack (#1765)
…ring optimistic update (#1775)
Signed-off-by: Théo Frison <[email protected]> Co-authored-by: ymc9 <[email protected]>
…njected into proper hierarchy (#1776)
📝 Walkthrough

The changes in this pull request involve updates to multiple files, primarily focusing on version increments, enhancements to OpenAPI specification generation, and modifications to test cases. The version of the JetBrains plugin is updated from "2.6.2" to "2.7.0".
Caution
Inline review comments failed to post
🛑 Comments failed to post (62)
packages/plugins/trpc/tests/projects/nuxt-trpc-v10/nuxt.config.ts (1)
4-4: 🛠️ Refactor suggestion
Consider environment-based configuration for devtools.
Enabling devtools is beneficial for development, but it's generally recommended to disable them in production environments for security and performance reasons.
Consider using an environment variable to conditionally enable devtools. For example:
```diff
- devtools: { enabled: true },
+ devtools: { enabled: process.env.NODE_ENV !== 'production' },
```

This way, devtools will be automatically disabled in production builds.
📝 Committable suggestion

```ts
devtools: { enabled: process.env.NODE_ENV !== 'production' },
```
packages/plugins/trpc/tests/projects/nuxt-trpc-v11/nuxt.config.ts (2)
1-8: 🛠️ Refactor suggestion
Consider additional Nuxt.js configuration options.
The current configuration covers basic setup for compatibility, devtools, and tRPC integration. However, depending on the project requirements, you might want to consider adding more configuration options such as:
- Modules: If you're using any Nuxt modules, they should be added to the configuration.
- Runtime config: For environment-specific variables.
- Server options: If you need to customize server behavior.
- Router options: If you need to modify the default routing behavior.
Here's an example of how you might expand the configuration:
```ts
export default defineNuxtConfig({
    compatibilityDate: '2024-04-03',
    devtools: { enabled: process.env.NODE_ENV === 'development' },
    build: {
        transpile: ['trpc-nuxt'],
    },
    modules: [
        // Add any required modules here
    ],
    runtimeConfig: {
        // Add any runtime configuration here
    },
    // Add any other necessary configuration options
});
```

Consider expanding the configuration based on your project's specific needs.
4-4: 🛠️ Refactor suggestion
Consider conditionally enabling devtools based on the environment.
While enabling devtools is beneficial for development, it's generally not recommended for production environments. Consider modifying this configuration to conditionally enable devtools based on the current environment.
Here's a suggested modification:
```diff
- devtools: { enabled: true },
+ devtools: { enabled: process.env.NODE_ENV === 'development' },
```

This change ensures that devtools are only enabled in the development environment.
📝 Committable suggestion

```ts
devtools: { enabled: process.env.NODE_ENV === 'development' },
```
packages/plugins/trpc/res/client/v10/nuxt.ts (1)
10-12: 💡 Codebase verification
ClientType is not defined or imported in the file.

The `ClientType` used in the type casting is not defined or imported in this file. Please ensure that `ClientType` is properly imported from the relevant module.

🔗 Analysis chain
LGTM with a minor query: Implementation looks good, but clarify ClientType.

The implementation effectively wraps the original function while enhancing type information. The use of `DeepOverrideAtPath` allows for precise type modifications. However, there's an undefined `ClientType` being used. Could you please clarify where `ClientType` is defined or imported from?

To verify the `ClientType` usage, we can run the following script:

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check for ClientType definition or import

# Test 1: Search for ClientType import
echo "Searching for ClientType import:"
rg --type typescript "import.*ClientType" ./packages/plugins/trpc

# Test 2: Search for ClientType type definition
echo "Searching for ClientType type definition:"
rg --type typescript "type\s+ClientType" ./packages/plugins/trpc

# Test 3: Search for ClientType interface definition
echo "Searching for ClientType interface definition:"
rg --type typescript "interface\s+ClientType" ./packages/plugins/trpc
```

Length of output: 582
Script:
```bash
#!/bin/bash
# Description: Check for ClientType definition or import without using --type typescript

# Test 1: Search for ClientType import
echo "Searching for ClientType import:"
rg "import.*ClientType" ./packages/plugins/trpc

# Test 2: Search for ClientType type definition
echo "Searching for ClientType type definition:"
rg "type\s+ClientType" ./packages/plugins/trpc

# Test 3: Search for ClientType interface definition
echo "Searching for ClientType interface definition:"
rg "interface\s+ClientType" ./packages/plugins/trpc
```

Length of output: 4164
packages/plugins/trpc/res/client/v11/nuxt.ts (1)
11-11: 💡 Codebase verification
Clarify the `ClientType<TRouter>` type.

The `ClientType<TRouter>` type is not imported or defined in `nuxt.ts`. Please ensure that it is properly imported from its source.

🔗 Analysis chain
Clarify the `ClientType<TRouter>` type.

The type assertion uses `ClientType<TRouter>`, but this type is not imported or defined in the visible code. Please clarify where this type comes from and consider importing it if it's from an external module.

To verify the existence and usage of `ClientType`, you can run the following script. If `ClientType` is defined in another file, consider adding an import statement:

```ts
import type { ClientType } from './path-to-file';
```

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for the definition or import of ClientType
rg --type typescript "type\s+ClientType|import.*ClientType" -g '!**/node_modules/**'
```

Length of output: 121
Script:
```bash
#!/bin/bash
# Search for the definition or import of ClientType using correct file types
rg --type ts --type tsx "type\s+ClientType|import.*ClientType" -g '!**/node_modules/**'
```

Length of output: 117
Script:
```bash
#!/bin/bash
# Search for the definition or import of ClientType in .ts and .tsx files
rg "type\s+ClientType|import.*ClientType" -g '*.ts' -g '*.tsx' -g '!**/node_modules/**'
```

Length of output: 1385
script/test-scaffold.ts (1)
22-22: 💡 Codebase verification
Inconsistent Prisma Version References Found
Multiple Prisma versions are referenced across the project. Please ensure all Prisma dependencies are updated to version `5.21.x` for consistency:
packages/schema/package.json
packages/plugins/trpc/tests/projects/nuxt-trpc-v11/package.json
packages/plugins/trpc/tests/projects/nuxt-trpc-v10/package.json
packages/plugins/trpc/tests/projects/t3-trpc-v11/package.json
packages/plugins/trpc/tests/projects/t3-trpc-v10/package.json
tests/integration/test-run/package.json
tests/integration/tests/frameworks/nextjs/test-project/package.json
tests/integration/tests/frameworks/trpc/test-project/package.json
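To spot mismatches quickly, a small Node script along these lines could scan the files listed above (a sketch only; the file list is abbreviated and the dependency keys are assumptions about where Prisma is declared):

```ts
// Hypothetical helper: print the Prisma version specifier in each package.json.
import { readFileSync } from 'fs';

const files = [
    'packages/schema/package.json',
    'tests/integration/test-run/package.json',
    // ...add the remaining package.json files listed above
];

for (const file of files) {
    const pkg = JSON.parse(readFileSync(file, 'utf-8'));
    const version = pkg.dependencies?.prisma ?? pkg.devDependencies?.prisma;
    if (version) {
        console.log(`${file}: prisma@${version}`);
    }
}
```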
🔗 Analysis chain
Version update for Prisma packages
The versions of `prisma` and `@prisma/client` have been updated from `5.20.x` to `5.21.x`. This change keeps the test scaffold up-to-date with the latest minor version of Prisma.

To ensure this change is consistent across the project, let's check for other Prisma version references:
Please review the output of this script to ensure all Prisma version references are consistent.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check for other Prisma version references in the project

# Search for Prisma version references in package.json files
echo "Searching for Prisma version references in package.json files:"
rg --type json '"prisma":\s*"[^"]*"' -g 'package*.json'

# Search for Prisma version references in other files
echo "Searching for Prisma version references in other files:"
rg '(?i)prisma.*?(?:5\.2[01]|5\.20)' --type-not json
```

Length of output: 4593
packages/plugins/trpc/tests/projects/t3-trpc-v10/src/pages/index.tsx (2)
14-20: ⚠️ Potential issue

Refactor mutation function for production readiness.

While the `mutation` function correctly implements the post creation, there are a few areas for improvement:

- The post data is hardcoded. In a production environment, this should come from user input or props.
- Logging the author's email to the console could be a privacy concern.
- There's no error handling for the async operation.

Consider refactoring the function like this:

```tsx
const mutation = async (postData: PostData) => {
    try {
        const created = await createPost({
            data: postData,
            include: { author: true },
        });
        // Handle successful creation (e.g., show a success message)
        console.log(`Post "${created.name}" created successfully`);
    } catch (error) {
        // Handle error (e.g., show an error message)
        console.error('Failed to create post:', error);
    }
};
```

Replace `PostData` with the appropriate type for your post data.
27-29: ⚠️ Potential issue

Address potential privacy concerns and improve button functionality.

Two issues to address:

- Displaying full email addresses could be a privacy concern.
- The "Create post" button uses a hardcoded mutation function.

Consider these improvements:

- Instead of displaying full email addresses, show only a portion or use a username: `{post.name} by {post.author.email.split('@')[0]}`
- Modify the button to accept user input for creating posts:

```tsx
<button onClick={() => mutation({
    name: 'User input name',
    published: true,
    authorId: 'User input or context-based ID',
})}>Create post</button>
```

Implement a form or modal to collect user input for creating posts.
Also applies to: 34-35
packages/plugins/trpc/tests/projects/nuxt-trpc-v10/app.vue (4)
4-11: 🛠️ Refactor suggestion
Consider using `onMounted` for initial data fetching.

The immediate API call on component creation might affect the initial load performance. Consider moving this call into an `onMounted` hook for better control over when it's executed.

Here's a suggested refactor:

```diff
+import { onMounted } from 'vue';

 const { $client } = useNuxtApp();

-$client.post.findFirst
-    .query({
-        where: { id: '1' },
-        include: { author: true },
-    })
-    .then((post) => {
-        console.log('Author:', post?.author.email);
-    });
+onMounted(() => {
+    $client.post.findFirst
+        .query({
+            where: { id: '1' },
+            include: { author: true },
+        })
+        .then((post) => {
+            console.log('Author:', post?.author.email);
+        })
+        .catch((error) => {
+            console.error('Error fetching post:', error);
+        });
+});
```

Also, consider adding type information for better type safety:

```ts
type Post = {
    id: string;
    author: {
        email: string;
    };
    // Add other relevant fields
};

// Then in the query:
.then((post: Post | null) => {
    console.log('Author:', post?.author.email);
})
```
13-15: 🛠️ Refactor suggestion
Add error handling to the `useQuery` call.

The `useQuery` call lacks error handling. Consider adding error handling to improve the robustness of your component.

Here's a suggested improvement:

```diff
-const { data: posts } = $client.post.findMany.useQuery({
+const { data: posts, error: postsError } = $client.post.findMany.useQuery({
     include: { author: true },
 });

+if (postsError.value) {
+    console.error('Error fetching posts:', postsError.value);
+    // Handle the error appropriately (e.g., show an error message to the user)
+}
```

Also, consider adding type information:

```ts
type Post = {
    id: string;
    title: string;
    author: {
        email: string;
    };
    // Add other relevant fields
};

const { data: posts, error: postsError } = $client.post.findMany.useQuery<Post[]>({
    include: { author: true },
});
```
21-41: 🛠️ Refactor suggestion
Refactor `onCreate` function to reduce duplication.

The `onCreate` function performs two similar create operations. Consider refactoring to reduce code duplication and improve maintainability.

Here's a suggested refactor:

```ts
const createPost = async (client: typeof $client | typeof mutate) => {
    const result = await client({
        data: {
            title: 'New Post',
            content: 'This is a new post',
            author: { connect: { id: '1' } },
        },
        include: { author: true },
    });
    console.log('Created by:', result.author.email);
    return result;
};

const onCreate = async () => {
    await createPost($client.post.create.mutate);
    await createPost(mutate);
};
```

This refactoring reduces duplication and makes the code more maintainable. Also, consider adding error handling:

```ts
const onCreate = async () => {
    try {
        await createPost($client.post.create.mutate);
        await createPost(mutate);
    } catch (error) {
        console.error('Error creating post:', error);
        // Handle the error appropriately (e.g., show an error message to the user)
    }
};
```
44-52: 🛠️ Refactor suggestion
Enhance template with loading and error states

The current template doesn't account for loading states or potential errors. Consider adding these to improve the user experience.

Here's a suggested improvement:

```vue
<template>
    <h1>Home</h1>
    <div v-if="postsError">
        <p>Error loading posts: {{ postsError.message }}</p>
    </div>
    <div v-else>
        <p v-if="!posts">Loading posts...</p>
        <ul v-else-if="posts.length">
            <li v-for="post in posts" :key="post.id">{{ post.title }} by {{ post.author.email }}</li>
        </ul>
        <p v-else>No posts found.</p>
        <button @click="onCreate" :disabled="isCreating">
            {{ isCreating ? 'Creating...' : 'Create Post' }}
        </button>
    </div>
</template>
```

This template now handles loading and error states, and also disables the "Create Post" button while a post is being created. You'll need to add `isCreating` and `postsError` to your component's state:

```ts
const isCreating = ref(false);

const onCreate = async () => {
    isCreating.value = true;
    try {
        await createPost($client.post.create.mutate);
        await createPost(mutate);
    } catch (error) {
        console.error('Error creating post:', error);
        // Handle the error appropriately (e.g., show an error message to the user)
    } finally {
        isCreating.value = false;
    }
};
```

tests/regression/tests/issue-1770.test.ts (1)
3-49: 🛠️ Refactor suggestion
Enhance test coverage with additional scenarios.
While the current test case provides a good starting point for regression testing issue 1770, consider adding more test cases to improve coverage and robustness.
Some suggestions for additional test cases:
- Test creating and querying nested relationships (e.g., User -> OrgUser -> Organization).
- Verify the behavior of the `@@allow` rule in the BaseAuth model.
- Test the `@@delegate` behavior in the Resource model.
- Add edge cases, such as querying with empty or invalid data.
These additional tests will help ensure that all aspects of the fix for issue 1770 are properly validated.
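As a rough sketch of the first suggestion, a nested-relationship test could look like the following. It assumes the `loadSchema` helper from `@zenstackhq/testtools` (used elsewhere in this repo's tests) returns an `enhance` factory, and it uses illustrative model and field names rather than the actual schema from issue 1770:

```ts
import { loadSchema } from '@zenstackhq/testtools';

it('queries nested relationships', async () => {
    // Illustrative schema; the real regression schema has more models and rules.
    const { enhance } = await loadSchema(`
        model Organization {
            id String @id @default(cuid())
            members OrgUser[]
            @@allow('all', true)
        }

        model OrgUser {
            id String @id @default(cuid())
            org Organization @relation(fields: [orgId], references: [id])
            orgId String
            @@allow('all', true)
        }
    `);

    const db = enhance();
    const org = await db.organization.create({
        data: { members: { create: [{}] } },
        include: { members: true },
    });
    expect(org.members).toHaveLength(1);
});
```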
packages/plugins/trpc/tests/nuxt.test.ts (2)
18-29: 🛠️ Refactor suggestion
Consider enhancing the test case for tRPC v10.
While the test case covers the basic setup and build process, there are a few areas for improvement:
- The use of relative paths might make the test fragile. Consider using path.resolve() for more robust path handling.
- The test lacks assertions to validate the expected outcome. Add assertions to check if the build was successful or if certain files were generated.
- Consider adding error handling for the run() calls to provide more informative test failures.
Here's a suggested improvement:
```ts
it('project test trpc v10', () => {
    const ver = require(path.resolve(__dirname, '../package.json')).version;
    const projectDir = path.resolve(__dirname, './projects/nuxt-trpc-v10');
    process.chdir(projectDir);

    const deps = ['zenstackhq-language', 'zenstackhq-runtime', 'zenstackhq-sdk', 'zenstack'];
    for (const dep of deps) {
        const depPath = path.resolve(__dirname, '../../../../.build/', `${dep}-${ver}.tgz`);
        expect(() => run(`npm install ${depPath}`)).not.toThrow();
    }

    expect(() => run('npx zenstack generate')).not.toThrow();
    expect(() => run('npm run build')).not.toThrow();

    // Add assertions to check for expected output files
    expect(fs.existsSync(path.join(projectDir, 'some-expected-output-file'))).toBeTruthy();
});
```
31-42: 🛠️ Refactor suggestion
Refactor to reduce duplication between test cases.
The test cases for tRPC v10 and v11 are nearly identical, leading to code duplication. Consider refactoring to improve maintainability and readability:
- Extract the common logic into a separate function.
- Use parameterized tests to run the same test for different versions.
Here's a suggested refactoring:
```ts
function runTRPCTest(version: string) {
    const ver = require(path.resolve(__dirname, '../package.json')).version;
    const projectDir = path.resolve(__dirname, `./projects/nuxt-trpc-${version}`);
    process.chdir(projectDir);

    const deps = ['zenstackhq-language', 'zenstackhq-runtime', 'zenstackhq-sdk', 'zenstack'];
    for (const dep of deps) {
        const depPath = path.resolve(__dirname, '../../../../.build/', `${dep}-${ver}.tgz`);
        expect(() => run(`npm install ${depPath}`)).not.toThrow();
    }

    expect(() => run('npx zenstack generate')).not.toThrow();
    expect(() => run('npm run build')).not.toThrow();

    // Add assertions to check for expected output files
    expect(fs.existsSync(path.join(projectDir, 'some-expected-output-file'))).toBeTruthy();
}

it.each(['v10', 'v11'])('project test trpc %s', (version) => {
    runTRPCTest(version);
});
```

This refactoring reduces duplication, improves maintainability, and makes it easier to add tests for new versions in the future.
packages/plugins/trpc/tests/projects/t3-trpc-v11/src/app/_components/post.tsx (1)
24-24: 💡 Codebase verification
🛠️ Refactor suggestion
Refine the debug statement and remove unnecessary optional chaining.
The verification confirms that `latestPost1` is always defined due to the usage of `useSuspenseQuery`, making the optional chaining (`?.`) redundant. Additionally, `console.log` statements should be avoided in production code to maintain performance and cleanliness.

Recommendations:

Remove the optional chaining:

```ts
console.log(latestPost1.author.email);
```

Eliminate or conditionally include `console.log` statements based on the environment:

```ts
if (process.env.NODE_ENV !== 'production') {
    console.log('Author email:', latestPost1.author.email);
}
```

🔗 Analysis chain
Consider refining the debug statement and error handling.
The addition of optional chaining (`?.`) improves code safety by preventing potential runtime errors if `latestPost1` is null or undefined. However, there are a few points to consider:

- Since `useSuspenseQuery` is used, `latestPost1` should always be defined, potentially making the optional chaining unnecessary.
- Console.log statements are generally not recommended in production code.

Consider refactoring this line as follows:

```ts
if (process.env.NODE_ENV !== 'production') {
    console.assert(latestPost1, 'latestPost1 is undefined');
    console.log('Author email:', latestPost1.author.email);
}
```

This approach:
- Removes the console.log in production builds
- Uses `console.assert` to catch unexpected `undefined` values during development
- Removes the optional chaining, as it should be unnecessary with `useSuspenseQuery`

To ensure `latestPost1` is always defined, we can check its usage elsewhere in the file:

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check usage of latestPost1 in the file
rg --type typescript 'latestPost1' packages/plugins/trpc/tests/projects/t3-trpc-v11/src/app/_components/post.tsx
```

Length of output: 147
Script:
```bash
#!/bin/bash
# Description: Check usage of latestPost1 in the file
rg --type tsx 'latestPost1' packages/plugins/trpc/tests/projects/t3-trpc-v11/src/app/_components/post.tsx
```

Length of output: 133
Script:
```bash
#!/bin/bash
# Description: Check usage of latestPost1 in the file
rg 'latestPost1' packages/plugins/trpc/tests/projects/t3-trpc-v11/src/app/_components/post.tsx
```

Length of output: 201
packages/ide/jetbrains/build.gradle.kts (1)
12-12: 💡 Codebase verification
Changelog Update Required
The changelog does not include an entry for version 2.7.0. Please update the `CHANGELOG.md` to include the new version entry.

🔗 Analysis chain
Version update looks good.
The version increment from 2.6.2 to 2.7.0 is consistent with the PR title and follows semantic versioning conventions. This indicates a minor release with new features but no breaking changes.
To ensure the changelog is up-to-date with this version change, please run the following script:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify that the changelog includes an entry for version 2.7.0

# Test: Check if the changelog contains an entry for version 2.7.0
# Expect: An entry for version 2.7.0 should be present
if grep -q "\[2\.7\.0\]" CHANGELOG.md; then
    echo "Changelog entry for version 2.7.0 found."
else
    echo "Warning: Changelog entry for version 2.7.0 not found. Please update the changelog."
fi
```

Length of output: 211
tests/regression/tests/issue-1745.test.ts (1)
60-63: 💡 Codebase verification
Clarify or Remove the Redundant 'type' Field in the Buyer Model
The `type` field in the Buyer model appears to be redundant since the Company model already includes a `buyerType` field. No other instances of `type: String` were found in the codebase, suggesting that the `type` field might be unnecessary and could lead to confusion.
- File: tests/regression/tests/issue-1745.test.ts
- Lines: 60-63
🔗 Analysis chain
Clarify the purpose of the 'type' field in the Buyer model.
The Buyer model extends Company, which already has a 'buyerType' field. The additional 'type' field in Buyer seems redundant and might lead to confusion. Consider removing it or clarifying its purpose if it's different from 'buyerType'.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if 'type' field is used elsewhere in the codebase
rg --type typescript 'type\s*:\s*String' -g '!**/issue-1745.test.ts'
```

Length of output: 105
Script:
```bash
#!/bin/bash
# Correctly search for 'type: String' in TypeScript files excluding issue-1745.test.ts
rg 'type\s*:\s*String' -g '*.ts' -g '*.tsx' -g '!tests/regression/tests/issue-1745.test.ts'
```

Length of output: 263
packages/plugins/trpc/tests/projects/nuxt-trpc-v10/schema.zmodel (1)
26-26:
⚠️ Potential issue

Reconsider the maximum length constraint for email addresses

The maximum length of `32` characters for the email field may be too restrictive: valid email addresses can be up to 254 characters long, so a longer limit is safer.

packages/plugins/trpc/res/client/v11/utils.ts (1)
1-1:
⚠️ Potential issue

Fix invalid syntax or add missing import statement at line 1.

The line `../v10/utils.ts` is not valid TypeScript syntax. If you intend to import utilities from `'../v10/utils.ts'`, please use a proper import statement, such as:

```ts
import * as utils from '../v10/utils';
```

If this line was added unintentionally, consider removing it to prevent syntax errors.
Apply this diff to correct the issue:
```diff
-../v10/utils.ts
+// Removed invalid line or added proper import statement if needed
```

packages/server/src/hono/handler.ts (4)
31-31:
⚠️ Potential issue

Potential off-by-one error in path extraction

The expression `ctx.req.path.substring(ctx.req.routePath.length - 1)` may incorrectly extract the path due to an off-by-one error. Please verify that subtracting 1 is correct and that it correctly extracts the desired path segment.
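For intuition, here is a worked example with an assumed wildcard route pattern (the actual registered pattern isn't shown in this diff):

```ts
// Assumed values for illustration only.
const routePath = '/api/rpc/*';         // hypothetical registered Hono route pattern
const path = '/api/rpc/user/findMany';  // actual request path

// `routePath.length - 1` drops the trailing '*' from the prefix length,
// so everything after '/api/rpc/' remains:
path.substring(routePath.length - 1);   // -> 'user/findMany' (no leading '/')

// If downstream code expects a leading '/', the offset would need to differ:
path.substring(routePath.length - 2);   // -> '/user/findMany'
```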
39-43:
⚠️ Potential issue

Handle JSON parsing errors appropriately

Currently, if parsing the JSON body fails, the error is silently ignored (`// noop`). This could lead to issues if `requestBody` is expected to be used later. Consider handling the error by returning a `400 Bad Request` response to indicate invalid JSON.

Suggested change:

```diff
 } catch {
-    // noop
+    return ctx.json({ message: 'Invalid JSON body' }, 400);
 }
```

📝 Committable suggestion

```ts
try {
    requestBody = await ctx.req.json();
} catch {
    return ctx.json({ message: 'Invalid JSON body' }, 400);
}
```
57-59:
⚠️ Potential issue

Avoid returning detailed error messages to the client

Returning the error message `${err}` in the response may expose sensitive information. It's best practice to log the error internally and return a generic error message to the client.

Suggested change:

```diff
- return ctx.json({ message: `An unhandled error occurred: ${err}` }, 500);
+ console.error('An unhandled error occurred:', err);
+ return ctx.json({ message: 'Internal Server Error' }, 500);
```

📝 Committable suggestion

```ts
} catch (err) {
    console.error('An unhandled error occurred:', err);
    return ctx.json({ message: 'Internal Server Error' }, 500);
}
```
23-26:
⚠️ Potential issue

Avoid exposing internal implementation details in error responses

Returning a 500 status code with a specific error message may leak internal details. Consider returning a generic error message and using an appropriate status code, such as 503 Service Unavailable, when the Prisma client is unavailable.

Suggested change:

```diff
- return ctx.json({ message: 'unable to get prisma from request context' }, 500);
+ console.error('Prisma client is unavailable');
+ return ctx.json({ message: 'Service Unavailable' }, 503);
```

📝 Committable suggestion

```ts
const prisma = (await options.getPrisma(ctx)) as DbClientContract;
if (!prisma) {
    console.error('Prisma client is unavailable');
    return ctx.json({ message: 'Service Unavailable' }, 503);
}
```
packages/plugins/trpc/src/client-helper/nuxt.ts (1)
7-10:
⚠️ Potential issue

Missing imports for 'TRPCClientErrorLike' and 'TRPCRequestOptions'

The types `TRPCClientErrorLike` and `TRPCRequestOptions` are used in the code but have not been imported, which will lead to TypeScript errors.

To fix this, add the following import statements in the `generateRouterTypingImports` function:

```diff
 import type { MaybeRefOrGetter, UnwrapRef } from 'vue';
 import type { AsyncData, AsyncDataOptions } from 'nuxt/app';
 import type { KeysOf, PickFrom } from './utils';
+import type { TRPCClientErrorLike, TRPCRequestOptions } from '@trpc/client';
```

📝 Committable suggestion

```ts
`import type { MaybeRefOrGetter, UnwrapRef } from 'vue';`,
`import type { AsyncData, AsyncDataOptions } from 'nuxt/app';`,
`import type { KeysOf, PickFrom } from './utils';`,
`import type { TRPCClientErrorLike, TRPCRequestOptions } from '@trpc/client';`,
]);
```
packages/plugins/trpc/src/client-helper/react.ts (2)
83-84:
⚠️ Potential issue

Define or import 'Context' to prevent undefined type references in mutation options.

The `Context` type is referenced in the mutation definitions but is neither defined nor imported. This will cause type errors where `Context` is used in `UseTRPCMutationOptions` and `UseTRPCMutationResult`.

You can resolve this by importing or defining the `Context` type. If `Context` is defined elsewhere, import it:

```diff
+import type { Context } from '../../path/to/your/context'; // Update the import path accordingly
```

If the context is not needed for your mutation options, you can remove it from the type definitions:

```diff
-    ${resultType},
-    Context
+    ${resultType}
```

And adjust the subsequent code accordingly.

Also applies to: 87-87
28-29:
⚠️ Potential issue

Import 'TRPCClientErrorLike' and 'AppRouter' to resolve reference errors.

The types `TRPCClientErrorLike` and `AppRouter` are used in defining `errorType`, but they are not imported or defined in this file. This will result in reference errors during type checking.

Apply the following diff to import the missing types:

```diff
+import type { TRPCClientErrorLike } from '@trpc/client';
+import type { AppRouter } from '../../path/to/your/router'; // Update the import path accordingly
```

Committable suggestion was skipped due to low confidence.
packages/plugins/trpc/src/utils.ts (6)
27-32: 🛠️ Refactor suggestion
⚠️ Potential issue

Avoid casting 'input' to 'any'; resolve TypeScript type errors properly

In the query procedure, `input` is cast to `any` to circumvent a TypeScript compiler error. Casting to `any` can undermine type safety and may lead to runtime errors. Consider refining the type definitions or using more specific type assertions to address the compiler error without resorting to `any`.
34-39: 🛠️ Refactor suggestion
⚠️ Potential issue

Avoid casting 'input' to 'any' in mutation procedures

Similarly, in the mutation procedure, avoid casting `input` to `any`. Ensuring proper type alignment will maintain type safety and prevent potential issues. Adjust the type definitions or use appropriate type guards to handle the compiler error.
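As a rough sketch of the alternative (the `FindManyInput` type below is a placeholder, not the plugin's actual generated type), a narrow assertion or a runtime guard preserves more safety than `any`:

```ts
// Placeholder for the input type inferred from the generated Zod schema.
type FindManyInput = { where?: Record<string, unknown>; take?: number };

// Option 1: assert to the specific expected shape instead of `any`.
function handleQuery(input: unknown) {
    const typed = input as FindManyInput; // still a cast, but far narrower than `any`
    return typed;
}

// Option 2: a runtime guard that actually validates before use.
function isFindManyInput(value: unknown): value is FindManyInput {
    return typeof value === 'object' && value !== null;
}
```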
142-143: 🛠️ Refactor suggestion
Review regular expressions for accuracy and maintainability
The regular expressions used (`modelAttributeRegex`, `attributeNameRegex`, `attributeArgsRegex`) may not cover all necessary cases and could be simplified or improved for readability. Complex regex patterns can be hard to maintain and error-prone. Consider using a parser or more precise patterns to enhance reliability.
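As one illustration of a more readable style (a hypothetical pattern, not the plugin's actual regex), named capture groups document what each part matches:

```ts
// Hypothetical: match a model-level doc attribute like '@@Gen.model(hide: true)'.
const exampleAttributeRegex = /@@(?<name>Gen\.\w+)\((?<args>[^)]*)\)/;

const match = '@@Gen.model(hide: true)'.match(exampleAttributeRegex);
if (match?.groups) {
    console.log(match.groups.name); // 'Gen.model'
    console.log(match.groups.args); // 'hide: true'
}
```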
141-172: 🛠️ Refactor suggestion
⚠️ Potential issue

Add error handling when parsing model documentation

The `resolveModelsComments` function may throw exceptions if the model documentation does not match the expected patterns or if `JSON.parse` fails due to invalid JSON. To enhance reliability, add error handling mechanisms to gracefully handle parsing errors and unexpected formats.

For example, you can wrap the parsing logic in try-catch blocks:

```diff
+ try {
     const attribute = model.documentation?.match(modelAttributeRegex)?.[0];
     // existing code...
+ } catch (error) {
+     // Handle parsing error or log a warning
+ }
```

Committable suggestion was skipped due to low confidence.
112-138: 🛠️ Refactor suggestion
⚠️ Potential issue

Ensure 'procType' is defined for all operation names

Similarly, in the `getProcedureTypeByOpName` function, if `opName` does not match any known cases, `procType` will be `undefined`. This may cause issues where `procType` is expected to be either `'query'` or `'mutation'`. Adding a default case that handles unexpected `opName` values will improve robustness.

```diff
+ default:
+     throw new Error(`Unsupported operation name: ${opName}`);
```

Committable suggestion was skipped due to low confidence.
57-110: 🛠️ Refactor suggestion
⚠️ Potential issue

Handle undefined 'inputType' for unsupported operation names

In the `getInputSchemaByOpName` function, if `opName` does not match any case in the switch statement, `inputType` will be `undefined`. This could lead to runtime errors when `inputType` is used. Consider adding a default case to handle unsupported operations or throw an explicit error to alert developers.

```diff
+ default:
+     throw new Error(`Unsupported operation name: ${opName}`);
```

📝 Committable suggestion

```ts
export const getInputSchemaByOpName = (opName: string, modelName: string) => {
    let inputType;
    const capModelName = upperCaseFirst(modelName);
    switch (opName) {
        case 'findUnique':
            inputType = `$Schema.${capModelName}InputSchema.findUnique`;
            break;
        case 'findFirst':
            inputType = `$Schema.${capModelName}InputSchema.findFirst.optional()`;
            break;
        case 'findMany':
            inputType = `$Schema.${capModelName}InputSchema.findMany.optional()`;
            break;
        case 'findRaw':
            inputType = `$Schema.${capModelName}InputSchema.findRawObject`;
            break;
        case 'createOne':
            inputType = `$Schema.${capModelName}InputSchema.create`;
            break;
        case 'createMany':
            inputType = `$Schema.${capModelName}InputSchema.createMany.optional()`;
            break;
        case 'deleteOne':
            inputType = `$Schema.${capModelName}InputSchema.delete`;
            break;
        case 'updateOne':
            inputType = `$Schema.${capModelName}InputSchema.update`;
            break;
        case 'deleteMany':
            inputType = `$Schema.${capModelName}InputSchema.deleteMany.optional()`;
            break;
        case 'updateMany':
            inputType = `$Schema.${capModelName}InputSchema.updateMany`;
            break;
        case 'upsertOne':
            inputType = `$Schema.${capModelName}InputSchema.upsert`;
            break;
        case 'aggregate':
            inputType = `$Schema.${capModelName}InputSchema.aggregate`;
            break;
        case 'aggregateRaw':
            inputType = `$Schema.${capModelName}InputSchema.aggregateRawObject`;
            break;
        case 'groupBy':
            inputType = `$Schema.${capModelName}InputSchema.groupBy`;
            break;
        case 'count':
            inputType = `$Schema.${capModelName}InputSchema.count.optional()`;
            break;
        default:
            throw new Error(`Unsupported operation name: ${opName}`);
    }
    return inputType;
};
```
packages/server/tests/adapter/hono.test.ts (6)
69-78: ⚠️ Potential issue

Handle potential errors in groupBy operation.

When performing a `groupBy` operation without providing necessary aggregations or groupings, it could lead to runtime errors depending on the Prisma version. Ensure that the parameters provided are sufficient and handle any potential exceptions.

Consider adding error handling or validations to ensure the `groupBy` operation is called correctly.
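A minimal sketch of such a validation (the argument shape is simplified; Prisma's real `groupBy` arguments are richer):

```ts
// Simplified stand-in for Prisma groupBy arguments.
type GroupByArgs = { by?: string[]; _count?: unknown; _sum?: unknown };

function assertValidGroupBy(args: GroupByArgs): void {
    if (!args.by || args.by.length === 0) {
        throw new Error('groupBy requires at least one field in "by"');
    }
}

// Usage before forwarding the request:
assertValidGroupBy({ by: ['published'], _count: { _all: true } });
```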
169-169: ⚠️ Potential issue

Duplicate `makeUrl` call in DELETE request.

The `makeUrl` function is called twice when constructing the URL for the DELETE request, resulting in a malformed URL.

Apply this diff to correct the URL construction:

```diff
- r = await handler(makeRequest('DELETE', makeUrl(makeUrl('/api/user/user1'))));
+ r = await handler(makeRequest('DELETE', makeUrl('/api/user/user1')));
```

📝 Committable suggestion

```ts
r = await handler(makeRequest('DELETE', makeUrl('/api/user/user1')));
```
Consider removing unused include parameter in user creation.
In the
makeRequest
call for creating a user, you include{ include: { posts: true } }
in the request body. Since this is acreate
operation, and the response does not seem to utilize the included posts, you might consider removing theinclude
parameter if it's not necessary.Apply this diff to remove the unnecessary include:
makeRequest('POST', '/api/user/create', { - include: { posts: true }, data: { id: 'user1', email: '[email protected]', posts: { create: [ { title: 'post1', published: true, viewCount: 1 }, { title: 'post2', published: false, viewCount: 2 }, ], }, }, })📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.r = await handler( makeRequest('POST', '/api/user/create', { data: { id: 'user1', email: '[email protected]', posts: { create: [ { title: 'post1', published: true, viewCount: 1 }, { title: 'post2', published: false, viewCount: 2 }, ], }, }, }) );
117-126: ⚠️ Potential issue

Potential misconfiguration of REST handler endpoint.

In the `createHonoHandler` configuration, the `handler` is set up with `Rest({ endpoint: 'http://localhost/api' })`. This endpoint should typically be a relative path unless there is a specific reason to include the full URL.

Consider changing the endpoint to a relative path to avoid potential issues with endpoint resolution:

```ts
handler: Rest({ endpoint: '/api' }),
```

📝 Committable suggestion

```ts
const { prisma, modelMeta, zodSchemas } = await loadSchema(schema);
const handler = await createHonoApp(
    createHonoHandler({
        getPrisma: () => prisma,
        handler: Rest({ endpoint: '/api' }),
        modelMeta,
        zodSchemas,
    })
);
```
180-183: ⚠️ Potential issue

Add error handling for JSON parsing in `unmarshal`.

Parsing response bodies with `JSON.parse` can throw an error if the response is not valid JSON. To make tests more robust, consider adding try-catch blocks to handle parsing errors gracefully.

Update the `unmarshal` function to handle parsing exceptions:

```diff
 async function unmarshal(r: Response, useSuperJson = false) {
     const text = await r.text();
-    return (useSuperJson ? superjson.parse(text) : JSON.parse(text)) as any;
+    try {
+        return (useSuperJson ? superjson.parse(text) : JSON.parse(text)) as any;
+    } catch (e) {
+        throw new Error(`Failed to parse JSON response: ${e.message}\nResponse text: ${text}`);
+    }
 }
```

📝 Committable suggestion

```ts
async function unmarshal(r: Response, useSuperJson = false) {
    const text = await r.text();
    try {
        return (useSuperJson ? superjson.parse(text) : JSON.parse(text)) as any;
    } catch (e) {
        throw new Error(`Failed to parse JSON response: ${e.message}\nResponse text: ${text}`);
    }
}
```
85-112: 🛠️ Refactor suggestion
Duplicate test case with custom load path.
The test `custom load path` appears to be creating a user similar to the previous test. If the intention is to test the custom output path (`./zen`), ensure that the focus is on validating the path and related configurations rather than repeating user creation logic.

Consider refactoring the test to emphasize the custom load path's effect, possibly by asserting that the `modelMeta` and `zodSchemas` are loaded correctly from the custom path.

Example:

```diff
 it('custom load path', async () => {
     const { prisma, projectDir } = await loadSchema(schema, { output: './zen' });
     const handler = await createHonoApp(
         createHonoHandler({
             getPrisma: () => prisma,
-            modelMeta: require(path.join(projectDir, './zen/model-meta')).default,
-            zodSchemas: require(path.join(projectDir, './zen/zod')),
+            modelMeta: require(path.join(projectDir, 'zen', 'model-meta')).default,
+            zodSchemas: require(path.join(projectDir, 'zen', 'zod')),
         })
     );
+    expect(modelMeta).toBeDefined();
+    expect(zodSchemas).toBeDefined();
 });
```

Committable suggestion was skipped due to low confidence.
packages/schema/src/utils/ast-utils.ts (2)
119-129: ⚠️ Potential issue

Ensure compatibility of `Array.prototype.findLast`

On line 123, `Array.prototype.findLast` is used:

```ts
clone.$inheritedFrom = delegateBases.findLast((base) => base.fields.some((f) => f.name === node.name));
```

The `findLast` method is part of ECMAScript 2023 and is supported in Node.js v18 and above. If your project needs to support earlier environments, consider an alternative implementation:

```diff
- clone.$inheritedFrom = delegateBases.findLast((base) => base.fields.some((f) => f.name === node.name));
+ const reversedBases = [...delegateBases].reverse();
+ clone.$inheritedFrom = reversedBases.find((base) => base.fields.some((f) => f.name === node.name));
```

This alternative reverses the array and uses the widely supported `find` method.
195-195: 🛠️ Refactor suggestion
Replace `console.log` with a proper logging mechanism or remove

On line 195, there is a direct `console.log` statement:

```ts
console.log('Loaded import from:', imp.path);
```

Using `console.log` in library code can clutter the console output for consumers of your library. Consider using a configurable logging utility or removing the log statement if it's not essential.

Apply this diff to address the issue:

```diff
- console.log('Loaded import from:', imp.path);
+ // Consider using a logging library or remove this log statement
```

packages/plugins/trpc/src/client-helper/index.ts (1)
53-54: ⚠️ Potential issue

Missing import for `AnyRouter` from `@trpc/server`

The type `AnyRouter` is used but not imported. This will lead to TypeScript errors.

Please add the following import statement at the top of your file:

```ts
import type { AnyRouter } from '@trpc/server';
```

Also applies to: 120-121
packages/testtools/src/schema.ts (2)
267-271: 🛠️ Refactor suggestion
Refactor nested ternary operators for improved readability
The nested ternary operators used in assigning `prismaLoadPath` can make the code harder to read and maintain. Consider refactoring the logic to enhance clarity.

Proposed refactor:

```ts
let prismaLoadPath;
if (options?.prismaLoadPath) {
    if (path.isAbsolute(options.prismaLoadPath)) {
        prismaLoadPath = options.prismaLoadPath;
    } else {
        prismaLoadPath = path.join(projectDir, options.prismaLoadPath);
    }
} else {
    prismaLoadPath = path.join(projectDir, 'node_modules/.prisma/client');
}
```

Alternatively, use `path.resolve` to simplify the path handling:

```ts
const prismaLoadPath = options?.prismaLoadPath
    ? path.resolve(projectDir, options.prismaLoadPath)
    : path.join(projectDir, 'node_modules/.prisma/client');
```

This refactor improves readability and makes the code easier to understand.
272-272: ⚠️ Potential issue

Add error handling for dynamic module loading

Loading a module dynamically using a potentially user-provided path (`prismaLoadPath`) can lead to runtime errors if the module cannot be found or loaded.

Consider wrapping the `require(prismaLoadPath)` statement in a `try-catch` block to handle errors gracefully:

```ts
let prismaModule;
try {
    prismaModule = require(prismaLoadPath);
} catch (error) {
    throw new Error(`Failed to load Prisma client from path "${prismaLoadPath}": ${error.message}`);
}
```

This ensures that any issues in loading the module are caught, and a meaningful error message is provided.
packages/plugins/tanstack-query/src/runtime/common.ts (1)
459-469: 🛠️ Refactor suggestion
Enhance type safety by adding checks for all elements in `isZenStackQueryKey`

While the function checks the length and the first element of `queryKey`, consider adding type validations for the remaining elements to ensure they match the expected types. This will further prevent potential runtime errors due to unexpected types.

Apply this diff to add additional type checks:

```diff
 function isZenStackQueryKey(queryKey: readonly unknown[]): queryKey is QueryKey {
     if (queryKey.length < 5) {
         return false;
     }
     if (typeof queryKey[0] !== 'string' || queryKey[0] !== QUERY_KEY_PREFIX) {
         return false;
     }
+    if (typeof queryKey[1] !== 'string') {
+        return false;
+    }
+    if (typeof queryKey[2] !== 'string') {
+        return false;
+    }
+    // `queryKey[3]` can be any value (args), so no type check is necessary.
+    if (typeof queryKey[4] !== 'object' || queryKey[4] === null) {
+        return false;
+    }
     return true;
 }
```

📝 Committable suggestion

```ts
function isZenStackQueryKey(queryKey: readonly unknown[]): queryKey is QueryKey {
    if (queryKey.length < 5) {
        return false;
    }
    if (typeof queryKey[0] !== 'string' || queryKey[0] !== QUERY_KEY_PREFIX) {
        return false;
    }
    if (typeof queryKey[1] !== 'string') {
        return false;
    }
    if (typeof queryKey[2] !== 'string') {
        return false;
    }
    // `queryKey[3]` can be any value (args), so no type check is necessary.
    if (typeof queryKey[4] !== 'object' || queryKey[4] === null) {
        return false;
    }
    return true;
}
```
packages/plugins/trpc/src/generator.ts (2)
97-98: 🛠️ Refactor suggestion
Avoid unnecessary type casting of `generateClientHelpers`

When calling `createAppRouter`, you're casting `generateClientHelpers` to `SupportedClientHelpers[] | undefined`. Since you've already validated the contents of `generateClientHelpers`, consider ensuring it's correctly typed from the start to avoid repeated casting.

Adjust the parsing and validation to assign the correct type:

```ts
const generateClientHelpersOption = parseOptionAsStrings(options, 'generateClientHelpers', name);
let generateClientHelpers: SupportedClientHelpers[] | undefined;
if (generateClientHelpersOption) {
    if (
        !generateClientHelpersOption.every((v) =>
            AllSupportedClientHelpers.includes(v as SupportedClientHelpers)
        )
    ) {
        throw new PluginError(
            name,
            `Option "generateClientHelpers" only supports the following values: ${AllSupportedClientHelpers.map(
                (n) => '"' + n + '"'
            ).join(', ')}.`
        );
    } else {
        generateClientHelpers = generateClientHelpersOption as SupportedClientHelpers[];
    }
}
```

This ensures `generateClientHelpers` is properly typed, eliminating the need for casting in subsequent function calls.
41-50: 🛠️ Refactor suggestion
Simplify type checking for `generateClientHelpers`

In the validation of `generateClientHelpers`, the cast to `SupportedClientHelpers` inside the `includes` method is unnecessary since `AllSupportedClientHelpers` is an array of strings. Removing the cast simplifies the condition without affecting functionality.

Apply this diff to simplify the condition:

```diff
 if (
     generateClientHelpers &&
-    !generateClientHelpers.every((v) => AllSupportedClientHelpers.includes(v as SupportedClientHelpers))
+    !generateClientHelpers.every((v) => AllSupportedClientHelpers.includes(v))
 ) {
     throw new PluginError(
         name,
         `Option "generateClientHelpers" only supports the following values: ${AllSupportedClientHelpers.map(
             (n) => '"' + n + '"'
         ).join(', ')}.`
     );
 }
```

📝 Committable suggestion

```ts
if (
    generateClientHelpers &&
    !generateClientHelpers.every((v) => AllSupportedClientHelpers.includes(v))
) {
    throw new PluginError(
        name,
        `Option "generateClientHelpers" only supports the following values: ${AllSupportedClientHelpers.map(
            (n) => '"' + n + '"'
        ).join(', ')}.`
    );
}
```
packages/plugins/tanstack-query/tests/react-hooks.test.tsx (4)
4-5: 🛠️ Refactor suggestion
Consider Avoiding Disabling ESLint Rules
The ESLint rules `@typescript-eslint/ban-ts-comment` and `@typescript-eslint/no-explicit-any` have been disabled. While there are cases where this might be necessary, it's generally advisable to minimize disabling linting rules globally. Instead, consider addressing the underlying issues:

- For `no-explicit-any`: Replace `any` types with more specific type annotations to take full advantage of TypeScript's type safety.
- For `ban-ts-comment`: Investigate the need for `@ts-ignore` comments and refactor the code to eliminate the need for bypassing the compiler checks.

This will enhance code quality and maintainability.
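As a small illustration (the `UserRow` type is hypothetical), typing the test data removes the need for the `no-explicit-any` suppression:

```ts
// Hypothetical row type for the mocked API data used in these tests.
interface UserRow {
    id: string;
    name: string;
}

// Before: `const data: any[] = [];` with no-explicit-any disabled.
// After: a typed array needs no lint suppression.
const data: UserRow[] = [];
data.push({ id: '1', name: 'foo' });
```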
394-454: 🛠️ Refactor suggestion
Refactor Repetitive Test Setup Code
The test case `optimistic upsert - create` includes setup code that is similar to other test cases, such as initializing `data`, configuring `nock` for API mocking, and rendering hooks. To improve maintainability and reduce duplication, consider extracting common setup logic into reusable helper functions or test utilities. This refactoring can make your tests cleaner and easier to maintain.
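A rough sketch of such a helper, reusing the `nock` and `makeUrl` utilities these tests already have (the helper name and shape are illustrative):

```ts
import nock from 'nock';

// Hypothetical helper centralizing the repeated findMany mock setup.
// `makeUrl` is assumed to be the URL helper already defined in the test file.
function mockFindMany(model: string, data: unknown[]) {
    nock(makeUrl(model, 'findMany'))
        .get(/.*/)
        .reply(200, () => ({ data }))
        .persist();
}

// Usage inside a test:
// const data: UserRow[] = [];
// mockFindMany('User', data);
```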
456-508: 🛠️ Refactor suggestion
Refactor Repetitive Test Setup Code
The test case `optimistic upsert - update` also contains repeated setup code similar to other tests, including initializing variables, configuring `nock`, and rendering hooks. To enhance code maintainability and reduce redundancy, consider abstracting common patterns into reusable functions or fixtures, as sketched above. This will simplify your test cases and make future updates more manageable.
417-423: ⚠️ Potential issue

Mocked Upsert API Response Should Return Created Data

In the mocked `upsert` API response, the returned data is `{ data: null }`. This means the optimistic data in the cache will not be replaced with actual server data, potentially causing inconsistencies if the optimistic data differs from the server response. To more accurately simulate the API behavior and ensure the cache updates correctly, the mock should return the created user data.

Apply this diff to fix the mock response:

```diff
 nock(makeUrl('User', 'upsert'))
     .post(/.*/)
     .reply(200, () => {
         console.log('Mocking upsert response');
-        return { data: null };
+        data.push({ id: '1', name: 'foo' });
+        return { data: data[0] };
     });
```

Committable suggestion was skipped due to low confidence.
packages/plugins/tanstack-query/tests/react-hooks-v5.test.tsx (2)
535-585: ⚠️ Potential issue

Update mocked data to reflect mutation in upsert operation

In the test `'optimistic upsert - update'`, the mocked response for the `upsert` operation (lines 559-564) does not modify the `data`. Since the `update` operation should change the user's `name` to `'bar'`, the mocked response should reflect this change to accurately test the optimistic update behavior.

Modify the mocked response to update the user's name:

```diff
 nock(makeUrl('User', 'upsert'))
     .post(/.*/)
     .reply(200, () => {
         console.log('Mocking upsert response');
+        data.name = 'bar';
         return { data };
     });
```

📝 Committable suggestion

```ts
it('optimistic upsert - update', async () => {
    const { queryClient, wrapper } = createWrapper();

    const queryArgs = { where: { id: '1' } };
    const data = { id: '1', name: 'foo' };

    nock(makeUrl('User', 'findUnique', queryArgs))
        .get(/.*/)
        .reply(200, () => {
            console.log('Querying data:', JSON.stringify(data));
            return { data };
        })
        .persist();

    const { result } = renderHook(
        () => useModelQuery('User', makeUrl('User', 'findUnique'), queryArgs, { optimisticUpdate: true }),
        {
            wrapper,
        }
    );
    await waitFor(() => {
        expect(result.current.data).toMatchObject({ name: 'foo' });
    });

    nock(makeUrl('User', 'upsert'))
        .post(/.*/)
        .reply(200, () => {
            console.log('Mocking upsert response');
            data.name = 'bar';
            return { data };
        });

    const { result: mutationResult } = renderHook(
        () =>
            useModelMutation('User', 'POST', makeUrl('User', 'upsert'), modelMeta, {
                optimisticUpdate: true,
                invalidateQueries: false,
            }),
        {
            wrapper,
        }
    );

    act(() => mutationResult.current.mutate({ ...queryArgs, update: { name: 'bar' }, create: { name: 'zee' } }));

    await waitFor(() => {
        const cacheData = queryClient.getQueryData(
            getQueryKey('User', 'findUnique', queryArgs, { infinite: false, optimisticUpdate: true })
        );
        expect(cacheData).toMatchObject({ name: 'bar', $optimistic: true });
    });
});
```
475-533: ⚠️ Potential issue

Ensure mocked `upsert` response returns expected data

In the test case `'optimistic upsert - create'`, the mocked response for the `upsert` operation (lines 498-504) returns `{ data: null }`. Typically, an `upsert` operation should return the created or updated data. Returning `null` may not accurately simulate the server behavior and could affect the test's validity.

Consider updating the mocked response to return the expected data:

```diff
 nock(makeUrl('User', 'upsert'))
     .post(/.*/)
     .reply(200, () => {
         console.log('Mocking upsert response');
-        return { data: null };
+        data.push({ id: '1', name: 'foo' });
+        return { data: data[0] };
     });
```

📝 Committable suggestion

```ts
it('optimistic upsert - create', async () => {
    const { queryClient, wrapper } = createWrapper();

    const data: any[] = [];

    nock(makeUrl('User', 'findMany'))
        .get(/.*/)
        .reply(200, () => {
            console.log('Querying data:', JSON.stringify(data));
            return { data };
        })
        .persist();

    const { result } = renderHook(
        () => useModelQuery('User', makeUrl('User', 'findMany'), undefined, { optimisticUpdate: true }),
        {
            wrapper,
        }
    );
    await waitFor(() => {
        expect(result.current.data).toHaveLength(0);
    });

    nock(makeUrl('User', 'upsert'))
        .post(/.*/)
        .reply(200, () => {
            console.log('Mocking upsert response');
            data.push({ id: '1', name: 'foo' });
            return { data: data[0] };
        });

    const { result: mutationResult } = renderHook(
        () =>
            useModelMutation('User', 'POST', makeUrl('User', 'upsert'), modelMeta, {
                optimisticUpdate: true,
                invalidateQueries: false,
            }),
        {
            wrapper,
        }
    );

    act(() =>
        mutationResult.current.mutate({
            where: { id: '1' },
            create: { id: '1', name: 'foo' },
            update: { name: 'bar' },
        })
    );

    await waitFor(() => {
        const cacheData: any = queryClient.getQueryData(
            getQueryKey('User', 'findMany', undefined, { infinite: false, optimisticUpdate: true })
        );
        expect(cacheData).toHaveLength(1);
        expect(cacheData[0].$optimistic).toBe(true);
        expect(cacheData[0].id).toBeTruthy();
        expect(cacheData[0].name).toBe('foo');
    });
});
```
packages/schema/tests/schema/validation/attribute-validation.test.ts (1)
819-822: 🛠️ Refactor suggestion
Potential Naming Conflict with `User` Model

Defining a `User` model within test cases may lead to naming collisions or unintended interactions if other tests or modules also define a `User` model. Consider using a more test-specific model name to prevent potential conflicts.

packages/runtime/src/enhancements/node/delegate.ts (1)
369-370: 🛠️ Refactor suggestion
Address the TODO: Resolve Redundant Policy Rule Injections
The TODO comment indicates that both the delegate base's and the concrete model's policy rules are being injected, which is redundant. Refactoring the code to prevent this redundancy could improve performance and maintainability.
Would you like assistance in refactoring this code to eliminate the redundancy or should I open a GitHub issue to track this task?
packages/server/tests/api/rest.test.ts (1)
1936-1937: ⚠️ Potential issue

Remove `attributes` from the relationship update request

When updating relationships using the `PATCH` method to `/post/1/relationships/likes`, the JSON:API specification expects the `data` field to contain only resource identifiers (`type` and `id`). Including `attributes` is not appropriate in this context and may cause unexpected behavior.

Apply this diff to correct the request body:

```diff
 data: [
     {
         type: 'postLike',
         id: `1${idDivider}user1`,
-        attributes: { superLike: true }
     }
 ]
```

📝 Committable suggestion

```ts
data: [{ type: 'postLike', id: `1${idDivider}user1` }],
},
```
packages/server/src/api/rest/index.ts (6)
35-36: 🛠️ Refactor suggestion
Consider making `idDivider` configurable

The constant `idDivider` is set to `'_'`. If flexibility is needed for different ID formats, consider making this value configurable through the options.
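A minimal sketch of what that could look like (the options name and shape below are assumptions, not the handler's actual API):

```ts
// Hypothetical options shape; the real handler options differ.
interface RestApiHandlerOptions {
    /** Divider used to join composite ID values; defaults to '_'. */
    idDivider?: string;
}

const DEFAULT_ID_DIVIDER = '_';

function resolveIdDivider(options?: RestApiHandlerOptions): string {
    return options?.idDivider ?? DEFAULT_ID_DIVIDER;
}

// resolveIdDivider({ idDivider: ':' }) -> ':'
// resolveIdDivider()                   -> '_'
```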
1197-1210: ⚠️ Potential issue

Add validation for composite IDs in `makeIdFilter`

When handling composite IDs, ensure that `resourceId` splits into the correct number of parts matching `idFields`. Consider adding validation and error handling for mismatches to prevent runtime errors.
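A simplified sketch of such a guard, assuming `idFields` is a list of field names and `resourceId` is the divider-joined string (the real `makeIdFilter` works with richer field metadata):

```ts
function makeIdFilterChecked(idFields: string[], resourceId: string, idDivider = '_') {
    const parts = resourceId.split(idDivider);
    if (parts.length !== idFields.length) {
        throw new Error(
            `Expected ${idFields.length} ID part(s) but got ${parts.length} in "${resourceId}"`
        );
    }
    // Pair each ID field with its corresponding part of the resource ID.
    return Object.fromEntries(idFields.map((field, i) => [field, parts[i]]));
}

// makeIdFilterChecked(['postId', 'userId'], '1_user1') -> { postId: '1', userId: 'user1' }
```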
763-765: ⚠️ Potential issue

Potential issue with handling composite IDs in single relationship

When connecting a single relationship (lines 763-765), ensure that `data.data.id` correctly represents composite IDs.

Proposed fix:

```diff
 connect: {
-    [this.makeIdKey(relationInfo.idFields)]: data.data.id,
+    ...this.makeIdFilter(relationInfo.idFields, data.data.id),
 },
```

📝 Committable suggestion

```ts
connect: {
    ...this.makeIdFilter(relationInfo.idFields, data.data.id),
},
```
936-938: ⚠️ Potential issue

Potential issue with handling composite IDs in single relationship update

Ensure that `data.data.id` correctly represents composite IDs when updating a single relationship (lines 936-938).

Proposed fix:

```diff
  set: {
-     [this.makeIdKey(relationInfo.idFields)]: data.data.id,
+     ...this.makeIdFilter(relationInfo.idFields, data.data.id),
  },
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
set: {
    ...this.makeIdFilter(relationInfo.idFields, data.data.id),
},
```
755-755: ⚠️ Potential issue

Potential issue with handling composite IDs in relationship connections

In `processCreate`, when connecting relationships (line 755), `item.id` is used as the identifier. For models with composite primary keys, `item.id` may not correctly represent the composite key. Consider updating the code to handle composite IDs appropriately.

Proposed fix:

```diff
- [this.makeIdKey(relationInfo.idFields)]: item.id,
+ ...this.makeIdFilter(relationInfo.idFields, item.id),
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
...this.makeIdFilter(relationInfo.idFields, item.id),
```
928-928: ⚠️ Potential issue

Potential issue with handling composite IDs in relationship updates

When setting relationships in `processUpdate` (line 928), ensure that `item.id` correctly represents composite IDs.

Proposed fix:

```diff
- [this.makeIdKey(relationInfo.idFields)]: item.id,
+ ...this.makeIdFilter(relationInfo.idFields, item.id),
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
...this.makeIdFilter(relationInfo.idFields, item.id),
```
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (2)
packages/server/src/hono/handler.ts (2)
18-21: LGTM: Function initialization looks good. Consider adding a type annotation.

The `createHonoHandler` function is well-structured, loading assets and setting up the request handler correctly. It appropriately uses a default `RPCApiHandler` if no custom handler is provided.

Consider adding a return type annotation to the `createHonoHandler` function for improved clarity:

```diff
-export function createHonoHandler(options: HonoOptions) {
+export function createHonoHandler(options: HonoOptions): MiddlewareHandler {
```
23-35: LGTM: Initial request processing looks good. Consider enhancing error messages.

The code correctly retrieves and checks for the Prisma instance, processes URL and query parameters, and validates the path parameter. These are all crucial steps for ensuring valid requests.

Consider enhancing the error messages to provide more specific information:

```diff
- return ctx.json({ message: 'unable to get prisma from request context' }, 500);
+ return ctx.json({ message: 'Unable to get Prisma instance from request context' }, 500);

- return ctx.json({ message: 'missing path parameter' }, 400);
+ return ctx.json({ message: 'Missing or invalid path parameter' }, 400);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (3)
- packages/server/src/hono/handler.ts (1 hunks)
- packages/server/src/nuxt/handler.ts (1 hunks)
- packages/server/src/sveltekit/handler.ts (1 hunks)
🧰 Additional context used
🔇 Additional comments (2)
packages/server/src/hono/handler.ts (1)
1-16: LGTM: Import statements and interface definition are well-structured.

The import statements are concise and relevant to the functionality being implemented. The `HonoOptions` interface is well-defined, extending `AdapterBaseOptions` and including the essential `getPrisma` method for retrieving the Prisma instance.

packages/server/src/sveltekit/handler.ts (1)
65-65: LGTM: Logger addition enhances request handling capabilities.

The addition of the `logger` property to the `requestHandler` function call is a positive change. It allows for better logging during request handling, which can be crucial for debugging and monitoring in production environments.

To ensure this change is consistent with the rest of the codebase, let's verify the `HandlerOptions` interface:
…mpound id is read as a relation (#1784)
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (12)
packages/server/src/api/rest/index.ts (6)
59-59: Approved: Type updates for composite ID support.

The changes to the `RelationshipInfo` and `ModelInfo` types, replacing `idField: string` with `idFields: FieldInfo[]`, are crucial for supporting composite IDs. This is a significant improvement in flexibility.

However, this is a breaking change that will require updates in code using these types. Consider adding a comment or updating the documentation to explain the new structure and its implications for code using these types.

Also applies to: 65-65
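For readers following along, the shape of the change described above can be sketched like this (a simplified rendering for illustration, not the package's actual type definitions):

```ts
// Simplified sketch of the described type change.
interface FieldInfo {
    name: string;
    type: string;
}

// Before: a single id field name could not express a compound primary key.
interface RelationshipInfoBefore {
    type: string;
    idField: string;
}

// After: one entry per id field, so compound primary keys are representable.
interface RelationshipInfoAfter {
    type: string;
    idFields: FieldInfo[];
}

const likes: RelationshipInfoAfter = {
    type: 'postLike',
    idFields: [
        { name: 'postId', type: 'Int' },
        { name: 'userId', type: 'String' },
    ],
};
```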
726-726: Approved: Added modelMeta parameter to processCreate.

The addition of the `modelMeta: ModelMeta` parameter to the `processCreate` method is a good improvement. It allows the method to access model metadata, which is likely necessary for handling composite IDs and other model-specific operations.

As this is a breaking change, consider:

- Updating the method's documentation to explain the new parameter and its usage.
- Reviewing and updating any code that calls this method to ensure it passes the required `modelMeta` argument.
896-896: Approved: Added modelMeta parameter to processUpdate.

The addition of the `modelMeta: ModelMeta` parameter to the `processUpdate` method is a good improvement, consistent with the changes made to `processCreate`. It allows the method to access model metadata, which is necessary for handling composite IDs and other model-specific operations.

As this is a breaking change:

- Update the method's documentation to explain the new parameter and its usage.
- Review and update any code that calls this method to ensure it passes the required `modelMeta` argument.
- Consider adding a brief comment explaining why this parameter is necessary, to help future maintainers understand its importance.
1209-1222: LGTM: Improved makeIdFilter for composite ID support.

The changes to the `makeIdFilter` method are well-implemented and crucial for supporting both single and composite IDs:

- The method now correctly handles both single and multiple ID fields.
- It uses the `idDivider` constant for splitting composite IDs, ensuring consistency.
- The implementation is flexible and should work well with various ID structures.

Consider a small optimization: for the single ID case, you could directly return the object without creating an intermediate variable. For example:

```ts
if (idFields.length === 1) {
    return { [idFields[0].name]: this.coerce(idFields[0].type, resourceId) };
}
// ... rest of the method
```

This minor change could slightly improve readability and performance.
1144-1167: LGTM: New injectCompoundId method for handling composite IDs.

The addition of the `injectCompoundId` method is a crucial improvement for handling composite IDs:

- It recursively traverses the entity structure, ensuring all levels of nested relationships are properly handled.
- The method creates synthetic ID fields for models with compound IDs, which is essential for consistent ID handling throughout the API.
- The implementation is thorough and correctly handles both single and composite ID scenarios.

Consider a small optimization: you could use early returns to simplify the nested if statements. For example:

```ts
private injectCompoundId(model: string, items: unknown) {
    const typeInfo = this.typeMap[lowerCaseFirst(model)];
    if (!typeInfo) {
        return;
    }

    enumerate(items).forEach((item: any) => {
        if (!item) {
            return;
        }
        if (typeInfo.idFields.length > 1) {
            item[this.makeIdKey(typeInfo.idFields)] = this.makeCompoundId(typeInfo.idFields, item);
        }
        for (const [key, value] of Object.entries(item)) {
            if (typeInfo.relationships[key]) {
                this.injectCompoundId(typeInfo.relationships[key].type, value);
            }
        }
    });
}
```

This structure can make the code slightly more readable and reduce nesting.
Line range hint 1-1728: Overall: Excellent implementation of composite ID support.

The changes made to this file significantly enhance the API's capability to handle composite IDs. Key improvements include:

- Consistent use of `idFields` instead of a single `idField` throughout the file.
- New utility methods like `makeIdKey`, `makeCompoundId`, and `injectCompoundId` for handling composite IDs.
- Updates to core methods like `processCreate`, `processUpdate`, `makeIdFilter`, and `makeIdSelect` to support composite IDs.
- Modifications to relationship handling and filtering to accommodate composite IDs.

These changes are well-implemented, maintaining consistency and readability throughout the file. The new functionality is integrated seamlessly with the existing code structure.

Consider the following to further improve the codebase:

- Performance: With the added complexity of composite IDs, consider profiling the API performance, especially for operations involving deep nested relationships or large datasets.
- Documentation: Ensure that API documentation is updated to reflect the new support for composite IDs, including any changes to request/response formats.
- Testing: Develop comprehensive test cases that cover various scenarios with composite IDs, including edge cases and complex nested structures.
- Refactoring: As the file has grown in complexity, consider breaking it down into smaller, more focused modules to improve maintainability.

These changes represent a significant enhancement to the API's flexibility and should greatly improve its ability to handle complex data models.

packages/server/tests/api/rest.test.ts (6)
306-334: New test case added for fetching a related resource with a compound ID

This new test case is a valuable addition to the test suite. It verifies the functionality of fetching a related resource (likes) for a post using a compound ID. The test covers the creation of necessary data, making the API request, and asserting the expected response.

A few observations and suggestions:

- The test correctly sets up the required data (user, post, and postLike) before making the API request.
- The assertion checks for the correct status code and the structure of the response body.
- The use of the `idDivider` in constructing the compound ID is a good practice for maintaining consistency.

Consider adding an additional assertion to verify the `links.related` field in the response, as it's an important part of the JSON:API specification for relationships.
1675-1717: New test cases added for creating relations with compound IDs

These new test cases are important additions to ensure the correct handling of compound IDs in the API. They cover two scenarios: creating a relation with a compound ID and creating a single entity with a compound ID.

Observations and suggestions:

- Both test cases correctly set up the required data (user and post) before making the API requests.
- The tests verify that the API can handle compound IDs in the request payload.
- The assertions check for the correct status code (201 Created) for successful creation.

Consider adding the following improvements to these test cases:

- Add assertions to verify the structure and content of the response body, not just the status code.
- After creating the entities, make a GET request to ensure the data was persisted correctly with the compound ID.
- Consider testing edge cases, such as attempting to create a relation with an invalid compound ID format.

Example for point 1:

```ts
expect(r.body).toMatchObject({
    data: {
        type: 'postLike',
        id: `1${idDivider}user1`,
        attributes: { userId: 'user1', postId: 1, superLike: false },
    },
});
```
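A possible shape for point 2, following the read-back pattern used elsewhere in this file (a sketch rather than the exact assertion to use):

```ts
// Read the entity back to confirm it was persisted under the compound id.
const getResponse = await handler({
    method: 'get',
    path: `/postLike/1${idDivider}user1`,
    prisma,
});
expect(getResponse.status).toBe(200);
expect(getResponse.body.data.id).toBe(`1${idDivider}user1`);
```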
1856-1875: New test case added for updating an item with a compound ID

This test case is a crucial addition to ensure that the API can correctly handle updates to entities with compound IDs.

Observations and suggestions:

- The test correctly sets up the required data (user, post, and postLike) before making the update request.
- It verifies that the API can handle a PUT request with a compound ID in the URL.
- The test checks for the correct status code (200 OK) for a successful update.

Consider enhancing this test case with the following improvements:

- Add assertions to verify the structure and content of the response body, not just the status code.
- After updating the entity, make a GET request to ensure the data was updated correctly.
- Test updating different attributes of the entity to ensure all fields can be modified.

Example for point 1:

```ts
expect(r.body).toMatchObject({
    data: {
        type: 'postLike',
        id: `1${idDivider}user1`,
        attributes: { userId: 'user1', postId: 1, superLike: true },
    },
});
```

Example for point 2:

```ts
const getResponse = await handler({
    method: 'get',
    path: `/postLike/1${idDivider}user1`,
    prisma,
});
expect(getResponse.status).toBe(200);
expect(getResponse.body.data.attributes.superLike).toBe(true);
```
1960-1976: New test case added for updating a collection of relations with compound IDs

This test case is an important addition to verify the API's ability to handle updates to collections of relations that use compound IDs.

Observations and suggestions:

- The test correctly sets up the required data (user, post, and postLike) before making the update request.
- It verifies that the API can handle a PATCH request to update a collection of relations.
- The test checks for the correct status code (200 OK) for a successful update.

Consider enhancing this test case with the following improvements:

- Add assertions to verify the structure and content of the response body, not just the status code.
- After updating the relations, make a GET request to ensure the data was updated correctly.
- Test updating multiple relations in a single request to ensure the API can handle batch updates.

Example for point 1:

```ts
expect(r.body).toMatchObject({
    data: [
        {
            type: 'postLike',
            id: `1${idDivider}user1`,
            attributes: { superLike: true },
        },
    ],
});
```

Example for point 2:

```ts
const getResponse = await handler({
    method: 'get',
    path: '/post/1/likes',
    prisma,
});
expect(getResponse.status).toBe(200);
expect(getResponse.body.data[0].attributes.superLike).toBe(true);
```
2056-2069: New test case added for deleting an item with a compound ID

This test case is a crucial addition to ensure that the API can correctly handle deletion of entities with compound IDs.

Observations and suggestions:

- The test correctly sets up the required data (user, post, and postLike) before making the delete request.
- It verifies that the API can handle a DELETE request with a compound ID in the URL.
- The test checks for the correct status code (204 No Content) for a successful deletion.

Consider enhancing this test case with the following improvements:

- After deleting the entity, make a GET request to ensure the data was actually deleted and returns a 404 Not Found status.
- Test deleting an entity with a non-existent compound ID to ensure proper error handling.

Example for point 1:

```ts
const getResponse = await handler({
    method: 'get',
    path: `/postLike/1${idDivider}user1`,
    prisma,
});
expect(getResponse.status).toBe(404);
```

Example for point 2:

```ts
const deleteNonExistentResponse = await handler({
    method: 'delete',
    path: `/postLike/999${idDivider}nonexistent`,
    prisma,
});
expect(deleteNonExistentResponse.status).toBe(404);
```
Line range hint 1-2568: Overall improvement in test coverage for compound ID handling

The changes made to this test file significantly enhance the test coverage for API operations involving compound IDs. These additions are crucial for ensuring the robustness and correctness of the API when dealing with more complex data structures.

Key improvements:

- New test cases for creating, updating, and deleting entities with compound IDs.
- Tests for fetching related resources using compound IDs.
- Verification of API behavior for collections of relations with compound IDs.

These changes will help catch potential issues related to compound ID handling early in the development process.

For future improvements, consider:

- Adding more edge cases and error scenarios for compound ID handling.
- Testing pagination and filtering with compound IDs.
- Ensuring consistent error handling across all compound ID operations.
- Adding performance tests for operations involving large numbers of compound ID entities.

These additional tests would further strengthen the reliability and performance of the API when working with compound IDs.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
- packages/server/src/api/rest/index.ts (32 hunks)
- packages/server/tests/api/rest.test.ts (11 hunks)
🧰 Additional context used
🔇 Additional comments (7)
packages/server/src/api/rest/index.ts (7)
8-8: LGTM: New import and constant for handling composite IDs.

The addition of the `clone` import and the `idDivider` constant are good improvements:

- `clone` will likely be used for creating deep copies, promoting immutability.
- `idDivider` provides a consistent way to handle composite IDs throughout the code.

These changes align well with the file's overall modifications for supporting composite IDs.

Also applies to: 36-37
756-756: LGTM: Improved ID handling in relationship processing.

The changes in the `processCreate` method for handling relationships are well-implemented:

- Using `this.makeIdKey(relationInfo.idFields)` instead of a hardcoded `'id'` supports composite IDs.
- This approach is more flexible and consistent with the overall changes for composite ID support.

These modifications enhance the method's ability to handle various ID structures across different models.

Also applies to: 765-766
1225-1230: LGTM: Updated makeIdSelect for composite ID support.

The changes to the `makeIdSelect` method are well-implemented:

- The method now correctly handles multiple ID fields, supporting composite IDs.
- The implementation is concise and uses a functional approach with `reduce`, which is efficient and readable.
- The error handling for cases with no ID fields is maintained, ensuring robustness.

These changes align well with the overall modifications for supporting composite IDs throughout the API.
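As a standalone illustration of the `reduce` pattern described (not the method's actual source):

```ts
interface FieldInfo {
    name: string;
    type: string;
}

// Build a Prisma-style `select` object with one `true` entry per id field.
function makeIdSelect(idFields: FieldInfo[]): Record<string, boolean> {
    if (idFields.length === 0) {
        throw new Error('model has no id fields');
    }
    return idFields.reduce((acc, field) => ({ ...acc, [field.name]: true }), {} as Record<string, boolean>);
}

makeIdSelect([
    { name: 'postId', type: 'Int' },
    { name: 'userId', type: 'String' },
]);
// => { postId: true, userId: true }
```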
1232-1237: LGTM: New utility methods for composite ID handling.

The addition of the `makeIdKey` and `makeCompoundId` methods is a good improvement:

- `makeIdKey` provides a consistent way to generate a key from multiple ID fields.
- `makeCompoundId` creates a compound ID from multiple fields in an item.
- Both methods use the `idDivider` constant, ensuring consistency across the codebase.

These utility methods will help standardize composite ID handling throughout the API, improving maintainability and reducing the chance of errors.
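A rough sketch of what such helpers can look like, for readers following along (simplified; the package's actual implementations may differ):

```ts
const idDivider = '_';

interface FieldInfo {
    name: string;
    type: string;
}

// Join the id field names into one synthetic key, e.g. 'postId_userId'.
function makeIdKey(idFields: FieldInfo[]): string {
    return idFields.map((f) => f.name).join(idDivider);
}

// Join the id field values of an item into one compound id, e.g. '1_user1'.
function makeCompoundId(idFields: FieldInfo[], item: Record<string, unknown>): string {
    return idFields.map((f) => String(item[f.name])).join(idDivider);
}

const fields = [
    { name: 'postId', type: 'Int' },
    { name: 'userId', type: 'String' },
];
makeIdKey(fields); // 'postId_userId'
makeCompoundId(fields, { postId: 1, userId: 'user1' }); // '1_user1'
```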
1246-1246: LGTM: Updated includeRelationshipIds for composite ID support.

The change in the `includeRelationshipIds` method is well-implemented:

- Using `this.makeIdSelect(relationInfo.idFields)` instead of `{ id: true }` allows for proper handling of composite IDs in relationship selections.
- This change is consistent with other modifications in the file for supporting composite IDs.
- It ensures that all ID fields are correctly included when selecting relationship data.

This update enhances the flexibility of the API to work with various ID structures across different models and their relationships.
1473-1476: LGTM: Updated buildTypeMap for composite ID support in relationships.

The changes in the `buildTypeMap` method are well-implemented:

- The method now correctly handles multiple ID fields for relationships, supporting composite IDs.
- It uses a `reduce` function to create an object with all ID fields set to the sorting direction, which is an efficient and flexible approach.
- This change is consistent with other modifications in the file for supporting composite IDs.

These updates ensure that the type map correctly represents models and relationships with composite IDs, which is crucial for proper API functionality.
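For intuition, the described `reduce` pattern can be sketched as follows (illustrative only):

```ts
type SortDirection = 'asc' | 'desc';

// Map every id field to the same sort direction,
// e.g. { postId: 'asc', userId: 'asc' }.
function makeSortKey(idFieldNames: string[], direction: SortDirection): Record<string, SortDirection> {
    return idFieldNames.reduce(
        (acc, name) => ({ ...acc, [name]: direction }),
        {} as Record<string, SortDirection>
    );
}

makeSortKey(['postId', 'userId'], 'asc'); // { postId: 'asc', userId: 'asc' }
```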
1560-1561: LGTM: Updated makeFilterValue for composite ID support in filters.

The changes in the `makeFilterValue` method are well-implemented:

- The method now uses `this.makeIdFilter` for handling relation filters, which correctly supports composite IDs.
- It handles both single and multiple values for to-many relations, using the appropriate filter structure.
- For to-one relations, it correctly uses the `is` operator with the composite ID filter.

These updates ensure that filtering operations work correctly with composite IDs, maintaining consistency with the rest of the API's composite ID support.

Also applies to: 1564-1564
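To make the filter shapes concrete, here is roughly what such filters look like in Prisma query terms (illustrative values, not taken from the handler's code):

```ts
// To-one relation: match when the related row has the given compound id.
const toOneFilter = {
    author: { is: { postId: 1, userId: 'user1' } },
};

// To-many relation: match when some related row has the compound id.
const toManyFilter = {
    likes: { some: { postId: 1, userId: 'user1' } },
};
```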
No description provided.