
merge dev to main (v2.7.0) #1782

Merged
merged 18 commits into from
Oct 16, 2024

Conversation

@ymc9 ymc9 (Member) commented Oct 16, 2024

No description provided.

coderabbitai bot (Contributor) commented Oct 16, 2024

📝 Walkthrough

This pull request updates multiple files, primarily covering version increments, enhancements to OpenAPI specification generation, and modifications to test cases. The JetBrains plugin version is bumped from "2.6.2" to "2.7.0". The RESTfulOpenAPIGenerator class is enhanced to better handle multiple ID fields, required fields, and response generation. New test cases validate these changes, and several import statements are cleaned up across different files.

Changes

  • packages/ide/jetbrains/build.gradle.kts: Version updated from "2.6.2" to "2.7.0". IntelliJ plugin configuration remains consistent, with JVM compatibility set to version "17". The patchPluginXml task's build range and the changelog section were updated to include new features.
  • packages/plugins/openapi/src/rest-generator.ts: Enhancements to RESTfulOpenAPIGenerator: added hasMultipleIds for filter parameters, updated model entity generation for compound IDs, revised the required-fields logic, adjusted response generation to always include ID fields, and updated the makeFilterParameter method. Method signatures updated for generateFilterParameters and generateModelEntity.
  • packages/plugins/openapi/tests/openapi-restful.test.ts: New model PostLike added, with relationships to User and post_Item. Test cases updated for the new relationships and endpoints, including checks for GET, POST, and PATCH methods, plus a new test case for compound ID fields in the User model.
  • packages/plugins/tanstack-query/src/runtime/common.ts: Added an isZenStackQueryKey function to validate query keys. The optimisticUpdate function now uses this helper and simplifies the destructuring of queryKey.
  • packages/plugins/tanstack-query/tests/react-hooks-v5.test.tsx: Added new test cases for optimistic upsert operations and enhanced existing tests with checks for optimistic updates and interactions with non-ZenStack queries.
  • packages/plugins/trpc/res/client/v10/next.ts: Removed the import of ClientType from ../routers; the createTRPCNext function is unchanged.
  • packages/plugins/trpc/res/client/v10/nuxt.ts: Introduced a createTRPCNuxtClient function wrapping _createTRPCNuxtClient, providing enhanced type safety via generic parameters.
  • packages/plugins/trpc/res/client/v10/react.ts: Removed the import of ClientType from ../routers; the createTRPCReact function is unchanged.
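The summary mentions a new isZenStackQueryKey guard but does not show it. As a rough illustration only (the 'zenstack' prefix and the [prefix, model, operation, ...] key shape are assumptions for this sketch, not the actual implementation in common.ts), such a validator might look like:

```typescript
// Hypothetical sketch of a query-key guard. The 'zenstack' prefix and the
// tuple shape are assumed for illustration; the real helper may differ.
type QueryKey = readonly unknown[];

function isZenStackQueryKey(key: QueryKey): boolean {
    if (key.length < 3) {
        return false;
    }
    const [prefix, model, operation] = key;
    return prefix === 'zenstack' && typeof model === 'string' && typeof operation === 'string';
}

console.log(isZenStackQueryKey(['zenstack', 'Post', 'findMany', {}])); // true
console.log(isZenStackQueryKey(['users', 'list'])); // false
```

An optimisticUpdate routine could then skip cache entries whose keys fail this check, leaving unrelated (non-ZenStack) queries untouched.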


@coderabbitai coderabbitai bot (Contributor) left a comment

Caution

Inline review comments failed to post

🛑 Comments failed to post (62)
packages/plugins/trpc/tests/projects/nuxt-trpc-v10/nuxt.config.ts (1)

4-4: 🛠️ Refactor suggestion

Consider environment-based configuration for devtools.

Enabling devtools is beneficial for development, but it's generally recommended to disable them in production environments for security and performance reasons.

Consider using an environment variable to conditionally enable devtools. For example:

-    devtools: { enabled: true },
+    devtools: { enabled: process.env.NODE_ENV !== 'production' },

This way, devtools will be automatically disabled in production builds.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    devtools: { enabled: process.env.NODE_ENV !== 'production' },
packages/plugins/trpc/tests/projects/nuxt-trpc-v11/nuxt.config.ts (2)

1-8: 🛠️ Refactor suggestion

Consider additional Nuxt.js configuration options.

The current configuration covers basic setup for compatibility, devtools, and tRPC integration. However, depending on the project requirements, you might want to consider adding more configuration options such as:

  1. Modules: If you're using any Nuxt modules, they should be added to the configuration.
  2. Runtime config: For environment-specific variables.
  3. Server options: If you need to customize server behavior.
  4. Router options: If you need to modify the default routing behavior.

Here's an example of how you might expand the configuration:

export default defineNuxtConfig({
  compatibilityDate: '2024-04-03',
  devtools: { enabled: process.env.NODE_ENV === 'development' },
  build: {
    transpile: ['trpc-nuxt'],
  },
  modules: [
    // Add any required modules here
  ],
  runtimeConfig: {
    // Add any runtime configuration here
  },
  // Add any other necessary configuration options
});

Consider expanding the configuration based on your project's specific needs.


4-4: 🛠️ Refactor suggestion

Consider conditionally enabling devtools based on the environment.

While enabling devtools is beneficial for development, it's generally not recommended for production environments. Consider modifying this configuration to conditionally enable devtools based on the current environment.

Here's a suggested modification:

-    devtools: { enabled: true },
+    devtools: { enabled: process.env.NODE_ENV === 'development' },

This change ensures that devtools are only enabled in the development environment.

📝 Committable suggestion


    devtools: { enabled: process.env.NODE_ENV === 'development' },
packages/plugins/trpc/res/client/v10/nuxt.ts (1)

10-12: 💡 Codebase verification

ClientType is not defined or imported in the file.

The ClientType used in the type casting is not defined or imported in this file. Please ensure that ClientType is properly imported from the relevant module.

🔗 Analysis chain

LGTM with a minor query: Implementation looks good, but clarify ClientType.

The implementation effectively wraps the original function while enhancing type information. The use of DeepOverrideAtPath allows for precise type modifications.

However, there's an undefined ClientType being used. Could you please clarify where ClientType is defined or imported from?

To verify the ClientType usage, we can run the following script:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for ClientType definition or import

# Test 1: Search for ClientType import
echo "Searching for ClientType import:"
rg --type typescript "import.*ClientType" ./packages/plugins/trpc

# Test 2: Search for ClientType type definition
echo "Searching for ClientType type definition:"
rg --type typescript "type\s+ClientType" ./packages/plugins/trpc

# Test 3: Search for ClientType interface definition
echo "Searching for ClientType interface definition:"
rg --type typescript "interface\s+ClientType" ./packages/plugins/trpc

Length of output: 582


Script:

#!/bin/bash
# Description: Check for ClientType definition or import without using --type typescript

# Test 1: Search for ClientType import
echo "Searching for ClientType import:"
rg "import.*ClientType" ./packages/plugins/trpc

# Test 2: Search for ClientType type definition
echo "Searching for ClientType type definition:"
rg "type\s+ClientType" ./packages/plugins/trpc

# Test 3: Search for ClientType interface definition
echo "Searching for ClientType interface definition:"
rg "interface\s+ClientType" ./packages/plugins/trpc

Length of output: 4164

packages/plugins/trpc/res/client/v11/nuxt.ts (1)

11-11: 💡 Codebase verification

Clarify the ClientType<TRouter> type.

The ClientType<TRouter> type is not imported or defined in nuxt.ts. Please ensure that it is properly imported from its source.

🔗 Analysis chain

Clarify the ClientType<TRouter> type.

The type assertion uses ClientType<TRouter>, but this type is not imported or defined in the visible code. Please clarify where this type comes from and consider importing it if it's from an external module.

To verify the existence and usage of ClientType, you can run the following script:

If ClientType is defined in another file, consider adding an import statement:

import type { ClientType } from './path-to-file';
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for the definition or import of ClientType
rg --type typescript "type\s+ClientType|import.*ClientType" -g '!**/node_modules/**'

Length of output: 121


Script:

#!/bin/bash
# Search for the definition or import of ClientType using correct file types
rg --type ts --type tsx "type\s+ClientType|import.*ClientType" -g '!**/node_modules/**'

Length of output: 117


Script:

#!/bin/bash
# Search for the definition or import of ClientType in .ts and .tsx files
rg "type\s+ClientType|import.*ClientType" -g '*.ts' -g '*.tsx' -g '!**/node_modules/**'

Length of output: 1385

script/test-scaffold.ts (1)

22-22: 💡 Codebase verification

Inconsistent Prisma Version References Found

Multiple Prisma versions are referenced across the project. Please ensure all Prisma dependencies are updated to version 5.21.x for consistency.

  • packages/schema/package.json
  • packages/plugins/trpc/tests/projects/nuxt-trpc-v11/package.json
  • packages/plugins/trpc/tests/projects/nuxt-trpc-v10/package.json
  • packages/plugins/trpc/tests/projects/t3-trpc-v11/package.json
  • packages/plugins/trpc/tests/projects/t3-trpc-v10/package.json
  • tests/integration/test-run/package.json
  • tests/integration/tests/frameworks/nextjs/test-project/package.json
  • tests/integration/tests/frameworks/trpc/test-project/package.json
🔗 Analysis chain

Version update for Prisma packages

The versions of prisma and @prisma/client have been updated from 5.20.x to 5.21.x. This change keeps the test scaffold up-to-date with the latest minor version of Prisma.

To ensure this change is consistent across the project, let's check for other Prisma version references:

Please review the output of this script to ensure all Prisma version references are consistent.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for other Prisma version references in the project

# Search for Prisma version references in package.json files
echo "Searching for Prisma version references in package.json files:"
rg --type json '"prisma":\s*"[^"]*"' -g 'package*.json'

# Search for Prisma version references in other files
echo "Searching for Prisma version references in other files:"
rg '(?i)prisma.*?(?:5\.2[01]|5\.20)' --type-not json

Length of output: 4593

packages/plugins/trpc/tests/projects/t3-trpc-v10/src/pages/index.tsx (2)

14-20: ⚠️ Potential issue

Refactor mutation function for production readiness.

While the mutation function correctly implements the post creation, there are a few areas for improvement:

  1. The post data is hardcoded. In a production environment, this should come from user input or props.
  2. Logging the author's email to the console could be a privacy concern.
  3. There's no error handling for the async operation.

Consider refactoring the function like this:

const mutation = async (postData: PostData) => {
  try {
    const created = await createPost({
      data: postData,
      include: { author: true },
    });
    // Handle successful creation (e.g., show a success message)
    console.log(`Post "${created.name}" created successfully`);
  } catch (error) {
    // Handle error (e.g., show an error message)
    console.error('Failed to create post:', error);
  }
};

Replace PostData with the appropriate type for your post data.


27-29: ⚠️ Potential issue

Address potential privacy concerns and improve button functionality.

Two issues to address:

  1. Displaying full email addresses could be a privacy concern.
  2. The "Create post" button uses a hardcoded mutation function.

Consider these improvements:

  1. Instead of displaying full email addresses, show only a portion or use a username:
{post.name} by {post.author.email.split('@')[0]}
  2. Modify the button to accept user input for creating posts:
<button onClick={() => mutation({
  name: 'User input name',
  published: true,
  authorId: 'User input or context-based ID'
})}>Create post</button>

Implement a form or modal to collect user input for creating posts.

Also applies to: 34-35

packages/plugins/trpc/tests/projects/nuxt-trpc-v10/app.vue (4)

4-11: 🛠️ Refactor suggestion

Consider using onMounted for initial data fetching

The immediate API call on component creation might affect the initial load performance. Consider moving this call into an onMounted hook for better control over when it's executed.

Here's a suggested refactor:

+import { onMounted } from 'vue';

 const { $client } = useNuxtApp();

-$client.post.findFirst
-    .query({
-        where: { id: '1' },
-        include: { author: true },
-    })
-    .then((post) => {
-        console.log('Author:', post?.author.email);
-    });
+onMounted(() => {
+    $client.post.findFirst
+        .query({
+            where: { id: '1' },
+            include: { author: true },
+        })
+        .then((post) => {
+            console.log('Author:', post?.author.email);
+        })
+        .catch((error) => {
+            console.error('Error fetching post:', error);
+        });
+});

Also, consider adding type information for better type safety:

type Post = {
  id: string;
  author: {
    email: string;
  };
  // Add other relevant fields
};

// Then in the query:
.then((post: Post | null) => {
  console.log('Author:', post?.author.email);
})

13-15: 🛠️ Refactor suggestion

Add error handling to the useQuery call

The useQuery call lacks error handling. Consider adding error handling to improve the robustness of your component.

Here's a suggested improvement:

-const { data: posts } = $client.post.findMany.useQuery({
+const { data: posts, error: postsError } = $client.post.findMany.useQuery({
   include: { author: true },
 });

+if (postsError.value) {
+  console.error('Error fetching posts:', postsError.value);
+  // Handle the error appropriately (e.g., show an error message to the user)
+}

Also, consider adding type information:

type Post = {
  id: string;
  title: string;
  author: {
    email: string;
  };
  // Add other relevant fields
};

const { data: posts, error: postsError } = $client.post.findMany.useQuery<Post[]>({
  include: { author: true },
});

21-41: 🛠️ Refactor suggestion

Refactor onCreate function to reduce duplication

The onCreate function performs two similar create operations. Consider refactoring to reduce code duplication and improve maintainability.

Here's a suggested refactor:

const createPost = async (client: typeof $client.post.create.mutate | typeof mutate) => {
  const result = await client({
    data: {
      title: 'New Post',
      content: 'This is a new post',
      author: { connect: { id: '1' } },
    },
    include: { author: true },
  });
  console.log('Created by:', result.author.email);
  return result;
};

const onCreate = async () => {
  await createPost($client.post.create.mutate);
  await createPost(mutate);
};

This refactoring reduces duplication and makes the code more maintainable. Also, consider adding error handling:

const onCreate = async () => {
  try {
    await createPost($client.post.create.mutate);
    await createPost(mutate);
  } catch (error) {
    console.error('Error creating post:', error);
    // Handle the error appropriately (e.g., show an error message to the user)
  }
};

44-52: 🛠️ Refactor suggestion

Enhance template with loading and error states

The current template doesn't account for loading states or potential errors. Consider adding these to improve the user experience.

Here's a suggested improvement:

<template>
  <h1>Home</h1>
  <div v-if="postsError">
    <p>Error loading posts: {{ postsError.message }}</p>
  </div>
  <div v-else>
    <p v-if="!posts">Loading posts...</p>
    <ul v-else-if="posts.length">
      <li v-for="post in posts" :key="post.id">{{ post.title }} by {{ post.author.email }}</li>
    </ul>
    <p v-else>No posts found.</p>
    <button @click="onCreate" :disabled="isCreating">
      {{ isCreating ? 'Creating...' : 'Create Post' }}
    </button>
  </div>
</template>

This template now handles loading and error states, and also disables the "Create Post" button while a post is being created. You'll need to add isCreating and postsError to your component's state:

const isCreating = ref(false);

const onCreate = async () => {
  isCreating.value = true;
  try {
    await createPost($client.post.create.mutate);
    await createPost(mutate);
  } catch (error) {
    console.error('Error creating post:', error);
    // Handle the error appropriately (e.g., show an error message to the user)
  } finally {
    isCreating.value = false;
  }
};
tests/regression/tests/issue-1770.test.ts (1)

3-49: 🛠️ Refactor suggestion

Enhance test coverage with additional scenarios.

While the current test case provides a good starting point for regression testing issue 1770, consider adding more test cases to improve coverage and robustness.

Some suggestions for additional test cases:

  1. Test creating and querying nested relationships (e.g., User -> OrgUser -> Organization).
  2. Verify the behavior of the @@allow rule in the BaseAuth model.
  3. Test the @@delegate behavior in the Resource model.
  4. Add edge cases, such as querying with empty or invalid data.

These additional tests will help ensure that all aspects of the fix for issue 1770 are properly validated.

packages/plugins/trpc/tests/nuxt.test.ts (2)

18-29: 🛠️ Refactor suggestion

Consider enhancing the test case for tRPC v10.

While the test case covers the basic setup and build process, there are a few areas for improvement:

  1. The use of relative paths might make the test fragile. Consider using path.resolve() for more robust path handling.
  2. The test lacks assertions to validate the expected outcome. Add assertions to check if the build was successful or if certain files were generated.
  3. Consider adding error handling for the run() calls to provide more informative test failures.

Here's a suggested improvement:

it('project test trpc v10', () => {
    const ver = require(path.resolve(__dirname, '../package.json')).version;
    const projectDir = path.resolve(__dirname, './projects/nuxt-trpc-v10');
    process.chdir(projectDir);

    const deps = ['zenstackhq-language', 'zenstackhq-runtime', 'zenstackhq-sdk', 'zenstack'];
    for (const dep of deps) {
        const depPath = path.resolve(__dirname, '../../../../.build/', `${dep}-${ver}.tgz`);
        expect(() => run(`npm install ${depPath}`)).not.toThrow();
    }

    expect(() => run('npx zenstack generate')).not.toThrow();
    expect(() => run('npm run build')).not.toThrow();

    // Add assertions to check for expected output files
    expect(fs.existsSync(path.join(projectDir, 'some-expected-output-file'))).toBeTruthy();
});

31-42: 🛠️ Refactor suggestion

Refactor to reduce duplication between test cases.

The test cases for tRPC v10 and v11 are nearly identical, leading to code duplication. Consider refactoring to improve maintainability and readability:

  1. Extract the common logic into a separate function.
  2. Use parameterized tests to run the same test for different versions.

Here's a suggested refactoring:

function runTRPCTest(version: string) {
    const ver = require(path.resolve(__dirname, '../package.json')).version;
    const projectDir = path.resolve(__dirname, `./projects/nuxt-trpc-${version}`);
    process.chdir(projectDir);

    const deps = ['zenstackhq-language', 'zenstackhq-runtime', 'zenstackhq-sdk', 'zenstack'];
    for (const dep of deps) {
        const depPath = path.resolve(__dirname, '../../../../.build/', `${dep}-${ver}.tgz`);
        expect(() => run(`npm install ${depPath}`)).not.toThrow();
    }

    expect(() => run('npx zenstack generate')).not.toThrow();
    expect(() => run('npm run build')).not.toThrow();

    // Add assertions to check for expected output files
    expect(fs.existsSync(path.join(projectDir, 'some-expected-output-file'))).toBeTruthy();
}

it.each(['v10', 'v11'])('project test trpc %s', (version) => {
    runTRPCTest(version);
});

This refactoring reduces duplication, improves maintainability, and makes it easier to add tests for new versions in the future.

packages/plugins/trpc/tests/projects/t3-trpc-v11/src/app/_components/post.tsx (1)

24-24: 💡 Codebase verification

🛠️ Refactor suggestion

Refine the debug statement and remove unnecessary optional chaining.

The verification confirms that latestPost1 is always defined due to the usage of useSuspenseQuery, making the optional chaining (?.) redundant. Additionally, console.log statements should be avoided in production code to maintain performance and cleanliness.

Recommendations:

  • Remove the optional chaining:

    console.log(latestPost1.author.email);
  • Eliminate or conditionally include console.log statements based on the environment:

    if (process.env.NODE_ENV !== 'production') {
      console.log('Author email:', latestPost1.author.email);
    }
🔗 Analysis chain

Consider refining the debug statement and error handling.

The addition of optional chaining (?.) improves code safety by preventing potential runtime errors if latestPost1 is null or undefined. However, there are a few points to consider:

  1. Since useSuspenseQuery is used, latestPost1 should always be defined, potentially making the optional chaining unnecessary.
  2. Console.log statements are generally not recommended in production code.

Consider refactoring this line as follows:

if (process.env.NODE_ENV !== 'production') {
  console.assert(latestPost1, 'latestPost1 is undefined');
  console.log('Author email:', latestPost1.author.email);
}

This approach:

  • Removes the console.log in production builds
  • Uses console.assert to catch unexpected undefined values during development
  • Removes the optional chaining, as it should be unnecessary with useSuspenseQuery

To ensure latestPost1 is always defined, we can check its usage elsewhere in the file:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check usage of latestPost1 in the file

rg --type typescript 'latestPost1' packages/plugins/trpc/tests/projects/t3-trpc-v11/src/app/_components/post.tsx

Length of output: 147


Script:

#!/bin/bash
# Description: Check usage of latestPost1 in the file

rg --type tsx 'latestPost1' packages/plugins/trpc/tests/projects/t3-trpc-v11/src/app/_components/post.tsx

Length of output: 133


Script:

#!/bin/bash
# Description: Check usage of latestPost1 in the file

rg 'latestPost1' packages/plugins/trpc/tests/projects/t3-trpc-v11/src/app/_components/post.tsx

Length of output: 201

packages/ide/jetbrains/build.gradle.kts (1)

12-12: 💡 Codebase verification

Changelog Update Required

The changelog does not include an entry for version 2.7.0. Please update the CHANGELOG.md to include the new version entry.

🔗 Analysis chain

Version update looks good.

The version increment from 2.6.2 to 2.7.0 is consistent with the PR title and follows semantic versioning conventions. This indicates a minor release with new features but no breaking changes.

To ensure the changelog is up-to-date with this version change, please run the following script:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that the changelog includes an entry for version 2.7.0

# Test: Check if the changelog contains an entry for version 2.7.0
# Expect: An entry for version 2.7.0 should be present
if grep -q "\[2\.7\.0\]" CHANGELOG.md; then
    echo "Changelog entry for version 2.7.0 found."
else
    echo "Warning: Changelog entry for version 2.7.0 not found. Please update the changelog."
fi

Length of output: 211

tests/regression/tests/issue-1745.test.ts (1)

60-63: 💡 Codebase verification

Clarify or Remove the Redundant 'type' Field in the Buyer Model

The type field in the Buyer model appears to be redundant since the Company model already includes a buyerType field. No other instances of 'type: String' were found in the codebase, suggesting that the type field might be unnecessary and could lead to confusion.

  • File: tests/regression/tests/issue-1745.test.ts
    • Lines: 60-63
🔗 Analysis chain

Clarify the purpose of the 'type' field in the Buyer model.

The Buyer model extends Company, which already has a 'buyerType' field. The additional 'type' field in Buyer seems redundant and might lead to confusion. Consider removing it or clarifying its purpose if it's different from 'buyerType'.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if 'type' field is used elsewhere in the codebase
rg --type typescript 'type\s*:\s*String' -g '!**/issue-1745.test.ts'

Length of output: 105


Script:

#!/bin/bash
# Correctly search for 'type: String' in TypeScript files excluding issue-1745.test.ts
rg 'type\s*:\s*String' -g '*.ts' -g '*.tsx' -g '!tests/regression/tests/issue-1745.test.ts'

Length of output: 263

packages/plugins/trpc/tests/projects/nuxt-trpc-v10/schema.zmodel (1)

26-26: ⚠️ Potential issue

Reconsider the maximum length constraint for email addresses

The maximum length of 32 characters for the email field might be too restrictive. Valid email addresses can exceed 32 characters. Consider increasing the maximum length or removing the upper limit to accommodate longer emails.
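For reference, the practical upper bound for an email address is 254 characters (derived from RFC 5321's limits on the forward path). A minimal TypeScript sketch of such a length check (the constant and helper names are illustrative, not part of the schema):

```typescript
// Illustrative length check only; the 254-character ceiling comes from
// RFC 5321's path length limits. The helper name is hypothetical.
const MAX_EMAIL_LENGTH = 254;

function isAcceptableEmailLength(email: string): boolean {
    return email.length > 0 && email.length <= MAX_EMAIL_LENGTH;
}

console.log(isAcceptableEmailLength('user@example.com')); // true
console.log(isAcceptableEmailLength('a'.repeat(40) + '@example.com')); // true, but a 32-char cap would reject it
```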

packages/plugins/trpc/res/client/v11/utils.ts (1)

1-1: ⚠️ Potential issue

Fix invalid syntax or add missing import statement at line 1.

The line ../v10/utils.ts is not valid TypeScript syntax. If you intend to import utilities from '../v10/utils.ts', please use a proper import statement, such as:

import * as utils from '../v10/utils';

If this line was added unintentionally, consider removing it to prevent syntax errors.

Apply this diff to correct the issue:

-../v10/utils.ts
+// Removed invalid line or added proper import statement if needed
packages/server/src/hono/handler.ts (4)

31-31: ⚠️ Potential issue

Potential off-by-one error in path extraction

The expression ctx.req.path.substring(ctx.req.routePath.length - 1) may incorrectly extract the path due to an off-by-one error. Please verify that subtracting 1 is correct and that it correctly extracts the desired path segment.
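To make the concern concrete, here is a small demonstration with hypothetical values, assuming routePath ends in a `*` wildcard:

```typescript
// Hypothetical route values for illustration. Subtracting 1 from
// routePath.length strips the trailing '*' but also consumes the '/'
// before the remainder; subtracting 2 would preserve the leading slash.
const routePath = '/api/rpc/*';
const reqPath = '/api/rpc/post/findMany';

console.log(reqPath.substring(routePath.length - 1)); // 'post/findMany' (no leading slash)
console.log(reqPath.substring(routePath.length - 2)); // '/post/findMany'
```

Whether the leading slash should be kept depends on what the downstream request handler expects, so this is worth a unit test rather than an eyeball check.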


39-43: ⚠️ Potential issue

Handle JSON parsing errors appropriately

Currently, if parsing the JSON body fails, the error is silently ignored (// noop). This could lead to issues if requestBody is expected to be used later. Consider handling the error by returning a 400 Bad Request response to indicate invalid JSON.

Suggested change:

} catch {
-    // noop
+    return ctx.json({ message: 'Invalid JSON body' }, 400);
}
📝 Committable suggestion


            try {
                requestBody = await ctx.req.json();
            } catch {
                return ctx.json({ message: 'Invalid JSON body' }, 400);
            }

57-59: ⚠️ Potential issue

Avoid returning detailed error messages to the client

Returning the error message ${err} in the response may expose sensitive information. It's best practice to log the error internally and return a generic error message to the client.

Suggested change:

- return ctx.json({ message: `An unhandled error occurred: ${err}` }, 500);
+ console.error('An unhandled error occurred:', err);
+ return ctx.json({ message: 'Internal Server Error' }, 500);
📝 Committable suggestion


        } catch (err) {
            console.error('An unhandled error occurred:', err);
            return ctx.json({ message: 'Internal Server Error' }, 500);
        }

23-26: ⚠️ Potential issue

Avoid exposing internal implementation details in error responses

Returning a 500 status code with a specific error message may leak internal details. Consider returning a generic error message and use an appropriate status code, such as 503 Service Unavailable, when the Prisma client is unavailable.

Suggested change:

- return ctx.json({ message: 'unable to get prisma from request context' }, 500);
+ console.error('Prisma client is unavailable');
+ return ctx.json({ message: 'Service Unavailable' }, 503);
📝 Committable suggestion


        const prisma = (await options.getPrisma(ctx)) as DbClientContract;
        if (!prisma) {
            console.error('Prisma client is unavailable');
            return ctx.json({ message: 'Service Unavailable' }, 503);
        }
packages/plugins/trpc/src/client-helper/nuxt.ts (1)

7-10: ⚠️ Potential issue

Missing imports for 'TRPCClientErrorLike' and 'TRPCRequestOptions'

The types TRPCClientErrorLike and TRPCRequestOptions are used in the code but have not been imported, which will lead to TypeScript errors.

To fix this, add the following import statements in the generateRouterTypingImports function:

 import type { MaybeRefOrGetter, UnwrapRef } from 'vue';
 import type { AsyncData, AsyncDataOptions } from 'nuxt/app';
 import type { KeysOf, PickFrom } from './utils';
+import type { TRPCClientErrorLike, TRPCRequestOptions } from '@trpc/client';
📝 Committable suggestion


        `import type { MaybeRefOrGetter, UnwrapRef } from 'vue';`,
        `import type { AsyncData, AsyncDataOptions } from 'nuxt/app';`,
        `import type { KeysOf, PickFrom } from './utils';`,
        `import type { TRPCClientErrorLike, TRPCRequestOptions } from '@trpc/client';`,
    ]);
packages/plugins/trpc/src/client-helper/react.ts (2)

83-84: ⚠️ Potential issue

Define or import 'Context' to prevent undefined type references in mutation options.

The Context type is referenced in the mutation definitions but is neither defined nor imported. This will cause type errors where Context is used in UseTRPCMutationOptions and UseTRPCMutationResult.

You can resolve this by importing or defining the Context type. If Context is defined elsewhere, import it:

+import type { Context } from '../../path/to/your/context'; // Update the import path accordingly

If the context is not needed for your mutation options, you can remove it from the type definitions:

-    ${resultType},
-    Context
+    ${resultType}

And adjust the subsequent code accordingly.

Also applies to: 87-87


28-29: ⚠️ Potential issue

Import 'TRPCClientErrorLike' and 'AppRouter' to resolve reference errors.

The types TRPCClientErrorLike and AppRouter are used in defining errorType, but they are not imported or defined in this file. This will result in reference errors during type checking.

Apply the following diff to import the missing types:

+import type { TRPCClientErrorLike } from '@trpc/client';
+import type { AppRouter } from '../../path/to/your/router'; // Update the import path accordingly

Committable suggestion was skipped due to low confidence.

packages/plugins/trpc/src/utils.ts (6)

27-32: 🛠️ Refactor suggestion

⚠️ Potential issue

Avoid casting 'input' to 'any'; resolve TypeScript type errors properly

In the query procedure, input is cast to any to circumvent a TypeScript compiler error. Casting to any can undermine type safety and may lead to runtime errors. Consider refining the type definitions or using more specific type assertions to address the compiler error without resorting to any.


34-39: 🛠️ Refactor suggestion

⚠️ Potential issue

Avoid casting 'input' to 'any' in mutation procedures

Similarly, in the mutation procedure, avoid casting input to any. Ensuring proper type alignment will maintain type safety and prevent potential issues. Adjust the type definitions or use appropriate type guards to handle the compiler error.
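As a sketch of one typed alternative (the delegate and argument types below are illustrative stand-ins, not the plugin's actual code), a generic parameter can carry the input type through to the delegate call instead of widening to `any`:

```typescript
// Hypothetical stand-ins for a Prisma-like delegate and its argument shape.
type FindManyArgs = { where?: { id?: string } };
type Delegate<T> = { findMany(args: T): string[] };

// Instead of `db.findMany(input as any)`, constrain the input parameter so
// the compiler checks the shape at the call site rather than at runtime.
function findManySafe<T extends FindManyArgs>(db: Delegate<T>, input: T): string[] {
    return db.findMany(input);
}

const db: Delegate<FindManyArgs> = {
    findMany: (args) => (args.where?.id ? ['matched'] : ['all']),
};

console.log(findManySafe(db, { where: { id: '1' } })[0]); // prints "matched"
```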


142-143: 🛠️ Refactor suggestion

Review regular expressions for accuracy and maintainability

The regular expressions used (modelAttributeRegex, attributeNameRegex, attributeArgsRegex) may not cover all necessary cases and could be simplified for readability. Complex regex patterns are hard to maintain and error-prone. Consider using a parser or more precise patterns to improve reliability.
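As one readability technique (the pattern below is a sketch, not the actual expressions in utils.ts), named capture groups make an attribute-matching regex self-documenting:

```typescript
// Match a doc-comment attribute of the form `@@Name(args)`; the group names
// state what each capture means without cross-referencing index positions.
const attributeRe = /@@(?<name>[A-Za-z.]+)\((?<args>[^)]*)\)/;

const match = '@@Gen.model(hide: true)'.match(attributeRe);
console.log(match?.groups?.name); // prints "Gen.model"
console.log(match?.groups?.args); // prints "hide: true"
```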


141-172: 🛠️ Refactor suggestion

⚠️ Potential issue

Add error handling when parsing model documentation

The resolveModelsComments function may throw exceptions if the model documentation does not match the expected patterns or if JSON.parse fails due to invalid JSON. To enhance reliability, add error handling mechanisms to gracefully handle parsing errors and unexpected formats.

For example, you can wrap the parsing logic in try-catch blocks:

+ try {
    const attribute = model.documentation?.match(modelAttributeRegex)?.[0];
    // existing code...
+ } catch (error) {
+     // Handle parsing error or log a warning
+ }

Committable suggestion was skipped due to low confidence.


112-138: 🛠️ Refactor suggestion

⚠️ Potential issue

Ensure 'procType' is defined for all operation names

Similarly, in the getProcedureTypeByOpName function, if opName does not match any known cases, procType will be undefined. This may cause issues where procType is expected to be either 'query' or 'mutation'. Adding a default case that handles unexpected opName values will improve robustness.

+ default:
+     throw new Error(`Unsupported operation name: ${opName}`);

Committable suggestion was skipped due to low confidence.


57-110: 🛠️ Refactor suggestion

⚠️ Potential issue

Handle undefined 'inputType' for unsupported operation names

In the getInputSchemaByOpName function, if opName does not match any case in the switch statement, inputType will be undefined. This could lead to runtime errors when inputType is used. Consider adding a default case to handle unsupported operations or throw an explicit error to alert developers.

+ default:
+     throw new Error(`Unsupported operation name: ${opName}`);
📝 Committable suggestion


export const getInputSchemaByOpName = (opName: string, modelName: string) => {
    let inputType;
    const capModelName = upperCaseFirst(modelName);
    switch (opName) {
        case 'findUnique':
            inputType = `$Schema.${capModelName}InputSchema.findUnique`;
            break;
        case 'findFirst':
            inputType = `$Schema.${capModelName}InputSchema.findFirst.optional()`;
            break;
        case 'findMany':
            inputType = `$Schema.${capModelName}InputSchema.findMany.optional()`;
            break;
        case 'findRaw':
            inputType = `$Schema.${capModelName}InputSchema.findRawObject`;
            break;
        case 'createOne':
            inputType = `$Schema.${capModelName}InputSchema.create`;
            break;
        case 'createMany':
            inputType = `$Schema.${capModelName}InputSchema.createMany.optional()`;
            break;
        case 'deleteOne':
            inputType = `$Schema.${capModelName}InputSchema.delete`;
            break;
        case 'updateOne':
            inputType = `$Schema.${capModelName}InputSchema.update`;
            break;
        case 'deleteMany':
            inputType = `$Schema.${capModelName}InputSchema.deleteMany.optional()`;
            break;
        case 'updateMany':
            inputType = `$Schema.${capModelName}InputSchema.updateMany`;
            break;
        case 'upsertOne':
            inputType = `$Schema.${capModelName}InputSchema.upsert`;
            break;
        case 'aggregate':
            inputType = `$Schema.${capModelName}InputSchema.aggregate`;
            break;
        case 'aggregateRaw':
            inputType = `$Schema.${capModelName}InputSchema.aggregateRawObject`;
            break;
        case 'groupBy':
            inputType = `$Schema.${capModelName}InputSchema.groupBy`;
            break;
        case 'count':
            inputType = `$Schema.${capModelName}InputSchema.count.optional()`;
            break;
        default:
            throw new Error(`Unsupported operation name: ${opName}`);
    }
    return inputType;
};
packages/server/tests/adapter/hono.test.ts (6)

69-78: ⚠️ Potential issue

Handle potential errors in groupBy operation.

When performing a groupBy operation without providing necessary aggregations or groupings, it could lead to runtime errors depending on the Prisma version. Ensure that the parameters provided are sufficient and handle any potential exceptions.

Consider adding error handling or validations to ensure the groupBy operation is called correctly.
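One possible pre-check (a hypothetical guard, not Prisma's API) mirrors Prisma's rule that fields referenced in a groupBy call's `orderBy` must also appear in `by`:

```typescript
// Validate groupBy arguments before calling the client, so a malformed call
// fails with a clear message instead of a runtime error from the engine.
function validateGroupBy(args: { by: string[]; orderBy?: Record<string, 'asc' | 'desc'> }): void {
    const missing = Object.keys(args.orderBy ?? {}).filter((f) => !args.by.includes(f));
    if (missing.length > 0) {
        throw new Error(`groupBy: orderBy fields missing from "by": ${missing.join(', ')}`);
    }
}

validateGroupBy({ by: ['role'], orderBy: { role: 'asc' } }); // passes
// validateGroupBy({ by: ['role'], orderBy: { email: 'asc' } }); // would throw
```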


169-169: ⚠️ Potential issue

Duplicate makeUrl call in DELETE request.

The makeUrl function is called twice when constructing the URL for the DELETE request, resulting in a malformed URL.

Apply this diff to correct the URL construction:

-r = await handler(makeRequest('DELETE', makeUrl(makeUrl('/api/user/user1'))));
+r = await handler(makeRequest('DELETE', makeUrl('/api/user/user1')));
📝 Committable suggestion


        r = await handler(makeRequest('DELETE', makeUrl('/api/user/user1')));

23-37: 🛠️ Refactor suggestion

Consider removing unused include parameter in user creation.

In the makeRequest call for creating a user, you include { include: { posts: true } } in the request body. Since this is a create operation, and the response does not seem to utilize the included posts, you might consider removing the include parameter if it's not necessary.

Apply this diff to remove the unnecessary include:

 makeRequest('POST', '/api/user/create', {
-    include: { posts: true },
     data: {
         id: 'user1',
         email: '[email protected]',
         posts: {
             create: [
                 { title: 'post1', published: true, viewCount: 1 },
                 { title: 'post2', published: false, viewCount: 2 },
             ],
         },
     },
 })
📝 Committable suggestion


        r = await handler(
            makeRequest('POST', '/api/user/create', {
                data: {
                    id: 'user1',
                    email: '[email protected]',
                    posts: {
                        create: [
                            { title: 'post1', published: true, viewCount: 1 },
                            { title: 'post2', published: false, viewCount: 2 },
                        ],
                    },
                },
            })
        );

117-126: ⚠️ Potential issue

Potential misconfiguration of REST handler endpoint.

In the createHonoHandler configuration, the handler is set up with Rest({ endpoint: 'http://localhost/api' }). This endpoint should typically be a relative path unless there is a specific reason to include the full URL.

Consider changing the endpoint to a relative path to avoid potential issues with endpoint resolution:

 handler: Rest({ endpoint: '/api' }),
📝 Committable suggestion


        const { prisma, modelMeta, zodSchemas } = await loadSchema(schema);

        const handler = await createHonoApp(
            createHonoHandler({
                getPrisma: () => prisma,
                handler: Rest({ endpoint: '/api' }),
                modelMeta,
                zodSchemas,
            })
        );

180-183: ⚠️ Potential issue

Add error handling for JSON parsing in unmarshal.

Parsing response bodies with JSON.parse can throw an error if the response is not valid JSON. To make tests more robust, consider adding try-catch blocks to handle parsing errors gracefully.

Update the unmarshal function to handle parsing exceptions:

 async function unmarshal(r: Response, useSuperJson = false) {
     const text = await r.text();
-    return (useSuperJson ? superjson.parse(text) : JSON.parse(text)) as any;
+    try {
+        return (useSuperJson ? superjson.parse(text) : JSON.parse(text)) as any;
+    } catch (e) {
+        throw new Error(`Failed to parse JSON response: ${(e as Error).message}\nResponse text: ${text}`);
+    }
 }
📝 Committable suggestion


async function unmarshal(r: Response, useSuperJson = false) {
    const text = await r.text();
    try {
        return (useSuperJson ? superjson.parse(text) : JSON.parse(text)) as any;
    } catch (e) {
        throw new Error(`Failed to parse JSON response: ${(e as Error).message}\nResponse text: ${text}`);
    }
}

85-112: 🛠️ Refactor suggestion

Duplicate test case with custom load path.

The test custom load path appears to be creating a user similar to the previous test. If the intention is to test the custom output path (./zen), ensure that the focus is on validating the path and related configurations rather than repeating user creation logic.

Consider refactoring the test to emphasize the custom load path's effect, possibly by asserting that the modelMeta and zodSchemas are loaded correctly from the custom path.

Example:

 it('custom load path', async () => {
     const { prisma, projectDir } = await loadSchema(schema, { output: './zen' });

     const handler = await createHonoApp(
         createHonoHandler({
             getPrisma: () => prisma,
-            modelMeta: require(path.join(projectDir, './zen/model-meta')).default,
-            zodSchemas: require(path.join(projectDir, './zen/zod')),
+            modelMeta: require(path.join(projectDir, 'zen', 'model-meta')).default,
+            zodSchemas: require(path.join(projectDir, 'zen', 'zod')),
         })
     );

+    expect(modelMeta).toBeDefined();
+    expect(zodSchemas).toBeDefined();
 });

Committable suggestion was skipped due to low confidence.

packages/schema/src/utils/ast-utils.ts (2)

119-129: ⚠️ Potential issue

Ensure compatibility of Array.prototype.findLast

On line 123, Array.prototype.findLast is used:

clone.$inheritedFrom = delegateBases.findLast((base) => base.fields.some((f) => f.name === node.name));

The findLast method is part of ECMAScript 2023 and is supported in Node.js v18 and above. If your project needs to support earlier environments, consider an alternative implementation:

- clone.$inheritedFrom = delegateBases.findLast((base) => base.fields.some((f) => f.name === node.name));
+ const reversedBases = [...delegateBases].reverse();
+ clone.$inheritedFrom = reversedBases.find((base) => base.fields.some((f) => f.name === node.name));

This alternative reverses the array and uses the widely supported find method.


195-195: 🛠️ Refactor suggestion

Replace console.log with a proper logging mechanism or remove

On line 195, there is a direct console.log statement:

console.log('Loaded import from:', imp.path);

Using console.log in library code can clutter the console output for consumers of your library. Consider using a configurable logging utility or removing the log statement if it's not essential.

Apply this diff to address the issue:

- console.log('Loaded import from:', imp.path);
+ // Consider using a logging library or remove this log statement
packages/plugins/trpc/src/client-helper/index.ts (1)

53-54: ⚠️ Potential issue

Missing import for AnyRouter from @trpc/server

The type AnyRouter is used but not imported. This will lead to TypeScript errors.

Please add the following import statement at the top of your file:

import type { AnyRouter } from '@trpc/server';

Also applies to: 120-121

packages/testtools/src/schema.ts (2)

267-271: 🛠️ Refactor suggestion

Refactor nested ternary operators for improved readability

The nested ternary operators used in assigning prismaLoadPath can make the code harder to read and maintain. Consider refactoring the logic to enhance clarity.

Proposed refactor:

let prismaLoadPath;
if (options?.prismaLoadPath) {
    if (path.isAbsolute(options.prismaLoadPath)) {
        prismaLoadPath = options.prismaLoadPath;
    } else {
        prismaLoadPath = path.join(projectDir, options.prismaLoadPath);
    }
} else {
    prismaLoadPath = path.join(projectDir, 'node_modules/.prisma/client');
}

Alternatively, use path.resolve to simplify the path handling:

const prismaLoadPath = options?.prismaLoadPath
    ? path.resolve(projectDir, options.prismaLoadPath)
    : path.join(projectDir, 'node_modules/.prisma/client');

This refactor improves readability and makes the code easier to understand.


272-272: ⚠️ Potential issue

Add error handling for dynamic module loading

Loading a module dynamically using a potentially user-provided path (prismaLoadPath) can lead to runtime errors if the module cannot be found or loaded.

Consider wrapping the require(prismaLoadPath); statement in a try-catch block to handle errors gracefully:

let prismaModule;
try {
    prismaModule = require(prismaLoadPath);
} catch (error) {
    throw new Error(`Failed to load Prisma client from path "${prismaLoadPath}": ${(error as Error).message}`);
}

This ensures that any issues in loading the module are caught, and a meaningful error message is provided.

packages/plugins/tanstack-query/src/runtime/common.ts (1)

459-469: 🛠️ Refactor suggestion

Enhance type safety by adding checks for all elements in isZenStackQueryKey

While the function checks the length and the first element of queryKey, consider adding type validations for the remaining elements to ensure they match the expected types. This will further prevent potential runtime errors due to unexpected types.

Apply this diff to add additional type checks:

 function isZenStackQueryKey(queryKey: readonly unknown[]): queryKey is QueryKey {
     if (queryKey.length < 5) {
         return false;
     }

     if (typeof queryKey[0] !== 'string' || queryKey[0] !== QUERY_KEY_PREFIX) {
         return false;
     }

+    if (typeof queryKey[1] !== 'string') {
+        return false;
+    }

+    if (typeof queryKey[2] !== 'string') {
+        return false;
+    }

+    // `queryKey[3]` can be any value (args), so no type check is necessary.

+    if (typeof queryKey[4] !== 'object' || queryKey[4] === null) {
+        return false;
+    }

     return true;
 }
📝 Committable suggestion


function isZenStackQueryKey(queryKey: readonly unknown[]): queryKey is QueryKey {
    if (queryKey.length < 5) {
        return false;
    }

    if (typeof queryKey[0] !== 'string' || queryKey[0] !== QUERY_KEY_PREFIX) {
        return false;
    }

    if (typeof queryKey[1] !== 'string') {
        return false;
    }

    if (typeof queryKey[2] !== 'string') {
        return false;
    }

    // `queryKey[3]` can be any value (args), so no type check is necessary.

    if (typeof queryKey[4] !== 'object' || queryKey[4] === null) {
        return false;
    }

    return true;
}
packages/plugins/trpc/src/generator.ts (2)

97-98: 🛠️ Refactor suggestion

Avoid unnecessary type casting of generateClientHelpers

When calling createAppRouter, you're casting generateClientHelpers to SupportedClientHelpers[] | undefined. Since you've already validated the contents of generateClientHelpers, consider ensuring it's correctly typed from the start to avoid repeated casting.

Adjust the parsing and validation to assign the correct type:

const generateClientHelpersOption = parseOptionAsStrings(options, 'generateClientHelpers', name);
let generateClientHelpers: SupportedClientHelpers[] | undefined;
if (generateClientHelpersOption) {
    if (
        !generateClientHelpersOption.every((v) => AllSupportedClientHelpers.includes(v as SupportedClientHelpers))
    ) {
        throw new PluginError(
            name,
            `Option "generateClientHelpers" only supports the following values: ${AllSupportedClientHelpers.map(
                (n) => '"' + n + '"'
            ).join(', ')}.`
        );
    } else {
        generateClientHelpers = generateClientHelpersOption as SupportedClientHelpers[];
    }
}

This ensures generateClientHelpers is properly typed, eliminating the need for casting in subsequent function calls.


41-50: 🛠️ Refactor suggestion

Simplify type checking for generateClientHelpers

In the validation of generateClientHelpers, the cast to SupportedClientHelpers inside the includes method is unnecessary since AllSupportedClientHelpers is an array of strings. Removing the cast simplifies the condition without affecting functionality.

Apply this diff to simplify the condition:

if (
    generateClientHelpers &&
-   !generateClientHelpers.every((v) => AllSupportedClientHelpers.includes(v as SupportedClientHelpers))
+   !generateClientHelpers.every((v) => AllSupportedClientHelpers.includes(v))
) {
    throw new PluginError(
        name,
        `Option "generateClientHelpers" only supports the following values: ${AllSupportedClientHelpers.map(
            (n) => '"' + n + '"'
        ).join(', ')}.`
    );
}
📝 Committable suggestion


    if (
        generateClientHelpers &&
        !generateClientHelpers.every((v) => AllSupportedClientHelpers.includes(v))
    ) {
        throw new PluginError(
            name,
            `Option "generateClientHelpers" only supports the following values: ${AllSupportedClientHelpers.map(
                (n) => '"' + n + '"'
            ).join(', ')}.`
        );
    }
packages/plugins/tanstack-query/tests/react-hooks.test.tsx (4)

4-5: 🛠️ Refactor suggestion

Consider Avoiding Disabling ESLint Rules

The ESLint rules @typescript-eslint/ban-ts-comment and @typescript-eslint/no-explicit-any have been disabled. While there are cases where this might be necessary, it's generally advisable to minimize disabling linting rules globally. Instead, consider addressing the underlying issues:

  • For no-explicit-any: Replace any types with more specific type annotations to take full advantage of TypeScript's type safety.
  • For ban-ts-comment: Investigate the need for @ts-ignore comments and refactor the code to eliminate the need for bypassing the compiler checks.

This will enhance code quality and maintainability.


394-454: 🛠️ Refactor suggestion

Refactor Repetitive Test Setup Code

The test case optimistic upsert - create includes setup code that is similar to other test cases, such as initializing data, configuring nock for API mocking, and rendering hooks. To improve maintainability and reduce duplication, consider extracting common setup logic into reusable helper functions or test utilities. This refactoring can make your tests cleaner and easier to maintain.


456-508: 🛠️ Refactor suggestion

Refactor Repetitive Test Setup Code

The test case optimistic upsert - update also contains repeated setup code similar to other tests, including initializing variables, configuring nock, and rendering hooks. To enhance code maintainability and reduce redundancy, consider abstracting common patterns into reusable functions or fixtures. This will simplify your test cases and make future updates more manageable.


417-423: ⚠️ Potential issue

Mocked Upsert API Response Should Return Created Data

In the mocked upsert API response, the returned data is { data: null }. This means the optimistic data in the cache will not be replaced with actual server data, potentially causing inconsistencies if the optimistic data differs from the server response. To more accurately simulate the API behavior and ensure the cache updates correctly, the mock should return the created user data.

Apply this diff to fix the mock response:

 nock(makeUrl('User', 'upsert'))
     .post(/.*/)
     .reply(200, () => {
         console.log('Mocking upsert response');
-        return { data: null };
+        data.push({ id: '1', name: 'foo' });
+        return { data: data[0] };
     });

Committable suggestion was skipped due to low confidence.

packages/plugins/tanstack-query/tests/react-hooks-v5.test.tsx (2)

535-585: ⚠️ Potential issue

Update mocked data to reflect mutation in upsert operation

In the test 'optimistic upsert - update', the mocked response for the upsert operation (lines 559-564) does not modify the data. Since the update operation should change the user's name to 'bar', the mocked response should reflect this change to accurately test the optimistic update behavior.

Modify the mocked response to update the user's name:

nock(makeUrl('User', 'upsert'))
    .post(/.*/)
    .reply(200, () => {
        console.log('Mocking upsert response');
+       data.name = 'bar';
        return { data };
    });
📝 Committable suggestion


    it('optimistic upsert - update', async () => {
        const { queryClient, wrapper } = createWrapper();

        const queryArgs = { where: { id: '1' } };
        const data = { id: '1', name: 'foo' };

        nock(makeUrl('User', 'findUnique', queryArgs))
            .get(/.*/)
            .reply(200, () => {
                console.log('Querying data:', JSON.stringify(data));
                return { data };
            })
            .persist();

        const { result } = renderHook(
            () => useModelQuery('User', makeUrl('User', 'findUnique'), queryArgs, { optimisticUpdate: true }),
            {
                wrapper,
            }
        );
        await waitFor(() => {
            expect(result.current.data).toMatchObject({ name: 'foo' });
        });

        nock(makeUrl('User', 'upsert'))
            .post(/.*/)
            .reply(200, () => {
                console.log('Mocking upsert response');
                data.name = 'bar';
                return { data };
            });

        const { result: mutationResult } = renderHook(
            () =>
                useModelMutation('User', 'POST', makeUrl('User', 'upsert'), modelMeta, {
                    optimisticUpdate: true,
                    invalidateQueries: false,
                }),
            {
                wrapper,
            }
        );

        act(() => mutationResult.current.mutate({ ...queryArgs, update: { name: 'bar' }, create: { name: 'zee' } }));

        await waitFor(() => {
            const cacheData = queryClient.getQueryData(
                getQueryKey('User', 'findUnique', queryArgs, { infinite: false, optimisticUpdate: true })
            );
            expect(cacheData).toMatchObject({ name: 'bar', $optimistic: true });
        });
    });

475-533: ⚠️ Potential issue

Ensure mocked upsert response returns expected data

In the test case 'optimistic upsert - create', the mocked response for the upsert operation (lines 498-504) returns { data: null }. Typically, an upsert operation should return the created or updated data. Returning null may not accurately simulate the server behavior and could affect the test's validity.

Consider updating the mocked response to return the expected data:

nock(makeUrl('User', 'upsert'))
    .post(/.*/)
    .reply(200, () => {
        console.log('Mocking upsert response');
-       return { data: null };
+       data.push({ id: '1', name: 'foo' });
+       return { data: data[0] };
    });
📝 Committable suggestion


    it('optimistic upsert - create', async () => {
        const { queryClient, wrapper } = createWrapper();

        const data: any[] = [];

        nock(makeUrl('User', 'findMany'))
            .get(/.*/)
            .reply(200, () => {
                console.log('Querying data:', JSON.stringify(data));
                return { data };
            })
            .persist();

        const { result } = renderHook(
            () => useModelQuery('User', makeUrl('User', 'findMany'), undefined, { optimisticUpdate: true }),
            {
                wrapper,
            }
        );
        await waitFor(() => {
            expect(result.current.data).toHaveLength(0);
        });

        nock(makeUrl('User', 'upsert'))
            .post(/.*/)
            .reply(200, () => {
                console.log('Mocking upsert response');
                data.push({ id: '1', name: 'foo' });
                return { data: data[0] };
            });

        const { result: mutationResult } = renderHook(
            () =>
                useModelMutation('User', 'POST', makeUrl('User', 'upsert'), modelMeta, {
                    optimisticUpdate: true,
                    invalidateQueries: false,
                }),
            {
                wrapper,
            }
        );

        act(() =>
            mutationResult.current.mutate({
                where: { id: '1' },
                create: { id: '1', name: 'foo' },
                update: { name: 'bar' },
            })
        );

        await waitFor(() => {
            const cacheData: any = queryClient.getQueryData(
                getQueryKey('User', 'findMany', undefined, { infinite: false, optimisticUpdate: true })
            );
            expect(cacheData).toHaveLength(1);
            expect(cacheData[0].$optimistic).toBe(true);
            expect(cacheData[0].id).toBeTruthy();
            expect(cacheData[0].name).toBe('foo');
        });
    });
packages/schema/tests/schema/validation/attribute-validation.test.ts (1)

819-822: 🛠️ Refactor suggestion

Potential Naming Conflict with User Model

Defining a User model within test cases may lead to naming collisions or unintended interactions if other tests or modules also define a User model. Consider using a more test-specific model name to prevent potential conflicts.

packages/runtime/src/enhancements/node/delegate.ts (1)

369-370: 🛠️ Refactor suggestion

Address the TODO: Resolve Redundant Policy Rule Injections

The TODO comment indicates that both the delegate base's and the concrete model's policy rules are being injected, which is redundant. Refactoring the code to prevent this redundancy could improve performance and maintainability.

Would you like assistance in refactoring this code to eliminate the redundancy or should I open a GitHub issue to track this task?

packages/server/tests/api/rest.test.ts (1)

1936-1937: ⚠️ Potential issue

Remove attributes from the relationship update request

When updating relationships using the PATCH method to /post/1/relationships/likes, the JSON API specification expects the data field to contain only resource identifiers (type and id). Including attributes is not appropriate in this context and may cause unexpected behavior.

Apply this diff to correct the request body:

 data: [
     {
         type: 'postLike',
         id: `1${idDivider}user1`,
-        attributes: { superLike: true }
     }
 ]
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

                            data: [{ type: 'postLike', id: `1${idDivider}user1` }],
                        },
packages/server/src/api/rest/index.ts (6)

35-36: 🛠️ Refactor suggestion

Consider making idDivider configurable

The constant idDivider is set to '_'. If flexibility is needed for different ID formats, consider making this value configurable through the options.
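A minimal sketch of what such an option could look like; the `RestApiOptions` shape and the option name `idDivider` are assumptions for illustration, not the actual handler API:

```typescript
// Hypothetical sketch: accept a custom divider through the handler options,
// falling back to the current '_' default. Names are illustrative only.
interface RestApiOptions {
    endpoint: string;
    idDivider?: string;
}

function resolveIdDivider(options: RestApiOptions): string {
    // nullish coalescing keeps '_' as the behavior for existing callers
    return options.idDivider ?? '_';
}

const defaultDivider = resolveIdDivider({ endpoint: '/api' });
const customDivider = resolveIdDivider({ endpoint: '/api', idDivider: ':' });
```

Defaulting with `??` would keep `'_'` as the behavior for all existing callers while allowing opt-in customization.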


1197-1210: ⚠️ Potential issue

Add validation for composite IDs in makeIdFilter

When handling composite IDs, ensure that resourceId splits into the correct number of parts matching idFields. Consider adding validation and error handling for mismatches to prevent runtime errors.


763-765: ⚠️ Potential issue

Potential issue with handling composite IDs in single relationship

When connecting a single relationship (lines 763-765), ensure that data.data.id correctly represents composite IDs.

Proposed fix:

connect: {
-   [this.makeIdKey(relationInfo.idFields)]: data.data.id,
+   ...this.makeIdFilter(relationInfo.idFields, data.data.id),
},
📝 Committable suggestion


                        connect: {
                            ...this.makeIdFilter(relationInfo.idFields, data.data.id),
                        },

936-938: ⚠️ Potential issue

Potential issue with handling composite IDs in single relationship update

Ensure that data.data.id correctly represents composite IDs when updating a single relationship (lines 936-938).

Proposed fix:

set: {
-   [this.makeIdKey(relationInfo.idFields)]: data.data.id,
+   ...this.makeIdFilter(relationInfo.idFields, data.data.id),
},
📝 Committable suggestion


                        set: {
                            ...this.makeIdFilter(relationInfo.idFields, data.data.id),
                        },

755-755: ⚠️ Potential issue

Potential issue with handling composite IDs in relationship connections

In processCreate, when connecting relationships (line 755), item.id is used as the identifier. For models with composite primary keys, item.id may not correctly represent the composite key. Consider updating the code to handle composite IDs appropriately.

Proposed fix:

-                                [this.makeIdKey(relationInfo.idFields)]: item.id,
+                                ...this.makeIdFilter(relationInfo.idFields, item.id),
📝 Committable suggestion


                            ...this.makeIdFilter(relationInfo.idFields, item.id),

928-928: ⚠️ Potential issue

Potential issue with handling composite IDs in relationship updates

When setting relationships in processUpdate (line 928), ensure that item.id correctly represents composite IDs.

Proposed fix:

-                                [this.makeIdKey(relationInfo.idFields)]: item.id,
+                                ...this.makeIdFilter(relationInfo.idFields, item.id),
📝 Committable suggestion


                            ...this.makeIdFilter(relationInfo.idFields, item.id),

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (2)
packages/server/src/hono/handler.ts (2)

18-21: LGTM: Function initialization looks good. Consider adding a type annotation.

The createHonoHandler function is well-structured, loading assets and setting up the request handler correctly. It appropriately uses a default RPCApiHandler if no custom handler is provided.

Consider adding a return type annotation to the createHonoHandler function for improved clarity:

-export function createHonoHandler(options: HonoOptions) {
+export function createHonoHandler(options: HonoOptions): MiddlewareHandler {

23-35: LGTM: Initial request processing looks good. Consider enhancing error messages.

The code correctly retrieves and checks for the Prisma instance, processes URL and query parameters, and validates the path parameter. These are all crucial steps for ensuring valid requests.

Consider enhancing the error messages to provide more specific information:

-            return ctx.json({ message: 'unable to get prisma from request context' }, 500);
+            return ctx.json({ message: 'Unable to get Prisma instance from request context' }, 500);

-            return ctx.json({ message: 'missing path parameter' }, 400);
+            return ctx.json({ message: 'Missing or invalid path parameter' }, 400);
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between d83b7ee and c0ab830.

📒 Files selected for processing (3)
  • packages/server/src/hono/handler.ts (1 hunks)
  • packages/server/src/nuxt/handler.ts (1 hunks)
  • packages/server/src/sveltekit/handler.ts (1 hunks)
🧰 Additional context used
🔇 Additional comments (2)
packages/server/src/hono/handler.ts (1)

1-16: LGTM: Import statements and interface definition are well-structured.

The import statements are concise and relevant to the functionality being implemented. The HonoOptions interface is well-defined, extending AdapterBaseOptions and including the essential getPrisma method for retrieving the Prisma instance.

packages/server/src/sveltekit/handler.ts (1)

65-65: LGTM: Logger addition enhances request handling capabilities.

The addition of the logger property to the requestHandler function call is a positive change. It allows for better logging during request handling, which can be crucial for debugging and monitoring in production environments.

To ensure this change is consistent with the rest of the codebase, let's verify the HandlerOptions interface:

packages/server/src/nuxt/handler.ts (resolved)
packages/server/src/hono/handler.ts (resolved)
packages/server/src/hono/handler.ts (resolved)
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (12)
packages/server/src/api/rest/index.ts (6)

59-59: Approved: Type updates for composite ID support.

The changes to RelationshipInfo and ModelInfo types, replacing idField: string with idFields: FieldInfo[], are crucial for supporting composite IDs. This is a significant improvement in flexibility.

However, this is a breaking change that will require updates in code using these types.

Consider adding a comment or updating the documentation to explain the new structure and its implications for code using these types.

Also applies to: 65-65
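As a hedged illustration of the breaking change, the before/after shape of the type might look like this (fields other than `idFields` are assumed for illustration and may not match the real definition):

```typescript
// Sketch of the type change described above.
type FieldInfo = { name: string; type: string };

// before: a single scalar id field
// type RelationshipInfoOld = { type: string; idField: string; isCollection: boolean };

// after: an ordered list of id fields, supporting compound primary keys
type RelationshipInfo = { type: string; idFields: FieldInfo[]; isCollection: boolean };

const likesRel: RelationshipInfo = {
    type: 'postLike',
    idFields: [
        { name: 'postId', type: 'Int' },
        { name: 'userId', type: 'String' },
    ],
    isCollection: true,
};
```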


726-726: Approved: Added modelMeta parameter to processCreate.

The addition of the modelMeta: ModelMeta parameter to the processCreate method is a good improvement. It allows the method to access model metadata, which is likely necessary for handling composite IDs and other model-specific operations.

As this is a breaking change, consider:

  1. Updating the method's documentation to explain the new parameter and its usage.
  2. Reviewing and updating any code that calls this method to ensure it passes the required modelMeta argument.

896-896: Approved: Added modelMeta parameter to processUpdate.

The addition of the modelMeta: ModelMeta parameter to the processUpdate method is a good improvement, consistent with the changes made to processCreate. It allows the method to access model metadata, which is necessary for handling composite IDs and other model-specific operations.

As this is a breaking change:

  1. Update the method's documentation to explain the new parameter and its usage.
  2. Review and update any code that calls this method to ensure it passes the required modelMeta argument.
  3. Consider adding a brief comment explaining why this parameter is necessary, to help future maintainers understand its importance.

1209-1222: LGTM: Improved makeIdFilter for composite ID support.

The changes to the makeIdFilter method are well-implemented and crucial for supporting both single and composite IDs:

  1. The method now correctly handles both single and multiple ID fields.
  2. It uses the idDivider constant for splitting composite IDs, ensuring consistency.
  3. The implementation is flexible and should work well with various ID structures.

Consider a small optimization: For the single ID case, you could directly return the object without creating an intermediate variable. For example:

if (idFields.length === 1) {
  return { [idFields[0].name]: this.coerce(idFields[0].type, resourceId) };
}
// ... rest of the method

This minor change could slightly improve readability and performance.


1144-1167: LGTM: New injectCompoundId method for handling composite IDs.

The addition of the injectCompoundId method is a crucial improvement for handling composite IDs:

  1. It recursively traverses the entity structure, ensuring all levels of nested relationships are properly handled.
  2. The method creates synthetic ID fields for models with compound IDs, which is essential for consistent ID handling throughout the API.
  3. The implementation is thorough and correctly handles both single and composite ID scenarios.

Consider a small optimization: You could use early returns to simplify the nested if statements. For example:

private injectCompoundId(model: string, items: unknown) {
    const typeInfo = this.typeMap[lowerCaseFirst(model)];
    if (!typeInfo) {
        return;
    }

    enumerate(items).forEach((item: any) => {
        if (!item) {
            return;
        }

        if (typeInfo.idFields.length > 1) {
            item[this.makeIdKey(typeInfo.idFields)] = this.makeCompoundId(typeInfo.idFields, item);
        }

        for (const [key, value] of Object.entries(item)) {
            if (typeInfo.relationships[key]) {
                this.injectCompoundId(typeInfo.relationships[key].type, value);
            }
        }
    });
}

This structure can make the code slightly more readable and reduce nesting.


Line range hint 1-1728: Overall: Excellent implementation of composite ID support.

The changes made to this file significantly enhance the API's capability to handle composite IDs. Key improvements include:

  1. Consistent use of idFields instead of single idField throughout the file.
  2. New utility methods like makeIdKey, makeCompoundId, and injectCompoundId for handling composite IDs.
  3. Updates to core methods like processCreate, processUpdate, makeIdFilter, and makeIdSelect to support composite IDs.
  4. Modifications to relationship handling and filtering to accommodate composite IDs.

These changes are well-implemented, maintaining consistency and readability throughout the file. The new functionality is integrated seamlessly with the existing code structure.

Consider the following to further improve the codebase:

  1. Performance: With the added complexity of composite IDs, consider profiling the API performance, especially for operations involving deep nested relationships or large datasets.
  2. Documentation: Ensure that API documentation is updated to reflect the new support for composite IDs, including any changes to request/response formats.
  3. Testing: Develop comprehensive test cases that cover various scenarios with composite IDs, including edge cases and complex nested structures.
  4. Refactoring: As the file has grown in complexity, consider breaking it down into smaller, more focused modules to improve maintainability.

These changes represent a significant enhancement to the API's flexibility and should greatly improve its ability to handle complex data models.

packages/server/tests/api/rest.test.ts (6)

306-334: New test case added for fetching a related resource with a compound ID

This new test case is a valuable addition to the test suite. It verifies the functionality of fetching a related resource (likes) for a post using a compound ID. The test covers the creation of necessary data, making the API request, and asserting the expected response.

A few observations and suggestions:

  1. The test correctly sets up the required data (user, post, and postLike) before making the API request.
  2. The assertion checks for the correct status code and the structure of the response body.
  3. The use of the idDivider in constructing the compound ID is a good practice for maintaining consistency.

Consider adding an additional assertion to verify the links.related field in the response, as it's an important part of the JSON:API specification for relationships.
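A sketch of what that extra check could look like, written against a mocked response body without a test framework; the link URLs are assumptions about the handler's output, not taken from its implementation:

```typescript
// Mocked response body standing in for `r.body`; the `links` values are
// hypothetical examples of what a JSON:API handler might emit.
const idDivider = '_';

const body = {
    links: {
        self: 'http://localhost/api/post/1/relationships/likes',
        related: 'http://localhost/api/post/1/likes',
    },
    data: [{ type: 'postLike', id: `1${idDivider}user1` }],
};

// the assertion the review suggests, expressed as a plain check
function assertRelatedLink(b: typeof body, suffix: string): void {
    if (typeof b.links?.related !== 'string' || !b.links.related.endsWith(suffix)) {
        throw new Error(`expected links.related to end with "${suffix}"`);
    }
}

assertRelatedLink(body, '/post/1/likes');
```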


1675-1717: New test cases added for creating relations with compound IDs

These new test cases are important additions to ensure the correct handling of compound IDs in the API. They cover two scenarios: creating a relation with a compound ID and creating a single entity with a compound ID.

Observations and suggestions:

  1. Both test cases correctly set up the required data (user and post) before making the API requests.
  2. The tests verify that the API can handle compound IDs in the request payload.
  3. The assertions check for the correct status code (201 Created) for successful creation.

Consider adding the following improvements to these test cases:

  1. Add assertions to verify the structure and content of the response body, not just the status code.
  2. After creating the entities, make a GET request to ensure the data was persisted correctly with the compound ID.
  3. Consider testing edge cases, such as attempting to create a relation with an invalid compound ID format.

Example for point 1:

expect(r.body).toMatchObject({
  data: {
    type: 'postLike',
    id: `1${idDivider}user1`,
    attributes: { userId: 'user1', postId: 1, superLike: false },
  },
});

1856-1875: New test case added for updating an item with a compound ID

This test case is a crucial addition to ensure that the API can correctly handle updates to entities with compound IDs.

Observations and suggestions:

  1. The test correctly sets up the required data (user, post, and postLike) before making the update request.
  2. It verifies that the API can handle a PUT request with a compound ID in the URL.
  3. The test checks for the correct status code (200 OK) for a successful update.

Consider enhancing this test case with the following improvements:

  1. Add assertions to verify the structure and content of the response body, not just the status code.
  2. After updating the entity, make a GET request to ensure the data was updated correctly.
  3. Test updating different attributes of the entity to ensure all fields can be modified.

Example for point 1:

expect(r.body).toMatchObject({
  data: {
    type: 'postLike',
    id: `1${idDivider}user1`,
    attributes: { userId: 'user1', postId: 1, superLike: true },
  },
});

Example for point 2:

const getResponse = await handler({
  method: 'get',
  path: `/postLike/1${idDivider}user1`,
  prisma,
});

expect(getResponse.status).toBe(200);
expect(getResponse.body.data.attributes.superLike).toBe(true);

1960-1976: New test case added for updating a collection of relations with compound IDs

This test case is an important addition to verify the API's ability to handle updates to collections of relations that use compound IDs.

Observations and suggestions:

  1. The test correctly sets up the required data (user, post, and postLike) before making the update request.
  2. It verifies that the API can handle a PATCH request to update a collection of relations.
  3. The test checks for the correct status code (200 OK) for a successful update.

Consider enhancing this test case with the following improvements:

  1. Add assertions to verify the structure and content of the response body, not just the status code.
  2. After updating the relations, make a GET request to ensure the data was updated correctly.
  3. Test updating multiple relations in a single request to ensure the API can handle batch updates.

Example for point 1:

expect(r.body).toMatchObject({
  data: [
    {
      type: 'postLike',
      id: `1${idDivider}user1`,
      attributes: { superLike: true },
    },
  ],
});

Example for point 2:

const getResponse = await handler({
  method: 'get',
  path: '/post/1/likes',
  prisma,
});

expect(getResponse.status).toBe(200);
expect(getResponse.body.data[0].attributes.superLike).toBe(true);

2056-2069: New test case added for deleting an item with a compound ID

This test case is a crucial addition to ensure that the API can correctly handle deletion of entities with compound IDs.

Observations and suggestions:

  1. The test correctly sets up the required data (user, post, and postLike) before making the delete request.
  2. It verifies that the API can handle a DELETE request with a compound ID in the URL.
  3. The test checks for the correct status code (204 No Content) for a successful deletion.

Consider enhancing this test case with the following improvements:

  1. After deleting the entity, make a GET request to ensure the data was actually deleted and returns a 404 Not Found status.
  2. Test deleting an entity with a non-existent compound ID to ensure proper error handling.

Example for point 1:

const getResponse = await handler({
  method: 'get',
  path: `/postLike/1${idDivider}user1`,
  prisma,
});

expect(getResponse.status).toBe(404);

Example for point 2:

const deleteNonExistentResponse = await handler({
  method: 'delete',
  path: `/postLike/999${idDivider}nonexistent`,
  prisma,
});

expect(deleteNonExistentResponse.status).toBe(404);

Line range hint 1-2568: Overall improvement in test coverage for compound ID handling

The changes made to this test file significantly enhance the test coverage for API operations involving compound IDs. These additions are crucial for ensuring the robustness and correctness of the API when dealing with more complex data structures.

Key improvements:

  1. New test cases for creating, updating, and deleting entities with compound IDs.
  2. Tests for fetching related resources using compound IDs.
  3. Verification of API behavior for collections of relations with compound IDs.

These changes will help catch potential issues related to compound ID handling early in the development process.

For future improvements, consider:

  1. Adding more edge cases and error scenarios for compound ID handling.
  2. Testing pagination and filtering with compound IDs.
  3. Ensuring consistent error handling across all compound ID operations.
  4. Adding performance tests for operations involving large numbers of compound ID entities.

These additional tests would further strengthen the reliability and performance of the API when working with compound IDs.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between c0ab830 and 4fc4cf7.

📒 Files selected for processing (2)
  • packages/server/src/api/rest/index.ts (32 hunks)
  • packages/server/tests/api/rest.test.ts (11 hunks)
🧰 Additional context used
🔇 Additional comments (7)
packages/server/src/api/rest/index.ts (7)

8-8: LGTM: New import and constant for handling composite IDs.

The addition of the clone import and the idDivider constant are good improvements:

  1. clone will likely be used for creating deep copies, promoting immutability.
  2. idDivider provides a consistent way to handle composite IDs throughout the code.

These changes align well with the file's overall modifications for supporting composite IDs.

Also applies to: 36-37


756-756: LGTM: Improved ID handling in relationship processing.

The changes in the processCreate method for handling relationships are well-implemented:

  1. Using this.makeIdKey(relationInfo.idFields) instead of a hardcoded 'id' supports composite IDs.
  2. This approach is more flexible and consistent with the overall changes for composite ID support.

These modifications enhance the method's ability to handle various ID structures across different models.

Also applies to: 765-766


1225-1230: LGTM: Updated makeIdSelect for composite ID support.

The changes to the makeIdSelect method are well-implemented:

  1. The method now correctly handles multiple ID fields, supporting composite IDs.
  2. The implementation is concise and uses a functional approach with reduce, which is efficient and readable.
  3. The error handling for cases with no ID fields is maintained, ensuring robustness.

These changes align well with the overall modifications for supporting composite IDs throughout the API.


1232-1237: LGTM: New utility methods for composite ID handling.

The addition of makeIdKey and makeCompoundId methods is a good improvement:

  1. makeIdKey provides a consistent way to generate a key from multiple ID fields.
  2. makeCompoundId creates a compound ID from multiple fields in an item.
  3. Both methods use the idDivider constant, ensuring consistency across the codebase.

These utility methods will help standardize composite ID handling throughout the API, improving maintainability and reducing the chance of errors.
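The two utilities can be illustrated with a standalone sketch; the real methods live on the handler class, so this is not the actual implementation:

```typescript
// Illustrative re-implementation of the two utilities described above.
type FieldInfo = { name: string };
const idDivider = '_';

// joins the names of the id fields into a synthetic key
function makeIdKey(idFields: FieldInfo[]): string {
    return idFields.map((f) => f.name).join(idDivider);
}

// joins the values of the id fields on an item into a compound id
function makeCompoundId(idFields: FieldInfo[], item: Record<string, unknown>): string {
    return idFields.map((f) => item[f.name]).join(idDivider);
}

const idFields = [{ name: 'postId' }, { name: 'userId' }];
const key = makeIdKey(idFields); // 'postId_userId'
const id = makeCompoundId(idFields, { postId: 1, userId: 'user1' }); // '1_user1'
```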


1246-1246: LGTM: Updated includeRelationshipIds for composite ID support.

The change in the includeRelationshipIds method is well-implemented:

  1. Using this.makeIdSelect(relationInfo.idFields) instead of { id: true } allows for proper handling of composite IDs in relationship selections.
  2. This change is consistent with other modifications in the file for supporting composite IDs.
  3. It ensures that all ID fields are correctly included when selecting relationship data.

This update enhances the flexibility of the API to work with various ID structures across different models and their relationships.


1473-1476: LGTM: Updated buildTypeMap for composite ID support in relationships.

The changes in the buildTypeMap method are well-implemented:

  1. The method now correctly handles multiple ID fields for relationships, supporting composite IDs.
  2. It uses a reduce function to create an object with all ID fields set to the sorting direction, which is an efficient and flexible approach.
  3. This change is consistent with other modifications in the file for supporting composite IDs.

These updates ensure that the type map correctly represents models and relationships with composite IDs, which is crucial for proper API functionality.
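The reduce described above can be sketched in isolation (a trimmed `FieldInfo` is assumed for illustration):

```typescript
// Turning a list of id fields into a sort object with every field set to the
// same direction, as the review describes.
type FieldInfo = { name: string };

function makeIdSort(idFields: FieldInfo[], dir: 'asc' | 'desc'): Record<string, 'asc' | 'desc'> {
    return idFields.reduce(
        (acc, f) => ({ ...acc, [f.name]: dir }),
        {} as Record<string, 'asc' | 'desc'>
    );
}

const sort = makeIdSort([{ name: 'postId' }, { name: 'userId' }], 'asc');
// sort → { postId: 'asc', userId: 'asc' }
```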


1560-1561: LGTM: Updated makeFilterValue for composite ID support in filters.

The changes in the makeFilterValue method are well-implemented:

  1. The method now uses this.makeIdFilter for handling relation filters, which correctly supports composite IDs.
  2. It handles both single and multiple values for to-many relations, using the appropriate filter structure.
  3. For to-one relations, it correctly uses the is operator with the composite ID filter.

These updates ensure that filtering operations work correctly with composite IDs, maintaining consistency with the rest of the API's composite ID support.

Also applies to: 1564-1564
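A hedged sketch of the two filter shapes; the Prisma operator names `some` and `is` come from the review, while the helper functions themselves are illustrative, not the handler's code:

```typescript
// Standalone sketch of the to-many vs to-one relation filter shapes.
type FieldInfo = { name: string };
const idDivider = '_';

function makeIdFilter(idFields: FieldInfo[], resourceId: string): Record<string, string> {
    const parts = resourceId.split(idDivider);
    const filter: Record<string, string> = {};
    idFields.forEach((f, i) => {
        filter[f.name] = parts[i];
    });
    return filter;
}

function makeRelationFilter(
    idFields: FieldInfo[],
    id: string,
    toMany: boolean
): Record<string, Record<string, string>> {
    return toMany
        ? { some: makeIdFilter(idFields, id) } // to-many: match any related row
        : { is: makeIdFilter(idFields, id) }; // to-one: match the single related row
}

const manyFilter = makeRelationFilter([{ name: 'postId' }, { name: 'userId' }], '1_user1', true);
// manyFilter → { some: { postId: '1', userId: 'user1' } }
```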

@ymc9 ymc9 merged commit 4e80c29 into main Oct 16, 2024
15 of 16 checks passed