feat : configure multidatasource using qualifier and defaultCandidate as false #1554
Conversation
Walkthrough

This pull request introduces several changes across multiple files related to a multi-database setup. Key modifications include updating the PostgreSQL image version in the Docker configuration and refactoring the datasource configuration classes.

Changes
Possibly related PRs
Suggested labels
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (11)
jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/configuration/SwaggerConfig.java (1)
Lines 8-12: LGTM! Good improvements to configuration class design.

The changes improve the configuration class in two ways:
- Setting `proxyBeanMethods = false` optimizes Spring bean creation, since this configuration doesn't require inter-bean method calls.
- Package-private visibility properly encapsulates the configuration class, as it only needs to be accessed within the configuration package.
Consider applying similar optimizations to other configuration classes that don't require bean method interception or public visibility.
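For readers unfamiliar with the flag, a minimal sketch of the pattern being praised is shown below; the OpenAPI bean body is a hypothetical placeholder for illustration, not the actual contents of SwaggerConfig:

```java
import io.swagger.v3.oas.models.OpenAPI;
import io.swagger.v3.oas.models.info.Info;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// With proxyBeanMethods = true (the default), Spring subclasses the configuration
// class with CGLIB so that direct calls between @Bean methods still return the
// same singleton. When no @Bean method calls another, that proxy is pure startup
// overhead, so proxyBeanMethods = false skips it.
@Configuration(proxyBeanMethods = false)
class SwaggerConfig {

    @Bean
    OpenAPI openApi() {
        // Hypothetical bean body for illustration only.
        return new OpenAPI().info(new Info().title("multipledatasources").version("v1"));
    }
}
```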
jpa/boot-data-multipledatasources/src/main/resources/db/migration/V1__Init.sql (2)
Lines 5-7: Consider adding an index on the member_id column.

Since `member_id` appears to be a lookup column connecting to a member table in another datasource, an index would improve query performance:

```diff
 CREATE TABLE card_holder (
     id BIGINT AUTO_INCREMENT PRIMARY KEY,
     member_id VARCHAR(255) NOT NULL,
-    card_number VARCHAR(255) NOT NULL
+    card_number VARCHAR(255) NOT NULL,
+    INDEX idx_member_id (member_id)
 );
```
Line range hint 1-7: Document the multi-datasource relationship.

Since this table is part of a multi-datasource setup with an implicit relationship to a member table, consider adding a comment explaining this design decision and the cross-datasource relationship:

```diff
 -- Init script
+-- This table is part of the card_holder datasource and maintains a logical
+-- relationship with the member table in the member datasource through member_id.
+-- No foreign key constraint is defined due to the multi-datasource architecture.
 -- DDL
 CREATE TABLE card_holder (
```

jpa/boot-data-multipledatasources/docker-compose.yml (2)
Line range hint 24-24: Critical: Invalid MySQL version specified.

The MySQL version `9.1` appears to be incorrect, as the latest MySQL version is 8.x. This will cause container startup failures.

```diff
-    image: mysql:9.1
+    image: mysql:8.2
```
Line range hint 1-46: Consider enhancing container configuration for production readiness.

Several improvements could make the configuration more robust:
- Add healthchecks for database containers
- Define resource limits
- Use non-root users where possible
- Add restart policies
Example enhancement for the postgres service:
```diff
   postgres:
     image: postgres:17.2-alpine
+    user: postgres
+    healthcheck:
+      test: ["CMD-SHELL", "pg_isready -U postgres"]
+      interval: 10s
+      timeout: 5s
+      retries: 5
+    restart: unless-stopped
+    deploy:
+      resources:
+        limits:
+          memory: 1G
```

jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/bootstrap/DataInitializer.java (3)
Line 14: Consider making the class public for better testability.

While package-private visibility works, making the class public would improve testability and allow for easier extension if needed.

```diff
-class DataInitializer {
+public class DataInitializer {
```
Line range hint 25-46: Add transaction management and extract test data.

The data initialization method could benefit from:
- Transaction management to ensure atomic operations
- Externalized test data
- Error handling for database operations
Suggested implementation:
```diff
+    @Transactional
     @EventListener(ApplicationReadyEvent.class)
     void loadInitialData() {
+        try {
             this.cardHolderRepository.deleteAll();
             this.memberRepository.deleteAll();
             log.info("Data Boot strapping started");
-            String memberId = "1";
+            loadTestData().forEach(this::createMemberWithCard);
             log.info("Data Boot strapping completed");
+        } catch (Exception e) {
+            log.error("Failed to initialize data", e);
+            throw new RuntimeException("Data initialization failed", e);
+        }
     }

+    private List<TestData> loadTestData() {
+        return List.of(
+            new TestData("1", "raja", "1234-5678-9012-3456")
+        );
+    }
+
+    private void createMemberWithCard(TestData data) {
         Member member = new Member();
-        member.setMemberId(memberId);
-        member.setName("raja");
+        member.setMemberId(data.memberId());
+        member.setName(data.name());
         CardHolder cardHolder = new CardHolder();
-        cardHolder.setMemberId(memberId);
-        cardHolder.setCardNumber("1234-5678-9012-3456");
+        cardHolder.setMemberId(data.memberId());
+        cardHolder.setCardNumber(data.cardNumber());
```
Missing transaction management for cross-database operations
The code review reveals important concerns about transaction handling:
- The CardHolder datasource has its own transaction manager (`cardHolderTransactionManager`)
- The Member datasource uses the default transaction manager
- The `loadInitialData()` method performs operations across both databases without any transaction management
- There's no `ChainedTransactionManager` or JTA configuration to handle distributed transactions

Key issues that need attention:
- Add a `@Transactional` annotation to the `loadInitialData()` method
- Configure a `ChainedTransactionManager` to handle transactions across both datasources
- Consider implementing compensation logic in case one of the operations fails (see the sketch after this list)
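As a rough illustration of how the qualified transaction managers come into play (a sketch, not code from this PR), each write can at least run under the transaction manager that owns its datasource; the service class and method names are hypothetical, and this alone still does not make the two databases atomic with each other:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical sketch: each write is scoped to the transaction manager that owns
// its datasource. Atomicity across both databases would still require JTA/XA or
// compensation logic.
@Service
class InitialDataWriter {

    private final MemberRepository memberRepository;
    private final CardHolderRepository cardHolderRepository;

    InitialDataWriter(MemberRepository memberRepository, CardHolderRepository cardHolderRepository) {
        this.memberRepository = memberRepository;
        this.cardHolderRepository = cardHolderRepository;
    }

    @Transactional // resolves to the default (member) transaction manager
    void saveMember(Member member) {
        memberRepository.save(member);
    }

    @Transactional("cardHolderTransactionManager") // resolves to the cardholder transaction manager
    void saveCardHolder(CardHolder cardHolder) {
        cardHolderRepository.save(cardHolder);
    }
}
```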
🔗 Analysis chain
Line range hint 25-46: Verify multi-datasource transaction handling.

Since this is a multi-datasource setup, we should verify that transactions are properly handled across both databases.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for transaction management configuration
echo "Searching for transaction management configuration..."
rg -l "TransactionManagement|ChainedTransactionManager|JtaTransactionManager"

# Look for potential transaction-related classes
echo "Checking for transaction-related configuration classes..."
ast-grep --pattern 'class $_ implements TransactionManagementConfigurer { $$$ }'

# Check for distributed transaction annotations
echo "Checking for distributed transaction annotations..."
rg "@Transactional.*propagation.*"
```

Length of output: 650
Script:
```bash
#!/bin/bash
# Let's check for any transaction configuration in the specific project
echo "Checking transaction configuration in the multiple datasources project..."
rg -l "TransactionManagement|ChainedTransactionManager|JtaTransactionManager" jpa/boot-data-multipledatasources/

# Look for datasource configuration
echo "Checking datasource configuration..."
rg -A 10 "@Configuration.*DataSource" jpa/boot-data-multipledatasources/

# Check for any transaction annotations in the DataInitializer
echo "Checking transaction handling in DataInitializer..."
rg -A 5 "@Transactional" jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/bootstrap/DataInitializer.java

# Look for repository configurations
echo "Checking repository configurations..."
fd . jpa/boot-data-multipledatasources/ -t f -e java | xargs rg "Repository|@EnableJpaRepositories" -l
```

Length of output: 1727
Script:
```bash
#!/bin/bash
# Let's examine the datasource configurations in detail
echo "Checking CardHolder datasource configuration..."
cat jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/configuration/CardHolderDataSourceConfiguration.java

echo "Checking Member datasource configuration..."
cat jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/configuration/MemberDataSourceConfiguration.java

# Let's also check the DataInitializer class implementation
echo "Checking DataInitializer implementation..."
cat jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/bootstrap/DataInitializer.java
```

Length of output: 5599
jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/configuration/CardHolderDataSourceConfiguration.java (3)
Lines 32-39: LGTM! Consider adding connection pool monitoring.

The HikariDataSource configuration is properly set up with correct qualifiers and builder pattern usage.
Consider exposing Hikari metrics for monitoring connection pool health:
```diff
 @ConfigurationProperties("app.datasource.cardholder.hikari")
 DataSource cardholderDataSource(@Qualifier("mysql") DataSourceProperties dataSourceProperties) {
-    return dataSourceProperties
+    HikariDataSource dataSource = dataSourceProperties
             .initializeDataSourceBuilder()
             .type(HikariDataSource.class)
             .build();
+    dataSource.setMetricsTrackerFactory(new PrometheusMetricsTrackerFactory());
+    return dataSource;
 }
```
Lines 41-48: LGTM! Consider adding custom JPA properties.

The EntityManagerFactory configuration is well-structured with proper qualifiers and explicit persistence unit naming.
Consider adding custom JPA properties for better performance tuning:
```diff
 return builder.dataSource(dataSource)
         .packages(CardHolder.class)
         .persistenceUnit("cardholder")
+        .properties(Map.of(
+                "hibernate.jdbc.batch_size", "50",
+                "hibernate.jdbc.fetch_size", "50",
+                "hibernate.query.plan_cache_max_size", "2048"))
         .build();
```
Lines 19-54: Well-structured multi-datasource configuration.

The overall architecture follows best practices for a multi-datasource setup:
- Clear separation of concerns
- Consistent qualifier usage
- Explicit configuration over convention
- Proper disabling of default candidates
Consider documenting the multi-datasource setup in README.md to help other developers understand:
- The purpose of each datasource
- The qualifier strategy
- Configuration property requirements
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (8)
- jpa/boot-data-multipledatasources/docker-compose.yml (1 hunks)
- jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/bootstrap/DataInitializer.java (1 hunks)
- jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/configuration/BaseDataSourceConfiguration.java (0 hunks)
- jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/configuration/CardHolderDataSourceConfiguration.java (1 hunks)
- jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/configuration/MemberDataSourceConfiguration.java (1 hunks)
- jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/configuration/SwaggerConfig.java (1 hunks)
- jpa/boot-data-multipledatasources/src/main/resources/db/migration/V1__Init.sql (1 hunks)
- jpa/boot-data-multipledatasources/src/test/java/com/example/multipledatasources/common/ContainersConfiguration.java (2 hunks)
💤 Files with no reviewable changes (1)
- jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/configuration/BaseDataSourceConfiguration.java
🔇 Additional comments (9)
jpa/boot-data-multipledatasources/src/main/resources/db/migration/V1__Init.sql (1)
Line 5: LGTM! Using BIGINT for ID is a good practice.
Using BIGINT instead of INT for the primary key is a forward-thinking choice that prevents potential overflow issues as the table grows.
jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/configuration/MemberDataSourceConfiguration.java (1)
Lines 9-12: LGTM! Good practices implemented.

The configuration has been simplified effectively:
- Using `proxyBeanMethods = false` for performance optimization
- Package-private visibility for better encapsulation
- Clear and focused component scanning with `@EntityScan` and `@EnableJpaRepositories`
jpa/boot-data-multipledatasources/src/test/java/com/example/multipledatasources/common/ContainersConfiguration.java (2)
Line range hint 1-60: Overall implementation looks good!
The class is well-structured with:
- Clear documentation
- Proper container lifecycle management
- Good use of Spring Test infrastructure
- Clean separation of concerns
Line 32: Verify PostgreSQL version compatibility.
The PostgreSQL version update to 17.2-alpine seems reasonable, but let's ensure it aligns with the production environment version.
jpa/boot-data-multipledatasources/docker-compose.yml (1)
Line 4: LGTM: PostgreSQL version update.
The update to postgres:17.2-alpine is appropriate as it includes the latest security patches.
jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/bootstrap/DataInitializer.java (1)
Lines 19-22: LGTM: Constructor injection.

Good practice using constructor injection (via `@RequiredArgsConstructor`) instead of field injection.
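For context, the Lombok annotation is roughly equivalent to writing the constructor by hand; the sketch below mirrors the repository field names used elsewhere in this review and is not the file's verbatim content:

```java
import org.springframework.stereotype.Component;

// Roughly what @RequiredArgsConstructor generates: a single constructor covering
// every final field. With only one constructor, Spring injects through it
// without needing @Autowired.
@Component
class DataInitializer {

    private final CardHolderRepository cardHolderRepository;
    private final MemberRepository memberRepository;

    DataInitializer(CardHolderRepository cardHolderRepository, MemberRepository memberRepository) {
        this.cardHolderRepository = cardHolderRepository;
        this.memberRepository = memberRepository;
    }
}
```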
jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/configuration/CardHolderDataSourceConfiguration.java (3)
Lines 19-24: LGTM! Good optimization with `proxyBeanMethods = false`.

Setting `proxyBeanMethods = false` is a good optimization, as it allows Spring to skip creating CGLIB proxies when beans don't have interdependencies.
Lines 50-54: LGTM! Transaction manager is properly configured.
The transaction manager is correctly set up with appropriate qualifiers and properly bound to the EntityManagerFactory.
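The reviewed lines are not reproduced on this page, so purely as a point of reference, a qualified JPA transaction manager bound to a qualified EntityManagerFactory typically looks something like the sketch below; bean and qualifier names are assumptions taken from this review, and the defaultCandidate attribute on @Bean assumes Spring Framework 6.2+ / Boot 3.4+:

```java
import jakarta.persistence.EntityManagerFactory;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration(proxyBeanMethods = false)
class CardHolderTransactionManagerSketch {

    // Bound to the mysql-qualified EntityManagerFactory so that
    // @Transactional("cardHolderTransactionManager") (or an explicit @Qualifier)
    // picks it up, while the member side keeps the unqualified default.
    @Bean(defaultCandidate = false)
    @Qualifier("mysql")
    PlatformTransactionManager cardHolderTransactionManager(
            @Qualifier("mysql") EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }
}
```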
Lines 26-30: Verify consistent qualifier usage across datasource configurations.

The `@Qualifier("mysql")` annotation suggests this is part of a multi-datasource setup. Let's verify that other datasource configurations are properly qualified to avoid ambiguity.
✅ Verification successful
Let me gather more information about the other datasource configuration to ensure proper qualification.
Let me check for any other datasource-related beans and configurations in the project.
Based on the gathered information, I can now provide the final response:
Qualifier usage is appropriate and consistent
The @Qualifier("mysql")
annotation in CardHolderDataSourceConfiguration is properly implemented with consistent usage across all related beans (DataSourceProperties, DataSource, EntityManagerFactory, and TransactionManager). The MemberDataSourceConfiguration is correctly set up as the default configuration without qualifiers, which complements this setup.
- All mysql-qualified beans in CardHolderDataSourceConfiguration are marked with
defaultCandidate = false
- Qualifier annotations are consistently applied to both bean definitions and injection points
- The separation between default (Member) and qualified (CardHolder) datasource configurations is clear and well-structured
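To make the verified pattern concrete, a mysql-qualified, non-default bean definition looks roughly like the sketch below; the class name is hypothetical, the property prefix follows the one used elsewhere in this PR, and @Bean's defaultCandidate attribute assumes Spring Framework 6.2+ / Boot 3.4+:

```java
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
class QualifiedDataSourcePropertiesSketch {

    // defaultCandidate = false keeps this bean out of unqualified injection points,
    // so auto-configuration and the member datasource still resolve the plain
    // DataSourceProperties. Consumers of the MySQL side must ask for it explicitly
    // with @Qualifier("mysql").
    @Bean(defaultCandidate = false)
    @Qualifier("mysql")
    @ConfigurationProperties("app.datasource.cardholder")
    DataSourceProperties cardholderDataSourceProperties() {
        return new DataSourceProperties();
    }
}
```

The same trio of annotations then repeats on the DataSource, EntityManagerFactory, and TransactionManager beans, which is exactly the consistency the verification above confirms.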
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for other datasource configurations and their qualifiers
echo "Searching for datasource configurations and qualifiers..."
rg -l "DataSourceConfiguration|@Qualifier" --type java

# Search specifically for mysql qualifier usage
echo -e "\nChecking mysql qualifier usage..."
rg "@Qualifier\(\"mysql\"\)" --type java
```
Length of output: 3185
Script:
```bash
#!/bin/bash
# Check the content of MemberDataSourceConfiguration.java to see its qualifier
echo "Content of MemberDataSourceConfiguration.java:"
cat jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/configuration/MemberDataSourceConfiguration.java

# Search for any DataSourceProperties bean definitions
echo -e "\nSearching for all DataSourceProperties bean definitions:"
ast-grep --pattern 'DataSourceProperties $_() { $$ }'
```
Length of output: 1015
Script:
```bash
#!/bin/bash
# Search for all Bean definitions in both configuration files
echo "Searching for Bean definitions in CardHolderDataSourceConfiguration.java:"
rg "@Bean" -B 2 -A 5 jpa/boot-data-multipledatasources/src/main/java/com/example/multipledatasources/configuration/CardHolderDataSourceConfiguration.java

# Search for any other Qualifier annotations in the project
echo -e "\nSearching for all Qualifier annotations in the project:"
rg "@Qualifier\(" --type java

# Search for DataSource bean definitions
echo -e "\nSearching for DataSource bean definitions:"
ast-grep --pattern '@Bean $$ DataSource $_($$$) { $$ }'
```
Length of output: 8398
```java
@Configuration(proxyBeanMethods = false)
@EntityScan(basePackageClasses = Member.class)
@EnableJpaRepositories(basePackageClasses = MemberRepository.class)
class MemberDataSourceConfiguration {}
```
Missing qualifier and defaultCandidate configurations.
Based on the PR objectives, this configuration should include a qualifier and set defaultCandidate to false. Consider adding:
- A `@Qualifier` annotation to distinguish this datasource configuration
- `defaultCandidate = false` in the relevant annotations to prevent it from being picked as the default
Here's the suggested implementation:
```diff
 @Configuration(proxyBeanMethods = false)
 @EntityScan(basePackageClasses = Member.class)
-@EnableJpaRepositories(basePackageClasses = MemberRepository.class)
+@EnableJpaRepositories(
+    basePackageClasses = MemberRepository.class,
+    defaultAutowireCandidate = false
+)
+@Qualifier("memberDataSource")
 class MemberDataSourceConfiguration {}
```
properties.add("spring.datasource.url", postgreSQLContainer::getJdbcUrl); | ||
properties.add("spring.datasource.username", postgreSQLContainer::getUsername); | ||
properties.add("spring.datasource.password", postgreSQLContainer::getPassword); |
🛠️ Refactor suggestion
Consider maintaining consistent property naming convention
The configuration uses different prefix patterns:
- MySQL: `app.datasource.cardholder.*`
- PostgreSQL: `spring.datasource.*`
This mixed approach might lead to confusion. Consider using consistent custom prefixes for both datasources to clearly indicate their purpose in a multi-datasource setup.
```diff
-properties.add("spring.datasource.url", postgreSQLContainer::getJdbcUrl);
-properties.add("spring.datasource.username", postgreSQLContainer::getUsername);
-properties.add("spring.datasource.password", postgreSQLContainer::getPassword);
+properties.add("app.datasource.member.url", postgreSQLContainer::getJdbcUrl);
+properties.add("app.datasource.member.username", postgreSQLContainer::getUsername);
+properties.add("app.datasource.member.password", postgreSQLContainer::getPassword);
```
| GitGuardian id | GitGuardian status | Secret | Commit | Filename | |
| --- | --- | --- | --- | --- | --- |
| 9412876 | Triggered | Generic Password | 08515e7 | jpa/boot-data-multipledatasources/src/main/resources/application.properties | View secret |
| 9412876 | Triggered | Generic Password | 08515e7 | jpa/boot-data-multipledatasources/src/main/resources/application.properties | View secret |
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace and store your secrets safely. Learn here the best practices.
- Revoke and rotate these secrets.
- If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.
To avoid such incidents in the future, consider:
- following these best practices for managing and storing secrets including API keys and other credentials
- install secret detection on pre-commit to catch secret before it leaves your machine and ease remediation.
Quality Gate failed

Failed conditions. See analysis details on SonarQube Cloud.
```properties
#Store cardholder (cardholder name, zip)
spring.datasource.url=jdbc:postgresql://localhost:5432/memberdb?createDatabaseIfNotExist=true
spring.datasource.username=postgres
spring.datasource.password=postgres
```
Check failure (Code scanning / SonarCloud): Credentials should not be hard-coded (High)
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (1)
jpa/boot-data-multipledatasources/src/main/resources/application.properties (1)
Line range hint 4-50: Review the multi-datasource configuration architecture.

The current setup uses different configuration approaches for the member (Spring standard) and cardholder (custom prefix) datasources. Consider:
- Using consistent property prefixes for both datasources for better maintainability:
```properties
# Option 1: Both using custom prefixes
app.datasource.member.*
app.datasource.cardholder.*

# Option 2: Both using Spring standard prefixes
spring.member-datasource.*
spring.cardholder-datasource.*
```
- Document the reasoning behind using different database vendors (PostgreSQL and MySQL) for different domains.
Would you like assistance in refactoring the configuration to use a more consistent approach?
🧰 Tools
🪛 GitHub Check: SonarCloud
[failure] 6-6: Credentials should not be hard-coded
Revoke and change this password, as it is compromised. See more on SonarQube Cloud.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
- jpa/boot-data-multipledatasources/src/main/resources/application.properties (3 hunks)
🧰 Additional context used
🪛 GitHub Check: SonarCloud
jpa/boot-data-multipledatasources/src/main/resources/application.properties
[failure] 6-6: Credentials should not be hard-coded
Revoke and change this password, as it is compromised. See more on SonarQube Cloud.
🔇 Additional comments (2)
jpa/boot-data-multipledatasources/src/main/resources/application.properties (2)
Line 24: Good addition of the ApplicationName property.
Adding the application name to the datasource properties improves observability by making it easier to track database connections in monitoring tools.
Lines 48-50: Verify Liquibase configuration changes.
The Liquibase properties have been updated to use the new Spring datasource properties. Let's verify that Liquibase migrations still work correctly with these changes.
✅ Verification successful
Liquibase configuration is correctly set up and consistent
The verification shows that:
- The Liquibase properties in `application.properties` correctly reference the Spring datasource properties
- The project has proper Liquibase changelog files at `jpa/boot-data-multipledatasources/src/main/resources/db/changelog/`
- The database configuration is properly set up with PostgreSQL datasource properties that match the Liquibase configuration
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check if Liquibase configuration is consistent and migrations are present

# Check for Liquibase changelog files
echo "Checking for Liquibase changelog files..."
fd -e sql -e xml -e yaml -e json . db/changelog

# Verify no remaining references to old property names
echo "Checking for any remaining old property references..."
rg "app\.datasource\.member" -g '!*.md'
```
Length of output: 401
Script:
```bash
#!/bin/bash
# Let's verify the Liquibase configuration more thoroughly

# Check for Liquibase changelog location in application.properties
echo "Checking Liquibase changelog path configuration..."
rg "spring.liquibase.change-log" -A 1 -B 1

# Check all Liquibase related properties
echo "Checking all Liquibase related properties..."
rg "spring.liquibase" -A 1 -B 1

# Check for any datasource properties to understand the configuration
echo "Checking datasource properties..."
rg "spring.datasource" -A 1 -B 1

# Look for any Liquibase changelog files in the entire project
echo "Looking for Liquibase changelog files anywhere in the project..."
fd -e sql -e xml -e yaml -e json . | rg -i "changelog|liquibase"
```
Length of output: 38781
```properties
spring.datasource.url=jdbc:postgresql://localhost:5432/memberdb?createDatabaseIfNotExist=true
spring.datasource.username=postgres
spring.datasource.password=postgres
spring.datasource.driverClassName=org.postgresql.Driver
spring.datasource.hikari.connectionTimeout=30000
spring.datasource.hikari.idleTimeout=600000
spring.datasource.hikari.maxLifetime=1800000
spring.datasource.hikari.poolName=memberHikariPool
spring.datasource.hikari.data-source-properties.ApplicationName=${spring.application.name}
spring.datasource.hikari.autoCommit=false
```
Consider externalizing database credentials and connection properties
The member datasource configuration contains hardcoded credentials and connection properties. While the change to use Spring's standard datasource prefix is good, consider:
- Moving sensitive values to environment variables or a secure configuration service
- Using placeholder syntax, e.g. `${MEMBER_DB_PASSWORD}`
Example configuration:
```properties
spring.datasource.url=${MEMBER_DB_URL:jdbc:postgresql://localhost:5432/memberdb?createDatabaseIfNotExist=true}
spring.datasource.username=${MEMBER_DB_USERNAME}
spring.datasource.password=${MEMBER_DB_PASSWORD}
```
🧰 Tools
🪛 GitHub Check: SonarCloud
[failure] 6-6: Credentials should not be hard-coded
Revoke and change this password, as it is compromised. See more on SonarQube Cloud.
Summary by CodeRabbit
New Features
- The `id` column in the `card_holder` table now supports larger values.

Bug Fixes
- The class name `DataInitilizer` was corrected to `DataInitializer` for clarity.

Refactor
- The `MemberDataSourceConfiguration` and `CardHolderDataSourceConfiguration` classes were simplified for better configuration management.

Documentation