Fix code inspection for test module #31216

Merged · 1 commit · May 12, 2024
2 changes: 1 addition & 1 deletion RELEASE-NOTES.md
@@ -258,7 +258,7 @@
1. Kernel: Fix use Consul in cluster mode start up failure
1. DB Discovery: Close heartbeat job when drop discovery rule
1. Kernel: Fix wrong decide result when execute same sharding condition subquery with SQL federation
- 1. Kernel: Fix priority problem of UNION, INTERSECT, EXCEPT set operation in SQL Federation for PostgreSQL and openGuass dialect
+ 1. Kernel: Fix priority problem of UNION, INTERSECT, EXCEPT set operation in SQL Federation for PostgreSQL and openGauss dialect
1. Kernel: Fix create view index out of range exception when view contains set operator
1. Kernel: Add XA resource exceeds length check
1. Kernel: Fix transaction support for spring requires_new
@@ -55,7 +55,7 @@ Currently, more than 170 companies are using ShardingSphere. This article is bas
## System Architecture and Data Flow
As shown in Figure 2, ShardingSphere can be divided into five modules:

- 1. **dData source:** It enables storage by integrating various databases and currently supports data sources such as MySQL, PostgreSQL, SQL Server, Oracle, MariaDB and openGuass.
+ 1. **dData source:** It enables storage by integrating various databases and currently supports data sources such as MySQL, PostgreSQL, SQL Server, Oracle, MariaDB and openGauss.
2. **Function:** It provides many out-of-the-box features that can be freely added, combined, or deleted as needed.
3. **Governor** is mainly used for configuration management and health monitoring.
4. **SQL engine.** With the complete data sharding SQL engine, all functions are pluggable, and any function can be implemented through a SQL statement.
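
These layers surface to an application as a single JDBC entry point, with data sources, functions, and rules supplied by configuration. A minimal sketch, assuming the `jdbc:shardingsphere:` URL scheme of the ShardingSphere-JDBC driver; `config.yaml` is a hypothetical classpath resource:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public final class ShardingSphereJdbcSketch {

    public static void main(final String[] args) throws Exception {
        // The driver assembles data sources, rules, and the SQL engine from YAML;
        // application code stays on the plain JDBC API.
        try (
                Connection connection = DriverManager.getConnection("jdbc:shardingsphere:classpath:config.yaml");
                Statement statement = connection.createStatement();
                ResultSet resultSet = statement.executeQuery("SELECT 1")) {
            while (resultSet.next()) {
                System.out.println(resultSet.getInt(1));
            }
        }
    }
}
```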
@@ -175,7 +175,7 @@ Connection failed.

We see that it’s actually due to the authentication protocol.

- The psql client requires md5 protocol authentication by default, but because Proxy requires the scram-sha-256 under the openGuass protocol, the negotiation fails and an exception is thrown.
+ The psql client requires md5 protocol authentication by default, but because Proxy requires the scram-sha-256 under the openGauss protocol, the negotiation fails and an exception is thrown.

## Following Steps

@@ -229,7 +229,7 @@ Connection succeeded.

![img](https://shardingsphere.apache.org/blog/img/2023_05_18_Enhancing_Database_Security_ShardingSphere-Proxy’s_Authentication.en.md8.jpeg)

- Now we see that `psql` has successfully connected to ShardingSphere-Proxy under the openGuass protocol.
+ Now we see that `psql` has successfully connected to ShardingSphere-Proxy under the openGauss protocol.

![img](https://shardingsphere.apache.org/blog/img/2023_05_18_Enhancing_Database_Security_ShardingSphere-Proxy’s_Authentication.en.md9.jpeg)

@@ -14,7 +14,7 @@ Currently, NATIVE and DOCKER are available.
1. NATIVE : Run on developer local machine. Need to start ShardingSphere-Proxy instance and database instance by developer. It could be used for local debugging.
2. DOCKER : Run on docker started by Maven plugin. It could be used for GitHub Actions, and it could be used for local debugging too.

- Supported databases: MySQL, PostgreSQL and openGuass.
+ Supported databases: MySQL, PostgreSQL and openGauss.

## User guide

@@ -57,7 +57,7 @@ class DataRecordResultConvertUtilsTest {
void assertConvertDataRecordToRecord() throws InvalidProtocolBufferException, SQLException {
DataRecord dataRecord = new DataRecord(PipelineSQLOperationType.INSERT, "t_order", new IntegerPrimaryKeyIngestPosition(0L, 1L), 2);
dataRecord.addColumn(new Column("order_id", BigInteger.ONE, false, true));
dataRecord.addColumn(new Column("price", BigDecimal.valueOf(123), false, false));
dataRecord.addColumn(new Column("price", BigDecimal.valueOf(123L), false, false));
dataRecord.addColumn(new Column("user_id", Long.MAX_VALUE, false, false));
dataRecord.addColumn(new Column("item_id", Integer.MAX_VALUE, false, false));
dataRecord.addColumn(new Column("create_date", LocalDate.now(), false, false));
@@ -55,7 +55,7 @@ private static void run() throws ClassNotFoundException, SQLException, Interrupt
Connection connection = getConnection();
OrderService orderService = new OrderServiceImpl(connection);
OrderController orderController = new OrderController(orderService);
- long endTime = System.currentTimeMillis() + (5 * 60 * 1000);
+ long endTime = System.currentTimeMillis() + (5L * 60L * 1000L);
while (System.currentTimeMillis() < endTime) {
orderController.dropTable();
orderController.createTable();
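
Since `5 * 60 * 1000` fits in `int` before widening, the `L` suffixes are about explicitness; the inspection guards the cases where the same pattern silently overflows. A minimal sketch of that failure mode:

```java
public final class IntOverflowSketch {

    public static void main(final String[] args) {
        // 25 days in milliseconds exceeds Integer.MAX_VALUE (2_147_483_647), so the
        // right-hand side wraps around in int arithmetic before the widening to long.
        long overflowed = 25 * 24 * 60 * 60 * 1000;
        // With long literals the whole product is evaluated in long arithmetic.
        long correct = 25L * 24L * 60L * 60L * 1000L;
        System.out.println(overflowed);    // -2134967296
        System.out.println(correct);       // 2160000000
    }
}
```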
@@ -50,21 +50,21 @@ public void dropTable() {
* Insert order.
*/
public void insert() {
- long index = 0;
- while (index++ <= 100) {
+ long index = 0L;
+ while (index++ <= 100L) {
OrderEntity order = new OrderEntity(index, index, "OK");
- orderService.insert(order, 0 == (index & 1) ? StatementType.STATEMENT : StatementType.PREPARED, 0 == index % 5);
+ orderService.insert(order, 0L == (index & 1L) ? StatementType.STATEMENT : StatementType.PREPARED, 0L == index % 5L);
}
}

/**
* Create error request.
*/
public void createErrorRequest() {
- long index = 0;
- while (index++ <= 10) {
+ long index = 0L;
+ while (index++ <= 10L) {
OrderEntity order = new OrderEntity(index, index, "Fail");
- orderService.insert(order, 0 == (index & 1) ? StatementType.STATEMENT : StatementType.PREPARED, false);
+ orderService.insert(order, 0L == (index & 1L) ? StatementType.STATEMENT : StatementType.PREPARED, false);
}
}
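
Both loops alternate statement types with a bitwise parity test; a self-contained sketch of how `0L == (index & 1L)` picks between them:

```java
public final class ParitySketch {

    public static void main(final String[] args) {
        for (long index = 1L; index <= 4L; index++) {
            // (index & 1L) keeps only the lowest bit: 0 for even values, 1 for odd ones,
            // so the test is equivalent to 0L == index % 2L but avoids a division.
            String statementType = 0L == (index & 1L) ? "STATEMENT" : "PREPARED";
            System.out.println(index + " -> " + statementType);
        }
    }
}
```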

@@ -57,7 +57,7 @@ protected void configure() {
}

private Map<String, String> createResourceMappingForProxy() {
Map<String, String> result = new HashMap<>();
Map<String, String> result = new HashMap<>(2, 1F);
result.put("/env/jdbc/conf/config.yaml", CONFIG_PATH_IN_CONTAINER + "conf/config.yaml");
if (!Strings.isNullOrEmpty(plugin)) {
result.put(String.format("/env/agent/conf/%s/agent.yaml", plugin), CONFIG_PATH_IN_CONTAINER + "agent/conf/agent.yaml");
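
`new HashMap<>(2, 1F)` holds the two known entries without a rehash, because resizing only triggers once size exceeds capacity × load factor. A sketch of the presizing idiom, with illustrative keys and paths:

```java
import java.util.HashMap;
import java.util.Map;

public final class PresizedMapSketch {

    public static void main(final String[] args) {
        // Capacity 2 with load factor 1.0 gives a resize threshold of 2, so both
        // entries fit without rehashing; the default (16, 0.75F) would waste buckets.
        Map<String, String> result = new HashMap<>(2, 1F);
        result.put("config", "conf/config.yaml");
        result.put("agent", "agent/conf/agent.yaml");
        System.out.println(result.size());
    }
}
```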
@@ -71,7 +71,7 @@ protected void configure() {
}

private Map<String, String> createResourceMappingForProxy() {
Map<String, String> result = new HashMap<>();
Map<String, String> result = new HashMap<>(3, 1F);
result.put("/env/proxy/conf/global.yaml", ProxyContainerConstants.CONFIG_PATH_IN_CONTAINER + "global.yaml");
result.put("/env/proxy/conf/database-db.yaml", ProxyContainerConstants.CONFIG_PATH_IN_CONTAINER + "database-db.yaml");
if (!Strings.isNullOrEmpty(plugin)) {
@@ -136,8 +136,8 @@ public void init() {
createJDBCEnvironment();
}
log.info("Waiting to collect data ...");
if (0 < collectDataWaitSeconds) {
Awaitility.await().ignoreExceptions().atMost(Duration.ofSeconds(collectDataWaitSeconds + 1)).pollDelay(collectDataWaitSeconds, TimeUnit.SECONDS).until(() -> true);
if (0L < collectDataWaitSeconds) {
Awaitility.await().ignoreExceptions().atMost(Duration.ofSeconds(collectDataWaitSeconds + 1L)).pollDelay(collectDataWaitSeconds, TimeUnit.SECONDS).until(() -> true);
}
initialized = true;
}
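
The `Awaitility` call works as a declarative sleep: `pollDelay` holds back the single poll for the full wait, and `until(() -> true)` then passes immediately, all bounded by `atMost`. A self-contained sketch:

```java
import java.time.Duration;
import java.util.concurrent.TimeUnit;

import org.awaitility.Awaitility;

public final class AwaitilitySleepSketch {

    public static void main(final String[] args) {
        long collectDataWaitSeconds = 2L;
        // The first poll only happens after pollDelay, and the condition is constantly
        // true, so this blocks for ~2 seconds while staying exception-tolerant.
        Awaitility.await().ignoreExceptions()
                .atMost(Duration.ofSeconds(collectDataWaitSeconds + 1L))
                .pollDelay(collectDataWaitSeconds, TimeUnit.SECONDS)
                .until(() -> true);
        System.out.println("collected");
    }
}
```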
@@ -72,7 +72,7 @@ private void request() {
JDBCAgentTestUtils.insertOrder(orderEntity, connection);
results.add(orderEntity.getOrderId());
}
- OrderEntity orderEntity = new OrderEntity(1000, 1000, "ROLL_BACK");
+ OrderEntity orderEntity = new OrderEntity(1000L, 1000, "ROLL_BACK");
JDBCAgentTestUtils.insertOrderRollback(orderEntity, connection);
JDBCAgentTestUtils.updateOrderStatus(orderEntity, connection);
JDBCAgentTestUtils.selectAllOrders(connection);
@@ -39,9 +39,9 @@ public final class OkHttpUtils {

private OkHttpUtils() {
OkHttpClient.Builder builder = new OkHttpClient.Builder();
- builder.connectTimeout(10, TimeUnit.SECONDS);
- builder.readTimeout(10, TimeUnit.SECONDS);
- builder.writeTimeout(10, TimeUnit.SECONDS);
+ builder.connectTimeout(10L, TimeUnit.SECONDS);
+ builder.readTimeout(10L, TimeUnit.SECONDS);
+ builder.writeTimeout(10L, TimeUnit.SECONDS);
client = builder.build();
}
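
For context, the three timeouts guard different phases of a call: `connectTimeout` covers the TCP/TLS handshake, while `readTimeout` and `writeTimeout` cover stalls between individual bytes. A usage sketch with a placeholder URL:

```java
import java.io.IOException;
import java.util.concurrent.TimeUnit;

import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public final class OkHttpTimeoutSketch {

    public static void main(final String[] args) throws IOException {
        OkHttpClient client = new OkHttpClient.Builder()
                .connectTimeout(10L, TimeUnit.SECONDS)
                .readTimeout(10L, TimeUnit.SECONDS)
                .writeTimeout(10L, TimeUnit.SECONDS)
                .build();
        // The URL below is a placeholder, not an endpoint from this repository.
        Request request = new Request.Builder().url("http://localhost:8080/healthz").build();
        try (Response response = client.newCall(request).execute()) {
            System.out.println(response.code());
        }
    }
}
```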

@@ -48,15 +48,13 @@ public abstract class DockerStorageContainer extends DockerITContainer implement

private final DatabaseType databaseType;

private final Map<String, DataSource> actualDataSourceMap;
private final Map<String, DataSource> actualDataSourceMap = new LinkedHashMap<>();

private final Map<String, DataSource> expectedDataSourceMap;
private final Map<String, DataSource> expectedDataSourceMap = new LinkedHashMap<>();

protected DockerStorageContainer(final DatabaseType databaseType, final String containerImage) {
super(databaseType.getType().toLowerCase(), containerImage);
this.databaseType = databaseType;
- actualDataSourceMap = new LinkedHashMap<>();
- expectedDataSourceMap = new LinkedHashMap<>();
}

@Override
@@ -46,7 +46,7 @@ public StorageContainerConfiguration(final String containerCommand, final Map<St
final Map<String, DatabaseType> databaseTypes, final Map<String, DatabaseType> expectedDatabaseTypes) {
this.databaseTypes = databaseTypes;
this.expectedDatabaseTypes = expectedDatabaseTypes;
- this.scenario = null;
+ scenario = null;
this.containerCommand = containerCommand;
this.containerEnvironments = containerEnvironments;
this.mountedResources = mountedResources;
@@ -58,7 +58,7 @@ protected void configure() {
mapResources(storageContainerConfig.getMountedResources());
withPrivilegedMode(true);
super.configure();
- withStartupTimeout(Duration.of(120, ChronoUnit.SECONDS));
+ withStartupTimeout(Duration.of(120L, ChronoUnit.SECONDS));
}

@Override
@@ -62,7 +62,7 @@ public static DataSource generateDataSource(final String jdbcUrl, final String u
result.setPassword(password);
result.setMaximumPoolSize(maximumPoolSize);
result.setTransactionIsolation("TRANSACTION_READ_COMMITTED");
- result.setLeakDetectionThreshold(10000);
+ result.setLeakDetectionThreshold(10000L);
return result;
}
}
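
The leak-detection threshold is in milliseconds, so `10000L` makes HikariCP log a warning (with the borrowing stack trace) whenever a connection stays checked out longer than 10 seconds. A standalone sketch with placeholder connection details:

```java
import com.zaxxer.hikari.HikariDataSource;

public final class LeakDetectionSketch {

    public static void main(final String[] args) {
        // HikariDataSource exposes the same setters used in generateDataSource above;
        // the URL and credentials here are placeholders.
        try (HikariDataSource result = new HikariDataSource()) {
            result.setJdbcUrl("jdbc:postgresql://localhost:5432/test");
            result.setUsername("test");
            result.setPassword("test");
            result.setMaximumPoolSize(4);
            result.setTransactionIsolation("TRANSACTION_READ_COMMITTED");
            result.setLeakDetectionThreshold(10000L);
        }
    }
}
```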
@@ -37,7 +37,7 @@ public final class AuthorityEnvironment {
* Get init SQLs of this database type.
*
* @param databaseType database type
- * @return init SQLs of this data base type
+ * @return init SQLs of this database type
*/
public Collection<String> getInitSQLs(final DatabaseType databaseType) {
Collection<String> result = new LinkedList<>();
@@ -54,7 +54,7 @@ public static Map<String, DatabaseType> getDatabaseTypes(final String scenario,
}

private static Map<String, DatabaseType> crateDatabaseTypes(final Collection<String> datasourceNames, final DatabaseType defaultDatabaseType) {
Map<String, DatabaseType> result = new LinkedHashMap<>();
Map<String, DatabaseType> result = new LinkedHashMap<>(datasourceNames.size(), 1F);
for (String each : datasourceNames) {
List<String> items = Splitter.on(":").splitToList(each);
DatabaseType databaseType = items.size() > 1 ? TypedSPILoader.getService(DatabaseType.class, items.get(1)) : defaultDatabaseType;
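
Presizing aside, this method splits each data source name on `:` to pick up an optional explicit database type. A sketch of the parsing (the `ds_1:MySQL` naming convention is inferred from the code, not confirmed elsewhere):

```java
import java.util.List;

import com.google.common.base.Splitter;

public final class DataSourceNameParsingSketch {

    public static void main(final String[] args) {
        for (String each : new String[]{"ds_0", "ds_1:MySQL"}) {
            List<String> items = Splitter.on(":").splitToList(each);
            // A bare name falls back to the scenario's default database type;
            // a name with a suffix carries its type explicitly.
            String databaseType = items.size() > 1 ? items.get(1) : "<default>";
            System.out.println(items.get(0) + " -> " + databaseType);
        }
    }
}
```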
@@ -44,7 +44,7 @@ public Collection<String> doSharding(final Collection<String> availableTargetNam
int maxValue = shardingValue.getValueRange().hasUpperBound() ? shardingValue.getValueRange().upperEndpoint() : Integer.MAX_VALUE;
long range = BigInteger.valueOf(maxValue).subtract(BigInteger.valueOf(minValue)).longValue();
int begin = Math.abs(minValue) % 10;
if (range > 9) {
if (range > 9L) {
return availableTargetNames;
}
for (int i = begin; i <= range; i += 1) {
@@ -245,7 +245,7 @@ public void registerStorageUnit(final String storageUnitName) throws SQLExceptio
.replace("${url}", getActualJdbcUrlTemplate(storageUnitName, true));
proxyExecuteWithLog(registerStorageUnitTemplate, 0);
int timeout = databaseType instanceof OpenGaussDatabaseType ? 60 : 10;
- Awaitility.await().ignoreExceptions().atMost(timeout, TimeUnit.SECONDS).pollInterval(3, TimeUnit.SECONDS).until(() -> showStorageUnitsName().contains(storageUnitName));
+ Awaitility.await().ignoreExceptions().atMost(timeout, TimeUnit.SECONDS).pollInterval(3L, TimeUnit.SECONDS).until(() -> showStorageUnitsName().contains(storageUnitName));
}

/**
@@ -473,11 +473,11 @@ public List<Map<String, Object>> queryForListWithLog(final DataSource dataSource
*/
public List<Map<String, Object>> transformResultSetToList(final ResultSet resultSet) throws SQLException {
ResultSetMetaData resultSetMetaData = resultSet.getMetaData();
- int columns = resultSetMetaData.getColumnCount();
+ int columnCount = resultSetMetaData.getColumnCount();
List<Map<String, Object>> result = new ArrayList<>();
while (resultSet.next()) {
Map<String, Object> row = new HashMap<>();
for (int i = 1; i <= columns; i++) {
Map<String, Object> row = new HashMap<>(columnCount, 1F);
for (int i = 1; i <= columnCount; i++) {
row.put(resultSetMetaData.getColumnLabel(i).toLowerCase(), resultSet.getObject(i));
}
result.add(row);
@@ -506,7 +506,7 @@ public List<Map<String, Object>> waitIncrementTaskFinished(final String distSQL)
for (int i = 0; i < 10; i++) {
List<Map<String, Object>> listJobStatus = queryForListWithLog(distSQL);
log.info("show status result: {}", listJobStatus);
Set<String> actualStatus = new HashSet<>();
Set<String> actualStatus = new HashSet<>(listJobStatus.size(), 1F);
Collection<Integer> incrementalIdleSecondsList = new LinkedList<>();
for (Map<String, Object> each : listJobStatus) {
assertTrue(Strings.isNullOrEmpty((String) each.get("error_message")), "error_message: `" + each.get("error_message") + "`");
@@ -53,13 +53,12 @@ public final class DataSourceRecordConsumer implements Consumer<List<Record>> {

private final DataSource dataSource;

private final Map<String, PipelineTableMetaData> tableMetaDataMap;

private final StandardPipelineTableMetaDataLoader loader;

private final Map<String, PipelineTableMetaData> tableMetaDataMap = new ConcurrentHashMap<>();

public DataSourceRecordConsumer(final DataSource dataSource, final DatabaseType databaseType) {
this.dataSource = dataSource;
- tableMetaDataMap = new ConcurrentHashMap<>();
loader = new StandardPipelineTableMetaDataLoader(new PipelineDataSourceWrapper(dataSource, databaseType));
}

@@ -79,7 +78,7 @@ public void accept(final List<Record> records) {
private void processRecords(final List<Record> records, final Connection connection) throws SQLException {
long insertCount = records.stream().filter(each -> DataChangeType.INSERT == each.getDataChangeType()).count();
if (insertCount == records.size()) {
Map<String, List<Record>> recordsMap = new HashMap<>();
Map<String, List<Record>> recordsMap = new HashMap<>(records.size(), 1F);
for (Record each : records) {
String key = buildTableNameWithSchema(each.getMetaData().getTable(), each.getMetaData().getSchema());
recordsMap.computeIfAbsent(key, ignored -> new LinkedList<>()).add(each);
@@ -182,7 +181,7 @@ private String buildSQL(final Record ingestedRecord) {
case DELETE:
return SQLBuilderUtils.buildDeleteSQL(tableName, "order_id");
default:
- throw new UnsupportedOperationException();
+ throw new UnsupportedOperationException("");
}
}

@@ -82,7 +82,7 @@ void assertMigrationSuccess(final PipelineTestParameter testParam) throws SQLExc
containerComposer.waitJobPrepareSuccess(String.format("SHOW MIGRATION STATUS '%s'", orderJobId));
containerComposer.startIncrementTask(
new E2EIncrementalTask(containerComposer.getSourceDataSource(), SOURCE_TABLE_NAME, new SnowflakeKeyGenerateAlgorithm(), containerComposer.getDatabaseType(), 30));
- TimeUnit.SECONDS.timedJoin(containerComposer.getIncreaseTaskThread(), 30);
+ TimeUnit.SECONDS.timedJoin(containerComposer.getIncreaseTaskThread(), 30L);
containerComposer.sourceExecuteWithLog(String.format("INSERT INTO %s (order_id, user_id, status) VALUES (10000, 1, 'OK')", SOURCE_TABLE_NAME));
containerComposer.sourceExecuteWithLog("INSERT INTO t_order_item (item_id, order_id, user_id, status) VALUES (10000, 10000, 1, 'OK')");
stopMigrationByJobId(containerComposer, orderJobId);
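
`TimeUnit.SECONDS.timedJoin(thread, 30L)` is a bounded join: it returns when the incremental-task thread finishes or after 30 seconds, whichever comes first, so a stuck task cannot hang the E2E run. A minimal sketch:

```java
import java.util.concurrent.TimeUnit;

public final class TimedJoinSketch {

    public static void main(final String[] args) throws InterruptedException {
        Thread increaseTaskThread = new Thread(() -> {
            try {
                TimeUnit.SECONDS.sleep(1L);
            } catch (final InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
        });
        increaseTaskThread.start();
        // Waits for the worker, but never longer than 30 seconds.
        TimeUnit.SECONDS.timedJoin(increaseTaskThread, 30L);
        System.out.println("still alive: " + increaseTaskThread.isAlive());
    }
}
```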
@@ -90,7 +90,7 @@ void assertMigrationSuccess(final PipelineTestParameter testParam) throws SQLExc
String schemaTableName = String.join(".", PipelineContainerComposer.SCHEMA_NAME, SOURCE_TABLE_NAME);
containerComposer.startIncrementTask(new E2EIncrementalTask(containerComposer.getSourceDataSource(), schemaTableName, new SnowflakeKeyGenerateAlgorithm(),
containerComposer.getDatabaseType(), 20));
- TimeUnit.SECONDS.timedJoin(containerComposer.getIncreaseTaskThread(), 30);
+ TimeUnit.SECONDS.timedJoin(containerComposer.getIncreaseTaskThread(), 30L);
containerComposer.sourceExecuteWithLog(String.format("INSERT INTO %s (order_id, user_id, status) VALUES (10000, 1, 'OK')", schemaTableName));
DataSource jdbcDataSource = containerComposer.generateShardingSphereDataSourceFromProxy();
containerComposer.assertOrderRecordExist(jdbcDataSource, schemaTableName, 10000);
@@ -75,7 +75,7 @@ void assertMySQLToPostgreSQLMigrationSuccess(final PipelineTestParameter testPar
+ "KEY_GENERATE_STRATEGY(COLUMN=order_id, TYPE(NAME='snowflake')))", 2);
initTargetTable(containerComposer);
containerComposer.proxyExecuteWithLog("MIGRATE TABLE source_ds.t_order INTO t_order", 2);
- Awaitility.await().ignoreExceptions().atMost(10, TimeUnit.SECONDS).pollInterval(1, TimeUnit.SECONDS).until(() -> !listJobId(containerComposer).isEmpty());
+ Awaitility.await().ignoreExceptions().atMost(10L, TimeUnit.SECONDS).pollInterval(1L, TimeUnit.SECONDS).until(() -> !listJobId(containerComposer).isEmpty());
String jobId = listJobId(containerComposer).get(0);
containerComposer.waitJobStatusReached(String.format("SHOW MIGRATION STATUS %s", jobId), JobStatus.EXECUTE_INCREMENTAL_TASK, 15);
try (Connection connection = DriverManager.getConnection(jdbcUrl, "postgres", "postgres")) {
@@ -106,7 +106,7 @@ private void insertOrder(final Object[] orderInsertData) {
} else if (databaseType instanceof OpenGaussDatabaseType) {
sql = SQLBuilderUtils.buildInsertSQL(OPENGAUSS_COLUMN_NAMES, orderTableName);
} else {
- throw new UnsupportedOperationException();
+ throw new UnsupportedOperationException("");
}
DataSourceExecuteUtils.execute(dataSource, sql, orderInsertData);
}
@@ -124,7 +124,7 @@ private void doIncrementalChanges(final Object orderId, final List<IncrementalAc
deleteOrderById(orderId);
break;
default:
- throw new UnsupportedOperationException();
+ throw new UnsupportedOperationException("");
}
}
}