Hi, I'm Taeyong from the LG CNS Software Application Architect Unit. Spring has come; please take care of your health as the seasons change.
I have a question about configuration.
I want to improve the performance of the linked Oracle database when mass-publishing a specific topic. While looking into the related Oracle connection properties, I found these:
defaultRowPrefetch - the default number of rows to prefetch from the server (default value is "10")
defaultBatchValue - the default batch value that triggers an execution request (default value is "10")
So, I applied them in application.properties:
application.properties
eventuate.cdc.reader.reader.readersample.datasourceUrl=jdbc:oracle:thin:@111.11.11.111:3525
eventuate.cdc.reader.reader.readersample.datasourceUsername=test
eventuate.cdc.reader.reader.readersample.datasourcePassword=test01
eventuate.cdc.reader.reader.readersample.datasourceDriverClassName=oracle.jdbc.OracleDriver
eventuate.cdc.reader.reader.readersample.leadershipLockPath=/eventuate/cdc-test/leader/sample
eventuate.cdc.reader.reader.readersample.pollingIntervalInMilliseconds=1
eventuate.cdc.reader.reader.readersample.maxEventsPerPolling=1000
eventuate.cdc.reader.reader.readersample.maxAttemptsForPolling=100
eventuate.cdc.reader.reader.readersample.pollingRetryIntervalInMilliseconds=100
eventuate.cdc.pipeline.pipelinesample.type=eventuate-tram
eventuate.cdc.pipeline.pipelinesample.eventuateDatabaseSchema=sample
eventuate.cdc.pipeline.pipelinesample.sourceTableName=message
eventuate.cdc.pipeline.pipelinesample.reader=readersample
eventuate.local.kafka.producer.properties.buffer.memory=35544320
eventuate.local.kafka.producer.properties.batch.size=1024000
...
spring.datasource.connection-properties=defaultRowPrefetch=10;defaultBatchValue=10;useFetchSizeWithLongColumn=true

At first I thought the connectionProperties value was being applied, but even after I tuned the query in SQL Developer (the cost dropped from 3670 to 285 and the explain-plan time went from :10 to :03), the elapsed time in the application never moved from the original 15 seconds. Removing the connectionProperties setting also had no effect.
So, what I did was this:
DAO class
private List<Activity> getAllActivitiesJustJDBC() {
    String query = "select * "
            + "from activity a, work_order w "
            + "where a.ac_customer = 'CSC' "
            + "and w.wo_customer = a.ac_customer "
            + "and a.ac_workorder = w.wo_workorder ";
    long startTime = System.currentTimeMillis();
    List<Activity> activities = new ArrayList<>();
    try {
        Connection conn = jdbcTemplate.getDataSource().getConnection();
        PreparedStatement st = conn.prepareStatement(query);
        st.setFetchSize(1000); // the key statement
        ResultSet rs = st.executeQuery();
        ActivityMapper mapper = new ActivityMapper();
        while (rs.next()) {
            Activity activity = mapper.mapRow(rs, 1);
            activities.add(activity);
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    System.out.println("Time it took...."
            + (System.currentTimeMillis() - startTime));
    System.out.println("Number of activities = " + activities.size());
    return activities;
}
This time, fetching the 11,115 rows took about 2 seconds on average. The key statement is the setFetchSize(1000).
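For comparison, here is a minimal sketch (assuming ActivityMapper implements Spring's RowMapper<Activity>, which the snippet above does not show) of setting the same fetch size on the JdbcTemplate itself rather than on a hand-opened Connection:

    // Sketch only: JdbcTemplate acquires and releases the connection,
    // statement and result set itself; the fetch size is configured once.
    jdbcTemplate.setFetchSize(1000);
    List<Activity> activities = jdbcTemplate.query(query, new ActivityMapper());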
Do I need to close the connection myself, or is Spring handling this for me?
How can the Oracle connection properties be applied to eventuate-cdc's behavior? If you have any advice, please reply.
The properties defaultRowPrefetch=10;defaultBatchValue=10;useFetchSizeWithLongColumn=true need to be passed to HikariConfig.setDataSourceProperties()/addDataSourceProperty().
One solution would be to add a Map property to ConnectionPoolConfigurationProperties for these DataSourceProperties.
I noticed, however, that Hikari can read a configuration file. I wonder whether a workaround would be to specify these nested "dataSourceProperties" in that configuration file.
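For illustration, a minimal sketch (not taken from the eventuate-cdc code base; the URL and credentials are just the placeholder values from the issue above) of passing these driver properties through HikariCP with addDataSourceProperty():

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class OracleHikariSketch {
        public static void main(String[] args) {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:oracle:thin:@111.11.11.111:3525"); // placeholder URL from the issue
            config.setUsername("test");
            config.setPassword("test01");
            config.setDriverClassName("oracle.jdbc.OracleDriver");

            // HikariCP forwards these as driver-level properties to the Oracle JDBC driver.
            config.addDataSourceProperty("defaultRowPrefetch", "10");
            config.addDataSourceProperty("defaultBatchValue", "10");
            config.addDataSourceProperty("useFetchSizeWithLongColumn", "true");

            try (HikariDataSource dataSource = new HikariDataSource(config)) {
                // Connections handed out by this pool carry the prefetch settings.
            }
        }
    }

If the HikariCP configuration-file route mentioned above were used instead, the same entries would presumably be written with a dataSource. prefix (for example dataSource.defaultRowPrefetch=10), which HikariCP converts into data source properties; this describes HikariCP's own file format, not anything eventuate-cdc documents.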
@cer @dartartem @eventuateio