SNOW-1196082: Inserting and reading timestamps is not symmetric if too many columns are inserted with batch #1655
Comments
Hello @ericcournarie, Thank you for raising the issue and the sample application; we are taking a look. Regards,
Hello @ericcournarie, I tried the sample Java application with the latest Snowflake JDBC driver 3.15.0; it gives the expected output and does not throw an error. The timezone is set with stmt.execute("alter session set timezone = 'America/New_York'"); and the application inserts with ps.setTimestamp(2, new Timestamp(1388016000000L), utc); Regards,
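A minimal sketch of the kind of round-trip test described above, assuming a hypothetical table T1 with a TIMESTAMP_TZ column and placeholder connection details (this is not Sujan's exact application):

```java
import java.sql.*;
import java.util.Calendar;
import java.util.Properties;
import java.util.TimeZone;

public class TimestampRoundTrip {
    public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        props.put("user", "<user>");          // placeholders, not real credentials
        props.put("password", "<password>");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:snowflake://<account>.snowflakecomputing.com", props);
             Statement stmt = conn.createStatement()) {

            // Server-side session timezone, as in the test above
            stmt.execute("alter session set timezone = 'America/New_York'");
            stmt.execute("create or replace table t1 (id int, ts timestamp_tz)");

            Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
            try (PreparedStatement ps =
                     conn.prepareStatement("insert into t1 (id, ts) values (?, ?)")) {
                ps.setInt(1, 1);
                // Bind the timestamp against an explicit UTC calendar
                ps.setTimestamp(2, new Timestamp(1388016000000L), utc);
                ps.addBatch();
                ps.executeBatch();
            }

            // Read the value back; it should match what was written
            try (ResultSet rs = stmt.executeQuery("select ts from t1")) {
                while (rs.next()) {
                    System.out.println(rs.getTimestamp(1, utc));
                }
            }
        }
    }
}
```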
Hi Sujan,
Hello @ericcournarie, I tried even the full sample JDBC application as-is, but it still does not produce any incorrect output. I tried with the latest JDBC 3.15.0 and set the timezone for the user (timezone='UTC', etc.). Please try the following code, try from a different client machine with the latest Snowflake JDBC driver, and capture the JDBC log as well. Regards,
Hi Sujan, it works with yours, but that is not the program I gave you; please test with mine. In my case it fails while yours was working, so the bug should appear. It looks like it is related to a 'sizing' effect: with fewer columns and fewer lines, the bug does not appear. Thanks
Hello @ericcournarie, Thanks for the update. stmt.execute("alter session set timezone = 'America/New_York'"); // CurrentTimestamp -> 1967-06-23 03:00:00.123 -0400 Here is the updated JDBC application; you can try running it from a different client machine and capture the JDBC trace if you are able to reproduce. Regards,
Hello Sujan, Sorry, but once again your program is not correct and not the one you should run. Just by changing this, I ran into the problem with your app. Thanks
Hello Eric, With the loop (int row = 0; row < 30000; row++), it is not throwing an error but just a warning, and that is not related to this issue. There is no other error being thrown; could you please try from a different client machine and capture the JDBC logs. Regards,
Hello Sujan, I have tried on several machines (Unix and Mac) with several Java versions (8, 11, 17). As said, the Java machine should not be on UTC to see the bug. Can you add the following at the start of your program to test, just to be sure:
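(The snippet Eric refers to is not preserved in this copy of the thread; a minimal sketch of what it presumably amounts to, i.e. pinning the JVM default timezone to a non-UTC zone before any JDBC work, might be:)

```java
import java.util.TimeZone;

public class ForceNonUtcDefault {
    public static void main(String[] args) {
        // Pin the JVM default timezone to a non-UTC zone (illustrative choice)
        // so the client/server timezone mismatch can actually show up.
        TimeZone.setDefault(TimeZone.getTimeZone("Europe/Paris"));
        System.out.println("JVM default timezone: " + TimeZone.getDefault().getID());
        // ... the rest of the reproduction program would run here
    }
}
```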
Thanks. For some reason, I cannot get the log: I have added 'tracing=ALL', but nothing is output.
Hello Eric, Thanks for the update. Regards,
Hello @ericcournarie, The engineering team reports that "the timezone setting for the Snowflake server is not the same as the timezone on the application"; this is the root cause of the issue. The issue did not happen on version 3.13.15 because, before version 3.13.22, the driver uploaded the timestamp with only the UTC zone. A new feature was introduced with version 3.13.22 (commit bdb8546), after which the timestamp value is converted using the system timezone of the client machine. However, when timestamp data is uploaded with a regular insert query, it is not affected by the timezone of the client machine but by the timezone of the Snowflake server. In other words, if the application does not change the server's timezone setting with "alter session set timezone", the timestamp value will not be converted. Therefore, if you do not set the session timezone to the same timezone as the application, the first rows of data will have a different timezone value in the sample application. To resolve this issue, could you please execute alter session set timezone = '<the timezone you want>' before inserting the data. Regards,
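A sketch of the suggested workaround, written as a hypothetical helper (name and placement are illustrative) that aligns the Snowflake session timezone with the JVM default timezone before the batch insert:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.TimeZone;

public class SessionTimezoneHelper {
    // Call this on the same connection before executing the batch insert.
    static void alignSessionTimezone(Connection conn) throws SQLException {
        String jvmTz = TimeZone.getDefault().getID();   // e.g. "Europe/Paris"
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("alter session set timezone = '" + jvmTz + "'");
        }
    }
}
```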
Hello Sujan, Sadly, setting the timezone does not change the behaviour; the data read back is still not the same as the data that was written. Regards,
Hello @ericcournarie, The engineering team is working on the fix; meanwhile, could you let us know the value of the session parameter 'CLIENT_TIMESTAMP_TYPE_MAPPING'. You can get it this way:
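(The command itself was lost in this copy of the thread; one way to read the parameter from the same JDBC connection, assuming Snowflake's SHOW PARAMETERS output with its usual key/value columns, would be:)

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ShowTimestampMapping {
    static void printTimestampTypeMapping(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "show parameters like 'CLIENT_TIMESTAMP_TYPE_MAPPING' in session")) {
            while (rs.next()) {
                // SHOW PARAMETERS returns key and value columns, among others
                System.out.println(rs.getString("key") + " = " + rs.getString("value"));
            }
        }
    }
}
```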
Regards,
Hello @sfc-gh-sghosh, @ericcournarie being on vacation, I will respond. The command returns:
If needed, a few other params that you may be interested in:
Regards,
Hello @ericcournarie, The fix has been delivered in Snowflake JDBC 3.17.0. I checked as well; it is working fine now, no error. Regards,
Hello @ericcournarie, Closing this ticket as the issue has been fixed and resolved. Regards,
Hello Sujan, sorry, I was out this summer. I confirm it's OK, thanks
Please answer these questions before submitting your issue.
In order to accurately debug the issue, this information is required. Thanks!
What version of JDBC driver are you using?
3.15.0
What operating system and processor architecture are you using?
Mac or Alma Linux
What version of Java are you using?
Java 8
What did you do?
After inserting rows with timestamp values using PreparedStatement and batch, some values are shifted when reading the rows back. The shift actually depends on the timezone of the machine you are running on (be sure not to be on UTC to see the problem).
Note that this does not happen with older driver versions like 3.13.15.
It also depends on how many columns are inserted: with 3, there was no problem; with 8, the problem appears.
The following program describes the problem (just change the boolean bug to false or true to exercise it with 3 or 8 columns): report_bug_timestamp.txt
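The attached program is not reproduced here; a condensed sketch of the reproduction it describes (table name, row count, and connection details are illustrative, with the JVM deliberately set off UTC) would be:

```java
import java.sql.*;
import java.util.Properties;
import java.util.TimeZone;

public class ReproSketch {
    public static void main(String[] args) throws SQLException {
        // The JVM must not be on UTC for the shift to be visible.
        TimeZone.setDefault(TimeZone.getTimeZone("Europe/Paris"));

        Properties props = new Properties();
        props.put("user", "<user>");
        props.put("password", "<password>");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:snowflake://<account>.snowflakecomputing.com", props);
             Statement stmt = conn.createStatement()) {

            // 8 timestamp columns reproduce the problem; 3 do not.
            stmt.execute("create or replace table repro (c1 timestamp_tz, c2 timestamp_tz, "
                    + "c3 timestamp_tz, c4 timestamp_tz, c5 timestamp_tz, c6 timestamp_tz, "
                    + "c7 timestamp_tz, c8 timestamp_tz)");

            Timestamp written = new Timestamp(1388016000000L);
            try (PreparedStatement ps = conn.prepareStatement(
                    "insert into repro values (?,?,?,?,?,?,?,?)")) {
                for (int row = 0; row < 1000; row++) {   // enough rows to hit the batch upload path
                    for (int col = 1; col <= 8; col++) {
                        ps.setTimestamp(col, written);
                    }
                    ps.addBatch();
                }
                ps.executeBatch();
            }

            // Read back and compare with the value that was written.
            try (ResultSet rs = stmt.executeQuery("select c1 from repro")) {
                while (rs.next()) {
                    Timestamp read = rs.getTimestamp(1);
                    if (read.getTime() != written.getTime()) {
                        System.out.println("Shifted: wrote " + written + ", read " + read);
                    }
                }
            }
        }
    }
}
```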
What did you expect to see?
The values read should be the ones that were written.
Can you set logging to DEBUG and collect the logs?
https://community.snowflake.com/s/article/How-to-generate-log-file-on-Snowflake-connectors
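For reference, a minimal sketch of enabling driver tracing from the connection properties (using the 'tracing' property mentioned earlier in the thread; account and credentials are placeholders, and log destination details are covered in the linked article):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TracingExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("user", "<user>");
        props.put("password", "<password>");
        props.put("tracing", "ALL");   // request maximum driver logging
        try (Connection conn = DriverManager.getConnection(
                "jdbc:snowflake://<account>.snowflakecomputing.com", props)) {
            System.out.println("Connected with tracing enabled");
        }
    }
}
```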
What is your Snowflake account identifier, if any? (Optional)