
SNOW-1778959: Binary value '14843418AF05FDC3C69C' is too long and would be truncated #1952

Closed
amergey opened this issue Oct 31, 2024 · 7 comments
Labels: invalid, status-information_needed (Additional information is required from the reporter), status-triage_done (Initial triage done, will be further handled by the driver team)

Comments


amergey commented Oct 31, 2024

Please answer these questions before submitting your issue.
In order to accurately debug the issue this information is required. Thanks!

  1. What version of JDBC driver are you using?
    3.19.0

  2. What operating system and processor architecture are you using?
    Ubuntu 22.04.5

  3. What version of Java are you using?
    JDK 17

  4. What did you do?

Execute an insert in a table with a JDBC prepared statement binding a byte[] value into a binary column

create or replace HYBRID TABLE DM_DATALOC_TEST_AM.TSTUS_JDBC_STEP_INST_REPO_TEST.WF_ATTACHMENT (
	ID VARCHAR(36) NOT NULL COMMENT 'VARCHAR',
	WORKFLOW_INSTANCE_ID NUMBER(38,0) COMMENT 'NUMBER',
	STEP_INSTANCE_ID NUMBER(38,0) COMMENT 'NUMBER',
	FILE_NAME VARCHAR(128) NOT NULL COMMENT 'VARCHAR',
	CONTENT BINARY(8388608) NOT NULL COMMENT 'BLOB',
	constraint PKWF_ATTACHMENT primary key (ID)
);

insert into WF_ATTACHMENT (
    ID,
    STEP_INSTANCE_ID,
    FILE_NAME,
    CONTENT
)
values (
    ?,
    ?,
    ?,
    ?
)

It fails with

Binary value '418E3F4240F7038405B8' is too long and would be truncated

The odd thing is, if I debug my Java program and put a breakpoint to execute step by step, then it succeeds.

A normal run fails every time, while debugging with a breakpoint succeeds every time, so there is probably some race condition.

  5. What did you expect to see?

It should succeed: the table is created with the maximum binary size, and I am setting a 10-byte byte array.

  6. Can you set logging to DEBUG and collect the logs?
  [net.snowflake.client.core.SFStatement] DEBUG : ExecuteQuery: insert into WF_ATTACHMENT (
    ID,
    STEP_INSTANCE_ID,
    FILE_NAME,
    CONTENT
)
values (
    ?,
    ?,
    ?,
    ?
)
[net.snowflake.client.core.SFStatement] DEBUG : JDBC conservative memory usage is enabled.
[net.snowflake.client.core.SFStatement] DEBUG : conservativeResultChunkSize changed from 0 to 48
[net.snowflake.client.core.QueryContextCache] DEBUG :  Cache Entry: id: 0 readTimestamp: 1730391796448935 priority: 0
[net.snowflake.client.core.QueryContextCache] DEBUG :  Cache Entry: id: 213157808 readTimestamp: 1730391796423668 priority: 1
[net.snowflake.client.core.StmtUtil] DEBUG : queryContextDTO: {"entries":[{"id":0,"timestamp":1730391796448935,"priority":0,"context":null},{"id":213157808,"timestamp":1730391796423668,"priority":1,"context":{}}]}
[net.snowflake.client.core.StmtUtil] DEBUG : JSON: {"sqlText":"insert into WF_ATTACHMENT (\n    ID,\n    STEP_INSTANCE_ID,\n    FILE_NAME,\n    CONTENT\n)\nvalues (\n    ?,\n    ?,\n    ?,\n    ?\n)","sequenceId":263,"bindings":{"1":{"type":"TEXT","fmt":null,"schema":null,"value":"337a51ad-5ba9-4899-b26d-73038b87642e"},"2":{"type":"FIXED","fmt":null,"schema":null,"value":"102"},"3":{"type":"TEXT","fmt":null,"schema":null,"value":"file 2"},"4":{"type":"BINARY","fmt":null,"schema":null,"value":"14843418AF05FDC3C69C"}},"bindStage":null,"describeOnly":false,"parameters":{"CLIENT_RESULT_CHUNK_SIZE":48},"queryContextDTO":{"entries":[{"id":0,"timestamp":1730391796448935,"priority":0,"context":null},{"id":213157808,"timestamp":1730391796423668,"priority":1,"context":{}}]},"describedJobId":null,"querySubmissionTime":1730391796540,"isInternal":false,"asyncExec":false}
[net.snowflake.client.core.HttpUtil] DEBUG : Executing request with OCSP enabled: null
[net.snowflake.client.core.HttpUtil] DEBUG : Pool: [leased: 1; pending: 0; available: 2; max: 300] Executing: POST https://host.snowflakecomputing.com:443/queries/v1/query-request?requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b HTTP/1.1
[net.snowflake.client.jdbc.RestRequest] DEBUG : [requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b] Executing rest request: POST https://host.snowflakecomputing.com:443/queries/v1/query-request?requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b HTTP/1.1, retry timeout: 0, socket timeout: 0, max retries: 7, inject socket timeout: 0, canceling: false, without cookies: false, include retry parameters: true, include request guid: true, retry http 403: false, no retry: false
[net.snowflake.client.jdbc.RestRequest] DEBUG : [requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b] Retry count: 0, max retries: 7, retry timeout: 0 s, backoff: 1000 ms. Attempting request: POST https://host.snowflakecomputing.com:443/queries/v1/query-request?requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b HTTP/1.1
[net.snowflake.client.jdbc.RestRequest] DEBUG : [requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b] Request POST https://host.snowflakecomputing.com:443/queries/v1/query-request?requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b HTTP/1.1 guid: 5cdd11fc-acaf-4ff4-bd8a-09f2ca52864c
[net.snowflake.client.jdbc.RestRequest] DEBUG : HTTP response code for request POST https://host.snowflakecomputing.com:443/telemetry/send HTTP/1.1: 200
[net.snowflake.client.jdbc.RestRequest] DEBUG : Execution of request POST https://host.snowflakecomputing.com:443/telemetry/send HTTP/1.1 took 152 ms with total of 0 retries
[net.snowflake.client.core.HttpUtil] DEBUG : Pool: [leased: 1; pending: 0; available: 2; max: 300] Request returned for: POST https://host.snowflakecomputing.com:443/telemetry/send HTTP/1.1 took 153 ms
[net.snowflake.client.core.SFBaseSession] DEBUG : Sending telemetry took 153 ms. Batch size: 2
[net.snowflake.client.jdbc.RestRequest] DEBUG : [requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b] HTTP response code for request POST https://host.snowflakecomputing.com:443/queries/v1/query-request?requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b HTTP/1.1: 200
[net.snowflake.client.jdbc.RestRequest] DEBUG : [requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b] Execution of request POST https://host.snowflakecomputing.com:443/queries/v1/query-request?requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b HTTP/1.1 took 250 ms with total of 0 retries
[net.snowflake.client.core.HttpUtil] DEBUG : Pool: [leased: 0; pending: 0; available: 3; max: 300] Request returned for: POST https://host.snowflakecomputing.com:443/queries/v1/query-request?requestId=090b9de2-89f7-4053-9b57-3a3fcfc9917b HTTP/1.1 took 250 ms
[net.snowflake.client.jdbc.SnowflakeSQLException] DEBUG : Snowflake exception: Binary value '14843418AF05FDC3C69C' is too long and would be truncated, sqlState: 22000, vendorCode: 100145, queryId: 01b80fb7-0004-18dc-000c-b48704462f56
[net.snowflake.client.jdbc.SnowflakeStatementV1] TRACE : close()
[net.snowflake.client.core.SFStatement] TRACE : void close()
@amergey amergey added the bug label Oct 31, 2024
@github-actions github-actions bot changed the title Binary value '418E3F4240F7038405B8' is too long and would be truncated SNOW-1778959: Binary value '418E3F4240F7038405B8' is too long and would be truncated Oct 31, 2024
@amergey amergey changed the title SNOW-1778959: Binary value '418E3F4240F7038405B8' is too long and would be truncated SNOW-1778959: Binary value '14843418AF05FDC3C69C' is too long and would be truncated Oct 31, 2024
@sfc-gh-wfateem sfc-gh-wfateem self-assigned this Oct 31, 2024
@sfc-gh-wfateem sfc-gh-wfateem added the status-triage Issue is under initial triage label Oct 31, 2024
@sfc-gh-wfateem
Collaborator

@amergey thanks for raising this issue.

That error is returned from the backend; it's not a JDBC driver issue unless there's a problem with the data that it's sending and it's not the same data you were expecting to see.

Does that binary data 418E3F4240F7038405B8 look right to you in terms of what you tried to send with the JDBC driver, or has this been modified?

If the error is showing you a value that you're not expecting, then can you provide an example code snippet showing how you generated this data and how you prepared your statement in your Java code?

Otherwise, if the data matches what you sent, then it's not a JDBC issue. You will need to open a support case to have that looked at.

@sfc-gh-wfateem sfc-gh-wfateem added status-information_needed Additional information is required from the reporter and removed status-triage Issue is under initial triage labels Oct 31, 2024
@amergey
Author

amergey commented Oct 31, 2024

@sfc-gh-wfateem thanks for your answer.
Actually I call setObject on the prepared statement with a byte array (10 bytes) as the parameter, to be set as the binary column value. The JDBC driver transforms the byte array into 418E3F4240F7038405B8, which corresponds to the byte array passed as argument, converted into a hex string.
So in a sense it is the expected value (or not), depending on how the JDBC driver is supposed to work and treat bytes in prepared statements.
A direct SQL execution through the Snowflake front end, or through a regular statement, works; that's why I suspected an issue related to the JDBC driver (with binary in prepared statements, or the communication between the JDBC driver and the backend). But if you think it is a backend issue, then I am sorry to have opened this and I will open a support case instead.
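(Editorial illustration, not part of the original comment.) The 20-character hex value from the issue title does decode to exactly 10 bytes, consistent with the description above. A minimal sketch of the encoding, with the byte values reconstructed from the hex string in the error message:

```java
public class HexDemo {
    // Hex-encode a byte array as two uppercase hex digits per byte, which is
    // the form the bind value takes in the driver's debug logs.
    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(String.format("%02X", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // The 10 bytes whose hex form appears in the error message.
        byte[] tenBytes = {0x14, (byte) 0x84, 0x34, 0x18, (byte) 0xAF,
                           0x05, (byte) 0xFD, (byte) 0xC3, (byte) 0xC6, (byte) 0x9C};
        System.out.println(tenBytes.length);  // 10
        System.out.println(toHex(tenBytes));  // 14843418AF05FDC3C69C
    }
}
```

So a 20-character value in the error corresponds to a 10-byte payload, far below the declared BINARY(8388608) limit.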

@sfc-gh-dszmolka sfc-gh-dszmolka added the status-triage Issue is under initial triage label Nov 1, 2024
@sfc-gh-wfateem sfc-gh-wfateem removed the status-information_needed Additional information is required from the reporter label Nov 1, 2024
@sfc-gh-wfateem
Collaborator

Thanks for the response @amergey.
Yes, the key point here is that you were able to insert the same data into the binary column outside of the JDBC driver.
I'll take a look.

@sfc-gh-wfateem sfc-gh-wfateem added status-information_needed Additional information is required from the reporter status-triage_done Initial triage done, will be further handled by the driver team and removed bug status-triage Issue is under initial triage labels Dec 16, 2024
@sfc-gh-wfateem
Collaborator

sfc-gh-wfateem commented Dec 16, 2024

@amergey I just got around to checking this. I'm not seeing the issue you reported.
Here's a code snippet:

try (Connection connection = getConnection()) {
  Statement statement = connection.createStatement();
  statement.execute(
      "CREATE OR REPLACE HYBRID TABLE h_table (id NUMBER PRIMARY KEY AUTOINCREMENT START 1 INCREMENT 1, CONTENT BINARY(8388608) NOT NULL)");
  try (SnowflakePreparedStatementV1 stmt =
      (SnowflakePreparedStatementV1)
          connection.prepareStatement("insert into h_table select ?, ?")) {

    byte[] binaryData = "ABCDEFGHIJKLMNOPQRSTUVWXYZ".getBytes(StandardCharsets.UTF_8);
    stmt.setInt(1, 1);
    stmt.setObject(2, binaryData);
    stmt.executeUpdate();
  }
}

That runs fine and I confirmed that the data landed in that table. Refer to the screenshot below showing the results of a select query on that same table:

[Screenshot: results of a SELECT on h_table showing the inserted row]

Can you share the code you used to reproduce your issue, please?

The only way I would be able to reproduce your issue is if I attempt to insert that data into a narrower column. The binary data I created is 26 bytes, so if I declare my Binary column as BINARY(N) where N >= 26 then that's no problem. However, if N <= 25 then understandably that will fail, with this error:
net.snowflake.client.jdbc.SnowflakeSQLException: Binary value '4142434445464748494A4B4C4D4E4F505152535455565758595A' is too long and would be truncated

So I don't quite understand how you were able to get into the situation where you get this error for a BINARY(8388608) column:
Binary value '14843418AF05FDC3C69C' is too long and would be truncated
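(Editorial illustration, not part of the original comment.) The 26-byte test string from the snippet above does hex-encode to the exact value shown in the BINARY(N <= 25) error, which is consistent with the length-based explanation:

```java
import java.nio.charset.StandardCharsets;

public class BinaryLengthCheck {
    // Hex-encode bytes as two uppercase hex digits per byte, matching the
    // value shown in the SnowflakeSQLException message.
    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(String.format("%02X", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] data = "ABCDEFGHIJKLMNOPQRSTUVWXYZ".getBytes(StandardCharsets.UTF_8);
        System.out.println(data.length);  // 26, so BINARY(N) needs N >= 26
        System.out.println(toHex(data));  // 4142434445464748494A4B4C4D4E4F505152535455565758595A
    }
}
```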

@sfc-gh-wfateem
Collaborator

I've been testing this continuously and I'm unable to reproduce it except in the expected scenario where the data length exceeds the column's length. My conclusion is that the error you were seeing came from a different schema, where the column size is smaller than 10 bytes.

I'm going to close this on the assumption that it's a user error, but please do come back and reopen it if you have specific code you can share with us that reproduces the issue, and we'll be happy to take another look at it.

@sfc-gh-wfateem sfc-gh-wfateem closed this as not planned (won't fix / can't repro / duplicate / stale) Dec 18, 2024
@amergey
Author

amergey commented Dec 18, 2024

Thanks @sfc-gh-wfateem for investigating. I have not managed to produce code that reproduces it; it seems to be caused by some race condition. When I debug or try to isolate it, everything works fine. I hit the issue frequently in the running code, but not always, and too much code is involved in the full application. It seems to happen when 2 inserts occur almost simultaneously.

[screenshots]

The error returned is probably not the real one; it makes no sense, since 10 bytes should not overflow the binary column.

[screenshot]

I am pretty sure it is not a user error but a bug somewhere

@sfc-gh-wfateem
Collaborator

@amergey I just wasn't 100% sure whether the error you shared (data of 10 bytes) actually came from an insert into the BINARY column of 8 MB.

I'm not sure your issue is going to be a straightforward problem to debug in GitHub, and based on the screenshot you just shared, I'm not even sure this is really a JDBC driver issue. In any case, it probably makes more sense for you to create a support ticket, because this will require JDBC debug logs, and you definitely don't want to share those here.
