This repository has been archived by the owner on Feb 12, 2022. It is now read-only.
Last week I upgraded Phoenix from 4.2.2 to 4.3.0; the HBase version is 0.98.1-cdh5.1.0.
Today, when I ran the SQL "upsert into ua_label_email2 select * from ua_label_email" in sqlline, the following exception occurred:
Mon May 25 20:29:48 CST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@715363a8, java.io.IOException: java.io.IOException: java.lang.NegativeArraySizeException: -1
at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:78)
at org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:3200)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5541)
at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3300)
at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3282)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29501)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NegativeArraySizeException: -1
at org.apache.hadoop.hbase.util.Bytes.readByteArray(Bytes.java:210)
at org.apache.phoenix.index.IndexMaintainer.readFields(IndexMaintainer.java:884)
at org.apache.phoenix.index.IndexMaintainer.deserialize(IndexMaintainer.java:230)
at org.apache.phoenix.index.IndexMaintainer.deserialize(IndexMaintainer.java:210)
at org.apache.phoenix.index.IndexMetaDataCacheFactory.newCache(IndexMetaDataCacheFactory.java:48)
at org.apache.phoenix.cache.TenantCacheImpl.addServerCache(TenantCacheImpl.java:87)
at org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:75)
... 11 more
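For context on what the server is doing when it fails: the trace shows Bytes.readByteArray reading a serialized length from the index-metadata payload and then allocating a byte array of that size; a decoded length of -1 (which can happen when the client and the server-side coprocessor disagree on the IndexMaintainer serialization format, e.g. after a partial version upgrade) makes the allocation throw NegativeArraySizeException. A minimal stand-alone sketch of that mechanism (plain java.io, no Hadoop dependency; the class and method names here are illustrative, not Phoenix's actual code):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class VIntLengthDemo {
    // Simplified stand-in for Bytes.readByteArray: read a length prefix,
    // then allocate an array of that size. The real code decodes a vint,
    // but the failure mode is the same when the decoded length is -1.
    static byte[] readByteArray(DataInputStream in) throws IOException {
        int len = in.readInt();
        return new byte[len]; // len == -1 -> NegativeArraySizeException: -1
    }

    public static void main(String[] args) throws IOException {
        // 0xFFFFFFFF decodes as the int -1, mimicking a corrupt/mismatched
        // length prefix in the serialized index metadata.
        byte[] badPayload = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF};
        try {
            readByteArray(new DataInputStream(new ByteArrayInputStream(badPayload)));
        } catch (NegativeArraySizeException e) {
            System.out.println("NegativeArraySizeException: " + e.getMessage());
        }
    }
}
```

The point is that the -1 is not a Phoenix-computed value but a misread length prefix, which is why a serialization mismatch between the upgraded client and the region servers' coprocessor jars is the usual suspect.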
Note: the table ua_label_email2 has the same schema as ua_label_email. Below is the DDL for the two tables:
CREATE TABLE ua_label_email(
type Integer not null,
label_id Integer not null,
email_rowkey varchar(200) not null,
email varchar(200),
label_name varchar(200),
domain varchar(50),
clicked_num Integer,
opened_num Integer,
last_active_time varchar(20),
active_score integer,
last_click_date varchar(20),
last_opened_date varchar(20),
CONSTRAINT pk PRIMARY KEY (type, label_id, email_rowkey)) SALT_BUCKETS=256;
CREATE INDEX idx_ua_email ON ua_label_email(email);
CREATE INDEX idx_ua_domain ON ua_label_email(domain);
Has anybody met the same problem, or could someone help me figure it out?
Thanks very much.