encounter Error: (state=08000,code=101), when we create index on bigtable #730
Comments
I had the same issue here.
OK, thanks, I will try and get back to you.
Thanks @charlesb, your solution worked for me.
I am facing the same error. I have set phoenix.query.timeoutMs but could not resolve it.
Can you paste your hbase-site.xml?
Hi, I am getting the same error. Please help.
OK, below is my HBase configuration. I am using HDP 2.2.
I have set the phoenix.query.timeoutMs property from the Ambari web UI, but it is not reflected in the hbase-site.xml file.
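If Ambari is not propagating the setting, one workaround is to add the property directly to the hbase-site.xml on the client running sqlline. A minimal sketch, assuming a 10-minute timeout (the 600000 value is an example, not a recommendation):

```xml
<!-- Client-side hbase-site.xml fragment (sketch).
     phoenix.query.timeoutMs is the Phoenix client query timeout
     in milliseconds; 600000 ms = 10 minutes. -->
<property>
  <name>phoenix.query.timeoutMs</name>
  <value>600000</value>
</property>
```

Note that this file must be on the classpath of the Phoenix client process; setting it only on the servers will not change the client-side timeout.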
OK, here is the final hbase-site.xml configuration.
0: jdbc:phoenix:hadoopm1> select count(*) from PJM_DATASET;
0: jdbc:phoenix:hadoopm1>
Please help; what am I doing wrong?
Seems like something else times out. Have you tried to scan this table from your HBase client (hbase shell)?
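A minimal check from the hbase shell, assuming the table name PJM_DATASET from the sqlline session above (scanning a few rows is quick; `count` walks the whole table and will itself take a while on 50M rows):

```
hbase shell
> scan 'PJM_DATASET', {LIMIT => 10}
> count 'PJM_DATASET'
```

If the scan itself hangs or throws, the problem is at the HBase layer (region server health, WAL/HDFS issues) rather than in Phoenix.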
I have one master and three slaves. I uninstalled and reinstalled HBase and Phoenix on the master and installed HBase on another slave machine, but now I cannot even start the HBase master from the Ambari web UI.
Above is the current error I got from the HBase RegionServer log.
OK, one question: do we need to install Phoenix on the same machine where we have a region server?
Phoenix moved to Apache over a year ago, so this site is no longer maintained.
OK, thanks, I will.
Now I am getting this error on the region server:

2015-01-14 02:27:57,512 WARN [DataStreamer for file /apps/hbase/data/WALs/hadoopm2.dev.oati.local,60020,1421221188209/hadoopm2.dev.oati.local%2C60020%2C1421221188209.1421223957430 block BP-337983189-10.100.227.107-1397418605845:blk_1073948934_216462] hdfs.DFSClient: DataStreamer Exception
We use the performance.py script to generate a big table with 50,000,000 rows, and then we create an index on it with this SQL: CREATE INDEX iPERFORMANCE_50000000 ON PERFORMANCE_50000000(core) INCLUDE(db, ACTIVE_VISITOR); After about 10 minutes it ends with Error: (state=08000,code=101). Any suggestions? Thanks!
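Error code 101 with SQLSTATE 08000 is the Phoenix client-side operation timeout, so a synchronous CREATE INDEX over 50M rows can simply outlive phoenix.query.timeoutMs. Besides raising that timeout, later Apache Phoenix releases (not the old forcedotcom builds discussed in this thread) support building large indexes asynchronously so the statement returns immediately; a sketch, assuming such a release:

```sql
-- Sketch: asynchronous index build (Apache Phoenix 4.5+).
-- The statement only registers the index as BUILDING and returns;
-- no long-running client call, so phoenix.query.timeoutMs is not hit.
CREATE INDEX iPERFORMANCE_50000000
    ON PERFORMANCE_50000000 (core)
    INCLUDE (db, ACTIVE_VISITOR) ASYNC;
```

The actual population is then driven by Phoenix's IndexTool MapReduce job run from the command line; check the documentation of your Phoenix version for the exact invocation.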