This page lists all of the examples.
- Connect to Big SQL from Spark (e.g. on Data Science Experience) [BigSQL]
- Connect to WebHDFS from Spark (e.g. on Data Science Experience) [WebHDFS]
- Perform an HDFS service check via the Ambari REST API using Groovy [Ambari]
- List installed services via the Ambari REST API using Groovy [Ambari]
- Show the hostname running the master component for the BIGSQL, HIVE, KNOX and BIGR services via the Ambari REST API using Groovy [Ambari]
- Connect to Big SQL via JDBC using Groovy [BigSQLGroovy]
- Export a table to a CSV file using Groovy [BigSQLGroovy]
- Create and query an external table on HDFS using Groovy [BigSQLGroovy]
- Set up federation to DashDB using Groovy [BigSQLGroovy]
- Create and query an HBase table using Groovy [BigSQLGroovy]
- Insert into and query a table using Groovy [BigSQLGroovy]
- Load a table from a file and query it using Groovy [BigSQLGroovy]
- Connect to Big SQL via JDBC using Java [BigSQLJava]
- Connect to HBase and check the system version, cluster version and status using Groovy [HBaseGroovy]
- Manipulate schemas and execute CRUD operations using Groovy [HBaseGroovy]
- Connect to HBase and check the system version, cluster version and status using Java [HBaseJava]
- List directory contents, create a directory and upload a file using Groovy [WebHdfsGroovy]
- List directory contents, create a directory and upload a file using cURL [WebHdfsCurl]
- Create and drop a table using Groovy [HiveGroovy]
- Create and drop a table using Java [HiveJava]
- Start a Hive Beeline session [HiveBeeline]
- Download and configure the Jsqsh shell for Big SQL [Jsqsh]
- Run a Knox shell client session [Knoxshell]
- Compile a MapReduce Java application and submit it to Oozie using Groovy [OozieWorkflowMapReduceGroovy]
- Compile a MapReduce Java application and submit it to Oozie using cURL [OozieWorkflowMapReduceCurl]
- Compile a Spark Java application and submit it to Oozie using Groovy [OozieWorkflowSparkGroovy]
- Perform a simple word count using PySpark [SparkWordCountPython]
- Perform a simple word count using Scala [SparkWordCountScala]
- Spark Streaming MessageHub consumer in Scala [SparkMessageHubScala]
- Spark Streaming MessageHub consumer in Scala running on YARN [SparkMessageHubScalaYarn]
- Download, configure and run SquirrelSQL [SquirrelSQL]
- Execute a Hive Job using Groovy [WebHCatGroovy]
- Execute a MapReduce Job using Groovy [WebHCatGroovy]
- Execute a Pig Job using Groovy [WebHCatGroovy]
- Install, configure and run Zeppelin, then run a PySpark notebook [Zeppelin]
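
The Spark word-count examples above (PySpark and Scala) both reduce to the same computation: split the input into tokens and tally each one. For reference, here is a minimal plain-Python sketch of that computation, using `collections.Counter` as a stand-in for Spark's map/reduce so no cluster is required (`word_count` is an illustrative name, not taken from the examples):

```python
from collections import Counter

def word_count(text):
    # Lowercase, split on whitespace, and tally each token --
    # the same map (tokenize) and reduce (count) steps the
    # Spark examples distribute across a cluster.
    return Counter(text.lower().split())

counts = word_count("to be or not to be")
# counts["to"] == 2, counts["be"] == 2, counts["or"] == 1
```

In the actual Spark examples the same logic is expressed as `flatMap` (tokenize), `map` (pair each word with 1) and `reduceByKey` (sum the counts), which lets the tally run in parallel over a large file instead of a single string.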
More examples coming soon ...