---
title: 'Run Custom Programs - Azure HDInsight | Microsoft Docs'
description: ''
services: hdinsight
documentationcenter: ''
tags: azure-portal
---
Hadoop-based big data systems such as HDInsight enable data processing using a wide range of tools and technologies. This article compares the commonly used tools and technologies to help you choose the most appropriate one for your own scenarios, and then goes into detail on how to run custom map/reduce programs.
The following table shows the main advantages and considerations for each one.
| Query mechanism | Advantages | Considerations |
|---|---|---|
| **Hive** using HiveQL | Familiar SQL-like syntax; well suited to batch queries, summarization, and analysis of structured data; a low barrier to entry for anyone with SQL experience. | Works best when the source data has, or can be given, a schema; complex procedural logic requires user-defined functions (UDFs). |
| **Pig** using Pig Latin | A data-flow language well suited to transforming both structured and semi-structured data; operations can be chained into multi-step pipelines. | Requires learning Pig Latin, which is less familiar to users coming from SQL. |
| **Custom map/reduce** | Full control over each phase of the processing; can be fine-tuned for maximum performance; handles completely unstructured data. | More effort to write, test, and maintain than higher-level abstractions such as Hive or Pig. |
| **HCatalog** | Exposes a shared, table-based view of the data in storage so that Hive, Pig, and custom map/reduce code can all work against the same table definitions. | Adds a metadata layer to manage; the tabular view may not suit every data format. |
| **Apache Spark** | Fast, in-memory parallel processing; a single engine for batch jobs, interactive queries, streaming, and machine learning workloads. | Requires a cluster with sufficient memory; introduces another programming model to learn. |
Typically, use the simplest of these approaches that can produce the results you require. For example, you may be able to achieve those results by using just Hive, while more complex scenarios may require Pig, or even your own map and reduce components. You may also decide, after experimenting with Hive or Pig, that custom map and reduce components give better performance because they let you fine-tune and optimize the processing.
Map/reduce code consists of two separate functions implemented as map and reduce components. The map component is run in parallel on multiple cluster nodes, each node applying it to its own subset of the data. The reduce component collates and summarizes the results from all of the map functions (see Use MapReduce in Hadoop on HDInsight for more details on these two components).
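To make the two roles concrete, here is a minimal word-count sketch using the standard Hadoop MapReduce Java API. The `WordCount` class and its naming are illustrative only, not something HDInsight requires:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Map component: runs in parallel on each node against its own
    // subset of the input, emitting (word, 1) for every word it sees.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce component: collates the results from all of the map tasks,
    // summing the counts emitted for each distinct word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // pre-aggregates on each node
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Packaged into a jar file, a program like this is run with the `yarn jar` command shown later in this article.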
In most HDInsight processing scenarios it is simpler and more efficient to use a higher-level abstraction such as Pig or Hive, although you can create custom map and reduce components for use within Hive scripts in order to perform more sophisticated processing.
Custom map/reduce components are typically written in Java. However, Hadoop provides a streaming interface that lets you use components developed in other languages such as C#, F#, Visual Basic, Python, JavaScript, and more (a submission sketch follows the list below).
- See Develop Java MapReduce programs for Hadoop on HDInsight for a walkthrough on developing custom Java MapReduce programs.
- To see an example using Python, read Develop Python streaming MapReduce programs for HDInsight.
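A streaming job is submitted by pointing Hadoop at external mapper and reducer executables instead of classes inside a jar. As a sketch, assuming executable `mapper.py` and `reducer.py` scripts (with shebang lines) that read from stdin and write to stdout; the streaming jar path shown is typical for Linux-based HDInsight clusters and may differ on other Hadoop distributions:

```bash
yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
    -files mapper.py,reducer.py \
    -mapper mapper.py \
    -reducer reducer.py \
    -input /example/data/sample.log \
    -output /example/data/streamingoutput
```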
You might consider creating your own map and reduce components when:
- You want to process completely unstructured data by parsing it and applying custom logic to derive structured information from it.
- You want to perform complex tasks that are difficult (or impossible) to express in Pig or Hive without resorting to creating a UDF. For example, you might need to use an external geocoding service to convert latitude and longitude coordinates or IP addresses in the source data to geographical location names.
- You want to reuse your existing .NET, Python, or JavaScript code in map/reduce components. You can do this using the Hadoop streaming interface.
MapReduce programs are most commonly written in Java and compiled to a jar file. Uploading and running a custom program takes only a few steps.
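If you don't already have a build process, a minimal sketch for compiling and packaging on a machine with the Hadoop client installed follows; the file and jar names match the other examples in this article:

```bash
# Compile against the Hadoop client libraries reported by `hadoop classpath`,
# then package the compiled classes into a jar.
mkdir -p build
javac -classpath "$(hadoop classpath)" -d build WordCount.java
jar -cvf mycustomprogram.jar -C build .
```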
Once you have developed, compiled, and tested your MapReduce program, use the `scp` command to upload your jar file to the cluster headnode:

```bash
scp mycustomprogram.jar USERNAME@CLUSTERNAME-ssh.azurehdinsight.net:
```

Replace USERNAME with the SSH user account for your cluster and CLUSTERNAME with the cluster name. If you used a password to secure the SSH account, you are prompted to enter it. If you used a certificate, you may need to use the `-i` parameter to specify the private key file.
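For example, assuming the private key is stored at `~/.ssh/id_rsa`:

```bash
scp -i ~/.ssh/id_rsa mycustomprogram.jar USERNAME@CLUSTERNAME-ssh.azurehdinsight.net:
```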
Next, connect to the cluster using SSH.
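Using the same placeholders as before:

```bash
ssh USERNAME@CLUSTERNAME-ssh.azurehdinsight.net
```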
From the SSH session, execute your MapReduce program through YARN:

```bash
yarn jar mycustomprogram.jar mynamespace.myclass /example/data/sample.log /example/data/logoutput
```

This command submits the MapReduce job to YARN. The input file is `/example/data/sample.log`, and the output directory is `/example/data/logoutput`. Both the input file and the output files are stored in the default storage for the cluster.
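When the job completes, you can inspect the results from the same SSH session. MapReduce writes one output file per reducer, conventionally named `part-r-00000`, `part-r-00001`, and so on:

```bash
hdfs dfs -ls /example/data/logoutput
hdfs dfs -cat /example/data/logoutput/part-r-00000
```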
This article introduced the landscape of commonly used tools for processing your data and then detailed how to run custom MapReduce programs. To learn more about the available data processing tools, and about creating and running custom programs that use them, follow the links below.
- Use C# with MapReduce streaming on Hadoop in HDInsight
- Develop Java MapReduce programs for Hadoop on HDInsight
- Develop Python streaming MapReduce programs for HDInsight
- Use Azure Toolkit for Eclipse to create Spark applications for an HDInsight cluster
- Use Python User Defined Functions (UDF) with Hive and Pig in HDInsight