Google Cloud Dataflow provides a simple, powerful programming model for building both batch and streaming parallel data processing pipelines. This repository hosts example pipelines that use the Cloud Dataflow SDK and demonstrate the basic functionality of Google Cloud Dataflow.
A good starting point for new users is our set of word count (java, python) examples, which compute word frequencies. This series of four successively more detailed pipelines is described in the accompanying walkthrough:
- MinimalWordCount (java, python) is the simplest word count pipeline and introduces basic concepts like Pipelines, PCollections, ParDo, and reading and writing data from external storage (a short sketch of these concepts follows this list).
- WordCount (java, python) introduces Dataflow best practices like PipelineOptions and custom PTransforms.
- DebuggingWordCount (java, python) shows how to view live metrics in the Dataflow Monitoring Interface, get the most out of Cloud Logging integration, and start writing good tests.
- WindowedWordCount (java) shows how to run the same pipeline over either unbounded PCollections in streaming mode or bounded PCollections in batch mode.
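To make those concepts concrete, here is a minimal sketch of a word count pipeline. It is not the repository's MinimalWordCount verbatim; it assumes the Cloud Dataflow SDK for Java 1.x API, and the input and output paths are placeholders you would replace.

import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.TextIO;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.Count;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;
import com.google.cloud.dataflow.sdk.values.KV;

public class MinimalWordCountSketch {
  public static void main(String[] args) {
    // A Pipeline manages a graph of PTransforms applied to PCollections.
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply(TextIO.Read.from("gs://your-bucket/input-*.txt"))        // read lines (placeholder path)
     .apply(ParDo.of(new DoFn<String, String>() {                    // split lines into words
       @Override
       public void processElement(ProcessContext c) {
         for (String word : c.element().split("[^a-zA-Z']+")) {
           if (!word.isEmpty()) {
             c.output(word);
           }
         }
       }
     }))
     .apply(Count.<String>perElement())                              // count occurrences of each word
     .apply(ParDo.of(new DoFn<KV<String, Long>, String>() {          // format results as text
       @Override
       public void processElement(ProcessContext c) {
         c.output(c.element().getKey() + ": " + c.element().getValue());
       }
     }))
     .apply(TextIO.Write.to("gs://your-bucket/wordcounts"));         // write output (placeholder path)

    p.run();
  }
}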
The examples in the example module can be built and executed from the java/examples directory by running:
mvn compile exec:java \
-Dexec.mainClass=<MAIN CLASS> \
-Dexec.args="<EXAMPLE-SPECIFIC ARGUMENTS>"
This will use the latest release of the Cloud Dataflow SDK for Java pulled from the Maven Central Repository.
For example, you can execute the WordCount pipeline on your local machine as follows:
mvn compile exec:java \
-Dexec.mainClass=com.google.cloud.dataflow.examples.WordCount \
-Dexec.args="--inputFile=<LOCAL INPUT FILE> --output=<LOCAL OUTPUT FILE>"
Once you have followed the general Cloud Dataflow Getting Started instructions, you can execute the same pipeline on fully managed resources in Google Cloud Platform:
mvn compile exec:java \
-Dexec.mainClass=com.google.cloud.dataflow.examples.WordCount \
-Dexec.args="--project=<YOUR CLOUD PLATFORM PROJECT ID> \
--stagingLocation=<YOUR CLOUD STORAGE LOCATION> \
--runner=BlockingDataflowPipelineRunner"
Make sure to use your project id, not the project number or the descriptive name.
The Cloud Storage location should be entered in the form gs://bucket/path/to/staging/directory.
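Those command-line flags are parsed into the pipeline's options. As a rough sketch of how a main method typically binds --project, --stagingLocation, and --runner (assuming the SDK's PipelineOptionsFactory and DataflowPipelineOptions, and not taken verbatim from any example):

import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.options.DataflowPipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;

public class OptionsSketch {
  public static void main(String[] args) {
    // --project, --stagingLocation, and --runner are bound to the
    // corresponding getters and setters on DataflowPipelineOptions.
    DataflowPipelineOptions options = PipelineOptionsFactory
        .fromArgs(args)
        .withValidation()
        .as(DataflowPipelineOptions.class);

    Pipeline p = Pipeline.create(options);
    // ... apply transforms here ...
    p.run();  // with BlockingDataflowPipelineRunner, run() waits for the job to finish
  }
}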
Alternatively, you may choose to bundle all dependencies into a single JAR and execute it outside of the Maven environment. For example, you can run the following commands to create a bundled JAR of the examples and execute it both locally and on Google Cloud Platform:
mvn package
java -cp target/google-cloud-dataflow-java-examples-all-bundled-manual_build.jar \
com.google.cloud.dataflow.examples.WordCount \
--inputFile=<INPUT FILE PATTERN> --output=<OUTPUT FILE>
java -cp target/google-cloud-dataflow-java-examples-all-bundled-manual_build.jar \
com.google.cloud.dataflow.examples.WordCount \
--project=<YOUR CLOUD PLATFORM PROJECT ID> \
--stagingLocation=<YOUR CLOUD STORAGE LOCATION> \
--runner=BlockingDataflowPipelineRunner
Other examples can be run similarly by replacing the WordCount classpath with the example's classpath, e.g. com.google.cloud.dataflow.examples.cookbook.BigQueryTornadoes, and adjusting the runtime options under the -Dexec.args parameter, as specified in the example itself.
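Each example declares its runtime options as a PipelineOptions interface with getter/setter pairs. The interface below is a hypothetical illustration of that pattern (the option names are made up, not those of BigQueryTornadoes); consult each example class for the options it actually accepts.

import com.google.cloud.dataflow.sdk.options.Default;
import com.google.cloud.dataflow.sdk.options.Description;
import com.google.cloud.dataflow.sdk.options.PipelineOptions;
import com.google.cloud.dataflow.sdk.options.Validation;

// Hypothetical options interface showing the general pattern; the field
// names here are illustrative only.
public interface ExampleOptions extends PipelineOptions {
  @Description("Input file pattern to read from")
  @Default.String("gs://your-bucket/input-*.txt")
  String getInputFile();
  void setInputFile(String value);

  @Description("Output location to write to")
  @Validation.Required
  String getOutput();
  void setOutput(String value);
}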
Note that when running Maven on the Microsoft Windows platform, backslashes (\) under the -Dexec.args parameter should be escaped with another backslash. For example, an input file pattern of c:\*.txt should be entered as c:\\*.txt.
The examples in this repository can be built and executed from the python directory by running the following in a virtual environment:
python setup.py install
This will use the latest release of the Cloud Dataflow SDK for Python pulled from PyPI.
For example, you can execute the WordCount pipeline on your local machine as follows:
python -m dataflow_examples.wordcount \
--inputFile=<LOCAL INPUT FILE> --output=<LOCAL OUTPUT FILE>
After you've finished running your first few word count pipelines, take a look at the cookbook (java, python) directory for some common and useful patterns like joining, filtering, and combining.
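As a rough flavor of those patterns, the sketch below filters a keyed PCollection and then sums the values per key. It is a generic illustration assuming the SDK's ParDo and Sum transforms, not code taken from the cookbook.

import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;
import com.google.cloud.dataflow.sdk.transforms.Sum;
import com.google.cloud.dataflow.sdk.values.KV;
import com.google.cloud.dataflow.sdk.values.PCollection;

public class CookbookPatternsSketch {
  // Given (word, count) pairs, keep words longer than three characters
  // and sum the counts for each remaining word.
  static PCollection<KV<String, Integer>> filterAndSum(
      PCollection<KV<String, Integer>> wordCounts) {
    return wordCounts
        .apply(ParDo.of(new DoFn<KV<String, Integer>, KV<String, Integer>>() {
          @Override
          public void processElement(ProcessContext c) {
            if (c.element().getKey().length() > 3) {   // filtering
              c.output(c.element());
            }
          }
        }))
        .apply(Sum.<String>integersPerKey());          // combining per key
  }
}

Joins follow a similar shape using the SDK's CoGroupByKey transform.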
The complete (java, python) directory contains a few realistic end-to-end pipelines.
For more information on Google Cloud Dataflow, see the following resources:
- Apache Beam
- Google Cloud Dataflow
- Apache Beam Programming Guide
- SDK Javadoc
- SDK Pydocs
- Stack Overflow posts tagged with google-cloud-dataflow
- DataflowJavaSDK repository, containing the SDK source code