Cloud Dataflow: A Unified Model for Batch and Streaming Data Processing


Dataflow is a fully managed service for executing pipelines within the Google Cloud Platform ecosystem. It is dedicated to transforming and enriching data in both stream (real-time) and batch (historical) modes. It takes a serverless approach, so users can focus on programming instead of managing server clusters, and it integrates with Stackdriver, which lets you monitor and troubleshoot pipelines while they are running. It also acts as a convenient integration point where TensorFlow machine learning models can be added to data processing pipelines.

History

Google Cloud Dataflow was announced in June 2014 and released to the general public as an open beta in April 2015. In January 2016 Google donated the underlying SDK, the implementation of a local runner, and a set of IOs (data connectors) to access Google Cloud Platform data services to the Apache Software Foundation. The donated code formed the original basis for the Apache Beam project.

Overview

Here’s an overview of what we know about Dataflow:

  • It’s multifunctional: As a generalization, most database technologies have one specialty, such as batch processing or lightning-fast analytics. Google Cloud Dataflow counts ETL, batch processing, and streaming real-time analytics among its capabilities.
  • It aims to address the performance issues of MapReduce when building pipelines: Google was the first to develop MapReduce, and the model has since become a core component of Hadoop. Cloud Dataflow has now largely replaced MapReduce at Google; the company apparently stopped using it “years ago,” according to Google’s Senior VP of Technical Infrastructure.
  • It’s good with big data: The senior VP also stated that MapReduce performance started to sharply decline when handling multi-petabyte datasets. Cloud Dataflow apparently offers much better performance on large datasets.
  • The coding model is pretty straightforward: The Google blog post describes the underlying service as “language-agnostic”, but the first SDK is for Java. All datasets are represented as PCollections (“parallel collections”). It includes a “rich” library of PTransforms (parallel transforms), including ParDo (similar to the Map and Reduce functions, or WHERE in SQL) and GroupByKey (similar to the shuffle step of MapReduce, or GROUP BY and JOIN in SQL). A starter set of these transforms can be used out of the box, including Top, Count and Mean (see the sketch after this list).
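
To make the coding model concrete, here is a minimal word-count sketch using the Apache Beam Python SDK, where each step produces a new PCollection and each labeled transform is a PTransform; the bucket paths are hypothetical placeholders, not part of the original article.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Every "label" >> transform below is a PTransform applied in parallel,
# and each step yields a new PCollection.
with beam.Pipeline(options=PipelineOptions()) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/input.txt")    # hypothetical input path
        | "Split" >> beam.FlatMap(lambda line: line.split())            # a ParDo-style element-wise transform
        | "CountPerWord" >> beam.combiners.Count.PerElement()           # built-in Count (GroupByKey + combine)
        | "Write" >> beam.io.WriteToText("gs://my-bucket/word_counts")  # hypothetical output path
    )

The same pipeline definition runs unchanged in batch or streaming mode; only the source, sink, and pipeline options differ.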

The Elemental Principle of Cloud Dataflow

Google Cloud Dataflow follows a stream-processing model that breaks the computation into stages (pipelines) that run concurrently. Basically, it provides an analytics solution to ingest, process, and analyze event streams on a fully managed infrastructure. By focusing on stream analytics, we get a faster alternative to batch ETL (Extract-Transform-Load) for getting the maximum value from user-interaction events and from application and machine logs. We can ingest millions of streaming events per second from anywhere in the world. Stream analytics in GCP simplifies ETL pipelines without compromising robustness, accuracy, or functionality. Cloud Dataflow supports fast pipeline development via expressive Java and Python APIs in the Apache Beam SDK, which provides a rich set of windowing and session-analysis primitives as well as an ecosystem of source and sink connectors.
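
As an illustration of those windowing primitives, the following sketch (Apache Beam Python SDK) counts events per fixed 60-second window over a streaming source; the Pub/Sub topic name is a hypothetical placeholder.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions
from apache_beam.transforms import window

options = PipelineOptions()
options.view_as(StandardOptions).streaming = True  # run the pipeline in streaming mode

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/events")            # hypothetical Pub/Sub topic
        | "Decode" >> beam.Map(lambda msg: msg.decode("utf-8"))
        | "FixedWindows" >> beam.WindowInto(window.FixedWindows(60))  # 60-second windows
        | "PairWithOne" >> beam.Map(lambda _: ("events", 1))
        | "CountPerWindow" >> beam.CombinePerKey(sum)             # one count per window
        | "Log" >> beam.Map(print)
    )

Session windows (beam.transforms.window.Sessions) can be swapped in the same way when per-user session analysis is needed.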

Cloud Dataflow allows you to gain actionable insights from your data while lowering operational costs, without the hassle of deploying, maintaining, or scaling infrastructure.

Google Cloud

Google Cloud Dataflow supports fast, simplified pipeline development through expressive SQL, Java, and Python APIs in the Apache Beam SDK. It integrates with Stackdriver, which lets us monitor and troubleshoot pipelines while they are running. Google Cloud Dataflow also provides seamless integration with GCP services for streaming event ingestion (Cloud Pub/Sub), data warehousing (BigQuery), machine learning (Cloud Machine Learning), and more.
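
A hedged sketch of that Pub/Sub-to-BigQuery integration is shown below, assuming a subscription carrying JSON events and a pre-created BigQuery table; the project, subscription, and table names are placeholders.

import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

options = PipelineOptions()
options.view_as(StandardOptions).streaming = True

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/events-sub")  # hypothetical subscription
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:analytics.events",                                 # hypothetical table
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
    )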

References

Apache Beam is an open-source, unified model for defining both batch and streaming data-parallel processing pipelines. Using one of the open-source Beam SDKs, you build a program that defines the pipeline. The pipeline is then executed by one of Beam’s supported distributed processing back-ends, which include Apache Flink, Apache Spark, and Google Cloud Dataflow.
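
The choice of back-end is expressed purely through pipeline options, as in this minimal sketch; the project id, region, and bucket are hypothetical placeholders.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# The same pipeline definition can target different runners; only the options change.
options = PipelineOptions(
    runner="DataflowRunner",              # or "DirectRunner" for a local run
    project="my-project",                 # hypothetical project id
    region="us-central1",
    temp_location="gs://my-bucket/temp",  # hypothetical staging bucket
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Create" >> beam.Create(["stream", "batch", "stream"])
        | "Count" >> beam.combiners.Count.PerElement()
        | "Print" >> beam.Map(print)
    )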
