Data Productivity Cloud Pipeline

Author: Matillion
Date Posted: Aug 22, 2024
Last Modified: Nov 29, 2024

Pre-built pipelines for loading streamed data

Process the latest files from cloud storage to maintain tables in your cloud data platform.

If you have configured a streaming pipeline with Amazon S3 or Azure Blob Storage as the destination, you can use these Data Productivity Cloud pre-built pipelines to load the resulting Avro files into Snowflake or Databricks.
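
For context, the streaming pipeline writes Avro container files to the cloud storage destination, and the pre-built pipelines pick up the newest of those files on each run. If you want to sanity-check the files before wiring up the load, the following minimal Python sketch lists and peeks at the most recent objects. It is not part of the pre-built pipelines; it assumes the boto3 and fastavro packages, and the bucket and prefix names are hypothetical placeholders.

    import io

    import boto3
    from fastavro import reader

    s3 = boto3.client("s3")
    BUCKET = "my-streaming-bucket"  # hypothetical: your streaming destination bucket
    PREFIX = "streaming/orders/"    # hypothetical: the prefix your pipeline writes to

    # List objects under the prefix and keep the five most recently written.
    objects = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX).get("Contents", [])
    latest = sorted(objects, key=lambda o: o["LastModified"], reverse=True)[:5]

    for obj in latest:
        # Buffer each object so fastavro can read the Avro container format.
        body = io.BytesIO(s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read())
        for record in reader(body):
            print(obj["Key"], record)
            break  # peek at the first record of each file only

The Azure Blob Storage equivalent would use the azure-storage-blob package in place of boto3; the Avro reading step is the same.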


Requirements

To load the files into Databricks, you must use a Data Productivity Cloud project configured with a Hybrid SaaS agent.

To load the files into Snowflake, you can use a Data Productivity Cloud project configured with either a Full SaaS or Hybrid SaaS agent.


Installation

  1. Download the latest zip file for your target data platform below.
  2. Open a branch on your Data Productivity Cloud project.
  3. If you already have a folder named “Matillion Pre-built Pipelines” in the root of your project, delete it.
  4. Hover over the root folder in your project, click the three-dot menu, and select “Import”.
[Image: Import pipelines into the root of the project]
  5. Browse to and select the zip file.
  6. You should now have a folder named “Matillion Pre-built Pipelines” containing the latest version of the pipelines.
[Image: Latest pipelines imported into a project]

Usage

Open the orchestration pipeline “Matillion Pre-built Pipelines > Example”.

Follow the instructions in the notes in this pipeline to copy the Run Orchestration component into your own orchestration pipeline, and configure it to load your Avro files into your data platform.

[Image: Configure the Run Orchestration component to load the files into your data platform]
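
The Run Orchestration component performs the load for you, so no SQL is needed on your side. Purely to illustrate the kind of operation involved when Avro files are loaded from an external stage into Snowflake, here is a hedged Python sketch using the snowflake-connector-python package. Every name below (credentials, stage, table) is a hypothetical placeholder, and this is not the statement the pre-built pipeline executes internally.

    import snowflake.connector

    # Hypothetical connection details; in practice these come from your
    # Data Productivity Cloud project configuration.
    conn = snowflake.connector.connect(
        account="my_account",
        user="my_user",
        password="my_password",
        warehouse="my_warehouse",
        database="my_database",
        schema="my_schema",
    )

    with conn.cursor() as cur:
        # STREAM_STAGE is assumed to point at the S3 or Azure Blob Storage
        # location the streaming pipeline writes to; RAW_EVENTS is assumed to
        # have a single VARIANT column, which is how Snowflake ingests Avro.
        cur.execute("""
            COPY INTO RAW_EVENTS
            FROM @STREAM_STAGE
            FILE_FORMAT = (TYPE = AVRO)
        """)
    conn.close()

On Databricks, the analogous load would typically be a COPY INTO against a Delta table with FILEFORMAT = AVRO.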

Downloads

Licensed under: Matillion Free Subscription License

Installation Instructions

How to Install a Data Productivity Cloud Pipeline