Introduction
This blog post provides a step-by-step guide for incremental data load from Oracle to Azure Database for PostgreSQL – Flexible Server using Azure Data Factory dataflows. With this solution, no additional setup is required in the source Oracle database, as is typically needed with LogMiner or a traditional change data capture (CDC) tool.
The solution has two parts. Part 1 covers creating the Azure Data Factory dataflows, pipelines, and triggers; Part 2 covers the remainder of the solution.
Create dataflows in Azure Data Factory
Carry out the following steps to create and run the complete incremental load.
Source table data from Oracle
In this step, use the datasets created in Azure Blob Storage from the Oracle tables. A data preview is available at each step by enabling the debug option.
Create a dataflow and add a source transformation for the source table dataset, as shown below.
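As a rough sketch, the data flow script behind such a source transformation could look like the following, assuming hypothetical ID, NAME, and LAST_MODIFIED columns; the actual projection comes from the Blob Storage dataset created earlier, so the column list will differ for each table:

    source(output(
            ID as integer,
            NAME as string,
            LAST_MODIFIED as string),
        allowSchemaDrift: true,
        validateSchema: false) ~> SourceTableData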
Watermark table source data
Repeat the above step for the watermark table, adding a source transformation for it as well.
Create a derived column from the source table data
To create a derived column on the source data, use the + sign next to the source transformation and select the Derived Column option.
Select LAST_MODIFIED for the derived column. If ADF identifies the column as a string data type, it can easily be converted to a timestamp using the expression builder.
Expression builder
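As a sketch, assuming the extracted LAST_MODIFIED values arrive as strings such as 2024-01-15 10:30:00, the expression entered against a new column (named, for example, LAST_MODIFIED_TS; the name is only illustrative) could be the following. The format mask must match how the value is stored in the extracted file:

    toTimestamp(LAST_MODIFIED, 'yyyy-MM-dd HH:mm:ss')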
Join the watermark table with the derived column from the source table
As shown below, select the derived column stream (built from the source table) as the left stream, and join it with the watermark table stream, which holds the LAST_MODIFIED time recorded in the last run.
In the example below, a custom cross join condition is used to filter the incremental data (delta) changed after the last job run; a left outer join or an inner join can also be used, depending on the join conditions.
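For illustration only, assuming the derived column is named LAST_MODIFIED_TS and the watermark stream exposes a WATERMARK_VALUE column (both hypothetical names), the custom cross join condition could be written as:

    LAST_MODIFIED_TS > WATERMARK_VALUE

Because the watermark table typically holds a single row per source table, the cross join simply makes the stored watermark available alongside every source row that passes the comparison.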
Filter only the last modified (incremental) records
In this step, the objective is to filter the rows whose LAST_MODIFIED value is greater than the last run value stored in the watermark table.
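Reusing the hypothetical column names from above, the filter expression could look like the sketch below; the null check simply guards against rows whose timestamp could not be parsed, and if the comparison was already applied in the join condition, the filter re-asserts it:

    !isNull(LAST_MODIFIED_TS) && LAST_MODIFIED_TS > WATERMARK_VALUE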
Select the columns required for the incremental load
In this step, the objective is to select the columns required for the incremental load. Column names can be changed, or rule-based mappings applied, during this step.
Add Alter Row for the UPSERT in the target
In this step, the objective is to configure an Alter Row transformation that marks the changed rows for UPSERT before they are applied to the target Azure Database for PostgreSQL:
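A minimal sketch of that condition is shown below; in the Alter Row settings this corresponds to selecting Upsert if and marking every incoming row, since the preceding steps have already narrowed the stream down to the changed rows:

    upsertIf(true())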
Move the data to the target database
In this step, add a sink to UPSERT the data into the target database, i.e., Azure Database for PostgreSQL – Flexible Server.
Navigate to the Settings tab, make sure that Allow upsert is enabled, and select the primary key column as the key column. The objective is to update or insert the data based on the primary key values.
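Expressed in data flow script terms, the sink options might look like the sketch below, where AlterRowUpsert, SinkToPostgreSQL, and the ID key column are all illustrative names:

    AlterRowUpsert sink(
        deletable: false,
        insertable: false,
        updateable: false,
        upsertable: true,
        keys: ['ID']) ~> SinkToPostgreSQL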
The data can be inspected and previewed before loading it to the target database by enabling the debug option.
Data flow for the delete records
Repeat the above steps for the delete records, ensuring that the sink settings have Allow delete enabled and that the key column selected is the primary key column.
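As a sketch, the Alter Row condition for this flow could be deleteIf(true()), with the sink options flagged roughly as follows (again with illustrative stream names and an illustrative ID key column):

    AlterRowDelete sink(
        deletable: true,
        insertable: false,
        updateable: false,
        upsertable: false,
        keys: ['ID']) ~> SinkDeletedRows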
Update the watermark table in the source Oracle database
Once the dataflow configuration is complete, the last step is to update the watermark table in the Oracle database so that the next run picks up only the new incremental data.
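A minimal sketch of such an update, assuming a hypothetical WATERMARK table keyed by TABLE_NAME and a source table named ORDERS; adapt the names, and the pipeline activity used to execute the statement, to your environment:

    UPDATE watermark
       SET watermark_value = (SELECT MAX(last_modified) FROM orders)
     WHERE table_name = 'ORDERS';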
Create pipelines and triggers
Once the dataflows are created and configured correctly, the next step is to create pipelines and schedule triggers to run the jobs at a given interval. Although ADF supports intervals as short as 1 minute, considering the data volume and the time needed to complete the whole workflow, our recommendation is to schedule it every 30 minutes.
Final Thoughts
We hope that this post has helped you configure incremental load using ADF and has described scenarios and options for using this solution to migrate your databases successfully. If you have feedback or suggestions for improving this data migration asset, please contact the Azure Databases SQL Customer Success Engineering Team. Thanks for your support!