New data flow features: Import schema and test connection from debug cluster, custom sink ordering

This post has been republished via RSS; it originally appeared at: New blog articles in Microsoft Tech Community.

Several new features were added to mapping data flows this past week. Here are some of the highlights:

 

Import Schema from debug cluster

 

You can now use an active debug cluster to create a schema projection in your data flow source. 

This option is available in every source type. Importing the schema overrides the projection defined in the dataset; the dataset object itself is not changed. All previously existing methods of creating and modifying schemas remain valid and compatible.
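For context, the imported projection ends up in the data flow script behind the designer. The following is a minimal sketch of what a projection on a source can look like; the column names and stream name here are hypothetical, not taken from the screenshot:

```
source(output(
        movieId as integer,
        title as string,
        rating as double
    ),
    allowSchemaDrift: true,
    validateSchema: false) ~> MoviesSource
```

Importing the schema from the debug cluster fills in the `output(...)` column list for you instead of requiring you to type it or rely on the dataset's schema.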

 

[Screenshot: the Import schema option on the source's Projection tab]

 

For more information on the projection tab, see the data flow source documentation.

 

Test connection on Spark Cluster

 

You can use an active debug cluster to verify that Data Factory can connect to your linked service when using Spark in data flows. This is a useful sanity check to ensure that your dataset and linked service are configured correctly for use in data flows.

 

Custom sink ordering

 

If you have multiple destinations in your data flow, you can now specify the order in which they are written. Write order is nondeterministic by default; enabling custom sink ordering makes your data flow sinks write sequentially in the order you choose.
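As a rough sketch of how this surfaces in the data flow script, each sink can carry a `saveOrder` property when custom ordering is enabled. The stream and sink names below are hypothetical, and the exact property set on your sinks may differ:

```
CleanedMovies sink(
    saveOrder: 1,
    skipDuplicateMapInputs: true) ~> StagingSink

CleanedMovies sink(
    saveOrder: 2,
    skipDuplicateMapInputs: true) ~> WarehouseSink
```

With this configuration, `StagingSink` completes its write before `WarehouseSink` begins, rather than both sinks writing in an arbitrary order.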

 

[Screenshot: custom sink ordering settings in the sink configuration]

 

For more information, see the custom sink ordering documentation.

