This post has been republished via RSS; it originally appeared at: New blog articles in Microsoft Community Hub.
Data partitioning is a data management technique used to divide a large dataset into smaller, more manageable subsets called partitions or shards. Each partition contains a portion of the data, and these partitions can be stored and processed independently. The primary goal of data partitioning is to improve performance, scalability, and manageability in large-scale data systems.
Imagine having a vast encyclopedia, and your task is to find a specific page containing a particular word. Instead of searching through the entire encyclopedia, you opt to navigate directly to the relevant section that might contain the desired information. This approach allows you to narrow down your search scope and quickly find the page you need. This concept of narrowing down the search space by dividing data into manageable sections is precisely what partitioning entails.
In this blog, I will present a proposed method for data partitioning using Fabric notebooks.
Part 1: Prepare Data for Partitioning
In this blog, I'm using the taxi data provided at this link: taxi_data | Kaggle
I downloaded it and am using the file yellow_tripdata_2016-03.csv.
Since datasets in Fabric do not support on-premises files larger than 1 GB, and my file is 2 GB, I uploaded the file to my ADLS account and linked my ADLS storage as a shortcut in my Fabric lakehouse. Check the documentation here:
Options to get data into the Lakehouse - Microsoft Fabric | Microsoft Learn
Part 2: Create a new notebook and load data into it
Navigate through the following steps:
1. Click on your workspace.
2. Select "Create," then choose "Notebook."
3. Inside your notebook, locate your lakehouse on the left side.
4. Navigate to your desired file within the lakehouse.
5. Load the chosen file into your notebook using the provided options.
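When you use the "Load data" option on a CSV file, Fabric generates PySpark code along the lines of `spark.read.format("csv").option("header", "true").load("Files/…")`. As a stand-alone illustration of the same load step, here is a minimal sketch using pandas on a tiny in-memory sample (the column names come from the yellow taxi dataset; the sample rows are made up):

```python
import io
import pandas as pd

# A tiny in-memory sample standing in for yellow_tripdata_2016-03.csv;
# tpep_pickup_datetime is the pickup timestamp column in the taxi data.
sample_csv = io.StringIO(
    "VendorID,tpep_pickup_datetime,trip_distance\n"
    "2,2016-03-01 00:00:00,2.50\n"
    "1,2016-03-15 08:30:00,1.10\n"
)

# The header row is parsed automatically; parse the timestamp column up front
# so year/month/day can be derived from it in the next step.
df = pd.read_csv(sample_csv, parse_dates=["tpep_pickup_datetime"])
print(df.dtypes)
```

Parsing the timestamp at load time saves a separate conversion pass before partitioning.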
Part 3: Transform and partition data.
The next task is to extract the year, month, and day from the pickup timestamp. Afterward, we'll save the data as Parquet files, partitioned by those columns, in the designated destination.
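In the notebook this step is PySpark: derive the columns with `pyspark.sql.functions.year`, `month`, and `dayofmonth`, then call `df.write.partitionBy("year", "month", "day").parquet(...)`, which lays the files out in Hive-style `year=…/month=…/day=…` folders. As a minimal standard-library sketch of that layout (sample rows and the output root are made up for illustration):

```python
import csv
import os
import tempfile
from datetime import datetime

# Sample rows standing in for the taxi data; the timestamp drives the partition.
rows = [
    {"tpep_pickup_datetime": "2016-03-01 00:00:00", "trip_distance": "2.50"},
    {"tpep_pickup_datetime": "2016-03-01 09:15:00", "trip_distance": "0.80"},
    {"tpep_pickup_datetime": "2016-03-15 08:30:00", "trip_distance": "1.10"},
]

out_root = tempfile.mkdtemp()  # stands in for the lakehouse destination folder

for row in rows:
    ts = datetime.strptime(row["tpep_pickup_datetime"], "%Y-%m-%d %H:%M:%S")
    # Hive-style partition path: the same layout Spark's partitionBy produces.
    part_dir = os.path.join(
        out_root, f"year={ts.year}", f"month={ts.month}", f"day={ts.day}"
    )
    os.makedirs(part_dir, exist_ok=True)
    out_file = os.path.join(part_dir, "part-0.csv")
    is_new = not os.path.exists(out_file)
    with open(out_file, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row))
        if is_new:
            writer.writeheader()
        writer.writerow(row)

print(sorted(os.listdir(out_root)))  # a single year=2016 folder at the top level
```

Because each day's rows land in their own folder, a query filtered on year/month/day only has to read the matching folder instead of the whole 2 GB file, which is exactly the "narrow the search space" idea from the encyclopedia analogy.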
Part 4: View the output within the lakehouse.
1. Refresh your lakehouse: navigate to the left pane in your notebook.
2. Click "Refresh."
3. View the files in your destination; in my case this was the output:
Since we linked our data lake to our lakehouse, if you open your ADLS storage account, you can see your files there as well, like so:
Microsoft Fabric documentation - Microsoft Fabric | Microsoft Learn
Create a workspace - Microsoft Fabric | Microsoft Learn
Develop, execute, and manage notebooks - Microsoft Fabric | Microsoft Learn
- Make sure to establish all connections before starting to work in Fabric.
- Check the links mentioned above.
- Follow me for more tutorials on LinkedIn: Sally Dabbah | LinkedIn