This post has been republished via RSS; it originally appeared at: Microsoft Tech Community - Latest Blogs.
About this blog series
Microsoft Most Valuable Professionals, or MVPs, are technology experts who passionately share their knowledge with the community. They are always on the "bleeding edge" and have an unstoppable urge to get their hands on new, exciting technologies. They have very deep knowledge of Microsoft products and services, while also being able to bring together diverse platforms, products, and solutions to solve real-world problems. To learn about the Microsoft MVP Award and to find MVPs, visit the official website: https://mvp.microsoft.com/.
The Azure Synapse Analytics team presents the blog series "Azure Synapse MVP Corner" to highlight selected content created by MVPs.
This month's MVP content
Description: This blog post walks through setting up Azure Synapse Link for Dataverse and synchronizing Dynamics 365 Sales data with a Synapse Analytics workspace. It also examines what is created when the initial setup runs, what "near real-time" actually means in practice, and any issues encountered along the way.
Blog post: How to access Dataverse metadata in Synapse
Description: Synapse Link for Dataverse copies data from Dataverse into Synapse, allowing the data to be queried easily using T-SQL. Learn how you can access the Dataverse metadata and utilize views to enrich the exported data.
Description: This blog post demonstrates what a GitHub CI/CD experience for Azure Synapse Link for SQL Server 2022 can look like, using GitHub Actions. It also shows how to automatically stop and start the link within the pipeline.
Description: This blog post covers end-to-end deployment guidelines, including the right roles and permissions required to provision Azure Synapse Analytics for your organization.
Description: Data Factory, or Integrated Pipelines within the Synapse Analytics suite, can be very useful as an extraction and orchestration tool. A common scenario is extracting data from a source system and saving it in a dedicated landing zone in Azure Data Lake Storage Gen2. The question that arises is which file format to use for this data. There are a few options, but one of the most common choices is Parquet, which offers great benefits such as columnar storage and very good compression. It also comes with some limitations to deal with: one of them is that column names cannot contain special characters or spaces. How to deal with that? Adrian shows how in the article.
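A typical workaround for this kind of limitation is to sanitize column names before the data is written to Parquet. The sketch below is a hypothetical, minimal Python illustration of that idea (the function name and the rename rules are assumptions, not necessarily the approach taken in Adrian's article):

```python
import re

def sanitize_parquet_column_name(name: str) -> str:
    """Replace characters that Parquet consumers commonly reject
    (spaces, punctuation) with underscores, then collapse and trim
    the resulting runs of underscores."""
    cleaned = re.sub(r"[^0-9A-Za-z_]", "_", name)
    cleaned = re.sub(r"_+", "_", cleaned).strip("_")
    return cleaned or "_"  # never return an empty column name

# Hypothetical column names as they might arrive from a source system
raw_columns = ["Order ID", "Customer Name", "Net Amount (EUR)"]
clean_columns = [sanitize_parquet_column_name(c) for c in raw_columns]
print(clean_columns)  # ['Order_ID', 'Customer_Name', 'Net_Amount_EUR']
```

In a pipeline, the same renaming can usually be applied declaratively, for example via column mappings in a copy activity, rather than in code.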
Description: As we continue to evolve in both understanding and technology, new challenges emerge. In this blog post, Paul Andrew explores the next wave of frequently asked questions in the context of implementing a Data Mesh architecture, including his current answers and experience.
Description: The Gartner Hype Cycle for Data Management 2022 is now a well-known picture in certain circles of the data community. People seem to love pointing out the relatively low position Data Mesh holds on the curve and the red cross marker advising "Obsolete before plateau." Data Mesh is not dead, and in this post Paul Andrew offers his opinion on why.
Blog post: Six-minute crash course about Synapse Studio
Description: A six-minute crash course on, and overview of, Synapse Studio. True to its name, the blog post is estimated to take only six minutes of your time to read.
Call to action
- Read or watch the content listed above.
- If you like the content, subscribe to the blogs or YouTube channels of the MVPs.
- Follow the MVPs mentioned above on Twitter.
- Stay tuned for more "MVP Corner" blog posts!