Azure Cosmos DB for PostgreSQL | How it Works

This post has been republished via RSS; it originally appeared at: New blog articles in Microsoft Community Hub.

Build highly scalable applications using the distributed Postgres relational database in Azure Cosmos DB for PostgreSQL. See how it works with current Postgres tools, how to scale out with distributed tables and nodes to keep apps responsive, and how to eliminate latency with geo-replication capabilities for globally distributed apps. 

 


 

Cosmos DB has a variety of APIs, including native NoSQL and compatible APIs such as MongoDB and Cassandra, all targeted at NoSQL workloads. With the introduction of Postgres, Cosmos DB now offers relational capabilities, a key cornerstone for application developers.

 

With distributed Postgres in Cosmos DB, you can build highly scalable, cloud native apps using NoSQL and relational capabilities within a single managed service. Principal Group Program Manager Charles Feddersen from the Cosmos DB team joins Jeremy Chapman to walk you through the process.

 

Join the Azure Cosmos DB team for all our announcements from Ignite 2022. Visit https://aka.ms/azurecosmosdbliftoff

 

Scale out data into shards. 


Convert a traditional Postgres table to a distributed table online, without impacting the customer experience. Click to watch.

 

Combine multi-tenant resource sharing with tenant isolation.


Simplify your architecture and keep costs down using Azure Cosmos DB for PostgreSQL. See how to maximize SaaS app efficiency here.

 

Reduce latency on global workloads.


See how to implement cross-region geo-replication for effectively instantaneous app response times.

 

Watch our video here.

 


QUICK LINKS: 

00:00 — Introduction 

00:51 — Relational capabilities and scalability 

01:46 — See it in action 

02:48 — Connect code to cluster 

05:20 — Scaling your application 

07:49 — SaaS application example 

11:05 — Geo-replication 

13:21 — Wrap up

 

Link References: 

Get started at https://aka.ms/trycosmosdb

 

Unfamiliar with Microsoft Mechanics? 

Microsoft Mechanics is Microsoft’s official video series for IT. You can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.

 

To keep getting this insider knowledge, join us on social:


Video Transcript:

- Coming up, we take a first look at the new Azure Cosmos DB for Postgres capability that enables you to build highly scalable applications using the distributed Postgres relational database. We’re going to show you how it works with your current Postgres tools, how you can easily scale out with distributed tables and nodes to keep your apps responsive, and for globally distributed apps, the new geo-replication capabilities to eliminate latency. And to walk us through everything, we’re joined today once again by Charles Feddersen from the Cosmos DB team. Welcome back.

 

- Thanks Jeremy. It’s great to be back on the show.

 

- It’s really good to have you on to go deeper on this very important update for Cosmos DB in the introduction of Postgres. Cosmos DB has always had this variety of APIs including native NoSQL, and then compatible APIs like MongoDB and Cassandra, you know, but these were always targeted at NoSQL workloads. How does this change then with the introduction of Postgres?

 

- Yeah, so now Cosmos DB offers relational capabilities, which really is one of the key cornerstones for application developers. By introducing distributed Postgres in Cosmos DB, we’re now making it easier for you to build highly scalable, cloud native apps using NoSQL and relational capabilities within a single managed service.

 

- And one of the biggest advantages with Cosmos DB is of course its scalability. So is this also the case then with Postgres in Cosmos DB?

 

- Absolutely. You can build prototypes on a single node, and then scale from there. Imagine you start with a single node and create a regular Postgres table, and as the workload grows, you can add nodes to the cluster to increase compute and distribute the table, which balances the workload across the cluster. And finally, then to support a global user base, you can enable replication to scale read workloads across multiple Azure regions.

 

- This is really great. So you can kind of start with small compute and then globally scale it out across different regions. But my question is, can we see it in action?

 

- Absolutely. To get started, I’m in the Azure portal and you can see I’m in Cosmos DB. I have the choice of all of the existing NoSQL APIs and now the Postgres API as well. I’ll select create for Postgres, and here it presents me with a pretty simple provisioning flow of subscription and resource group. I’m going to give it a cluster name and I’ll select to put it in East US 2. But what I really want to focus on is configuring scale. You can see I’ve got a simple dropdown with node count, and what’s immediately clear is that although I can start with a single node just like traditional Postgres, I can also add nodes for cluster scale. But let’s start small and cost effective for our project. I’ll click save and we’ll give the cluster a quick password, and I’ll need to type that one in twice. I’ll select review and that looks good. And I’ll hit create. And this’ll take a few minutes to provision. And as you saw, it’s really super simple.

 

- Okay, so now you’ve got your cluster up and running, how would you connect your code to it?

 

- So that’s another area where we’ve kept things consistent and familiar. Because Cosmos DB for Postgres is built on open-source Postgres and we’ve enabled distributed query execution using the Citus open-source extension, it really is just Postgres. And that means pretty much the entire Postgres ecosystem just works. Let me show you how you can get started using a common graphical Postgres tool called pgAdmin to create a table and load some data. Here, I’m connected to my cluster and I can see on the left-hand side I’ve got the browser in pgAdmin. Everything looks really familiar, just like Postgres. I’ll go ahead and create a table called EcomSalesTransactions and that’s got a primary key of three columns, which you can see here. And I’ll go ahead and give it a secondary index as well, which is often very useful for performance. And now I’m going to load some data. What I’m going to do here is use the new Azure Storage extension that we’ve created to load data from cloud storage. I’ve added the account already and so I won’t do that again here, but you can see that if I go run a select star, I’ve got an account called cfmech, which I’m going to reference in this statement. And now I’m going to insert into the table using a select star from the extension function blob_get with some columns defined. I’ll go ahead and run that. It’ll take about four or five seconds to run, and it’s actually loading about half a million records. So we can see once this is finished, we’ve got about 491,000 records in a little over five seconds and I can come down and verify that row count again with a count star. And now let’s look at some of the other query examples to show the compatibility with Postgres. First, here’s a common table expression. So I’ll just go ahead and run this. It works as you’d expect. And now we can use Postgres extensions because we’re building on native Postgres. Here, I’m using t-digest and there are actually over 50 extensions supported today in Cosmos DB. And that just works as well. And then finally, I’ll just go ahead and create a function to show that those work in Postgres, and I’ll go ahead and call that as well. And so as you saw, everything really just works as you would expect in Postgres with no code changes required.
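For reference, here is roughly what that table setup and data load look like in SQL. The column list, container, and blob path are assumptions for illustration; the table name, the cfmech storage account, and the azure_storage.blob_get function come from the demo (the storage account is registered with the extension ahead of time).

-- Table definition: column names and types are assumed for this sketch.
CREATE TABLE EcomSalesTransactions (
    tenant_id   bigint       NOT NULL,
    order_id    bigint       NOT NULL,
    order_date  timestamptz  NOT NULL,
    product_id  int,
    quantity    int,
    amount      numeric(12,2),
    PRIMARY KEY (tenant_id, order_id, order_date)
);

-- Secondary index, often useful for query performance.
CREATE INDEX idx_sales_product ON EcomSalesTransactions (product_id);

-- Load roughly half a million rows from Azure Blob Storage;
-- the container and blob name are placeholders.
INSERT INTO EcomSalesTransactions
SELECT *
FROM azure_storage.blob_get(
       'cfmech',                       -- storage account shown in the demo
       'sales',                        -- container (assumed)
       'ecom_sales_transactions.csv',  -- blob path (assumed)
       NULL::EcomSalesTransactions);   -- decode rows using the table's shape

-- Verify the row count.
SELECT count(*) FROM EcomSalesTransactions;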

 

- And this is really going to be good news for our Postgres developers that are watching right now, because you don’t have to worry about reskilling or hitting surprises where features you’ve relied on in the past aren’t there anymore. So now your cluster is up and running and you’ve loaded up data. So why don’t we look at how scaling might work?

 

- Yep, let’s do it. So the next thing I want to show is how easy it is to convert a traditional Postgres table to a distributed table completely online while the app is running and without any changes to the application. Scaling out data into shards is how distributed systems spread the workload across a cluster of machines, because this enables each machine in the cluster to work on a portion of the overall data. I’ve got a split screen here and on the left I’ve got a simple console app, which will process records in my EcomSalesTransactions table, which I’ll kick off in a second. I want to start by showing you that I’ve added a couple of worker nodes to my cluster. You can see those here, w1 and w0. And I also want to show that I don’t have any shards yet, because I haven’t distributed my table. It’s still a local Postgres table. Let’s do a quick check of the row count and we can see that the rows are still there as they were before. And if I look at my query plan, it’s a very simple plan with just a couple of tasks here. So now I want to go ahead and start running this app on the left and this is going to insert a number of records and also run a set of selects at the same time. So there’s a lot of traffic on my database. We’ll take a look at the count star another couple of times, and we can see that the rows are going in. And now I’m going to run this statement to create a distributed table, and it’s going to take my table and break it apart into 32 shards, which is going to enable me to distribute it across the cluster. This is running while my app continues to run and now it’s finished. And you could see there was no interruption to my workload. I can take a look at my shards and I can see that now I have 32, each represented with a row here. So it’s a distributed table. And if I take a look at the count star, I can see that my row count has continued to grow. And if I take a look at this query plan again, you’ll see now we’ve got 32 tasks because we’ve got to fan this query out to count the rows in each shard and then sum that up globally. So a slightly more complex query plan, but now you can see how we created a distributed table completely online without impacting the customer experience.
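The demo does not show the exact statement, but the conversion is driven by the Citus distribution function. Because writes keep flowing during the conversion, it is likely create_distributed_table or its nonblocking Citus 11.1 variant create_distributed_table_concurrently. A minimal sketch, assuming tenant_id as the distribution column:

-- Convert the local table into a distributed table with 32 shards, online.
SET citus.shard_count = 32;
SELECT create_distributed_table_concurrently('EcomSalesTransactions', 'tenant_id');

-- Inspect the shards and which worker node holds each one.
SELECT * FROM citus_shards;

-- The count(*) plan now fans out into one task per shard before summing globally.
EXPLAIN SELECT count(*) FROM EcomSalesTransactions;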

 

- So now we’ve got our distributed table with the load balanced basically across all your nodes. And just something again to reiterate, all of those configuration changes that Charles just did were happening while the app was running, didn’t require any code changes, and everything stayed online. So this is going to be critical, for example, if you’re building something like a SaaS application with a lot of tenants that needs high availability.

 

- That’s right. And building multi-tenant SaaS applications is one of the key workloads that Cosmos DB for Postgres is optimized to deliver. When building SaaS applications, it’s important to balance multi-tenant cost effectiveness with tenant isolation to mitigate the impact of noisy neighbors. The scalability of Cosmos DB for Postgres enables you to host hundreds or thousands of tenants on a single cluster, but also isolate tenants into their own nodes on the cluster. And this is a completely online operation as well with no downtime and no application changes.

 

- So can you explain how something like this might work?

 

- Absolutely. Let me give you an example. Let’s say you’ve built and deployed a multi-tenant e-commerce SaaS application, and demand for one of your tenants suddenly surges on a node. And this is going to impact the performance of all of the other tenants sharing that node. We need to handle this quickly and without any interruption to maintain everybody’s experience. Cosmos DB for Postgres makes this simple. We can isolate that specific large customer tenant into their own shard and move it to a dedicated node on the cluster. This redistributes the data to reduce the load on the original node and ensure everyone’s apps remain performant without making any application changes. So let’s go back to our demo. Remember in the previous demo we added a couple of worker nodes. I’ve gone ahead now and added a third node to my cluster to help show isolation. Here you can see node w2 is my new node, and if I take a look at my shards, I’ve still got 32 shards. Let’s look at my data and I’m going to run a query to count and group my tenant ID rows. You can see that one of my tenants is disproportionately large, and so I want to isolate the rows of that tenant to a new node to balance the workload. The first thing I’m going to do is split tenant 201 into a new shard. And so I’ll run this and it’s going to complete without any interruption to my running app. I’ll look at the shards and instead of 32, I’ve got 34 now. And that’s because tenant 201 got split into its own shard and the rows greater than 201 and less than 201 are each in their own shard, as well. So now I’ll go ahead and take that shard ID from the results. And what I can do is move this to my new node. I can see that it’s currently running on node C. And so I’m going to put that there. And I want to move this to node w2, which I’m going to put here. And if I run it, this will finish pretty quickly. And now if I take a look at my shards again, you’ll see that the shard in row 33 is now running on node w2 by itself. And while all this was happening, my app kept running uninterrupted and nobody felt a thing.
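For reference, the two Citus functions behind that flow are isolate_tenant_to_new_shard and citus_move_shard_placement. A minimal sketch, with the shard id and node hostnames as placeholders (the real values come from the shard listing):

-- Split tenant 201 into its own shard; CASCADE covers colocated tables.
SELECT isolate_tenant_to_new_shard('EcomSalesTransactions', 201, 'CASCADE');

-- Find the new shard and the node currently hosting it.
SELECT shardid, nodename, nodeport
FROM citus_shards
WHERE table_name = 'EcomSalesTransactions'::regclass;

-- Move that shard to the newly added worker while the app keeps running.
-- The shard id and hostnames below are placeholders.
SELECT citus_move_shard_placement(
         102360,                             -- shard id returned for tenant 201
         'source-worker.example.com', 5432,  -- node currently holding the shard
         'target-worker.example.com', 5432   -- the new dedicated worker (w2)
       );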

 

- Right, and again, this online tenant isolation really provides a seamless solution for optimizing those multi-tenant SaaS workloads using Cosmos DB with Postgres.

 

- That’s right, you can blend multi-tenant resource sharing with tenant isolation in a single cluster. And this enables you to simplify your architecture and keep your costs down.

 

- And this is really going to bring a lot of efficiencies. But now I want to switch gears because we touched on geo-replication earlier, and that’s something I think a lot of people know Cosmos DB for. So how does this work then with Postgres?

 

- Yeah, so Postgres also provides cross-region geo-replication, which enables you to reduce the read latency on global workloads. Plus it also provides multi-region disaster recovery. In fact, let me show you a demo right here. So we’ve got our cluster in East US 2, and let’s replicate that data to West Europe. I’m in the portal and I’ll select down here to start setting up my replication. I can see that my primary is running in East US 2 and I’m going to select add a replica. Now I’m going to give my replica cluster a name and I’ll select the region that I want, which is West Europe. And I’ll go ahead and click okay. This will take a few minutes to create, and then I’ll have my read replica, which will be running in Europe.

 

- All right, so now you’ve got your European read replica running. Can we prove that it’s going to be faster for maybe a customer that hits it from Europe?

 

- Yeah, the easiest way to measure this is with a latency test: an application connected from Europe to both the US and the European clusters. So here I’ve got a virtual machine and it’s running in West Europe with two console apps ready to run. The app on the left will connect to East US 2, and the one on the right will connect to our new replica locally in West Europe. The app will run the same query on each side five times to measure the latency. Let’s run East US 2 on the left first and you’ll see that these queries run and the response time is consistently about 86 milliseconds. So now we’ll run the app on the right connected to the West Europe replica, and look, they come back in zero or one millisecond. And so if your app is running multiple consecutive queries to populate a page, this latency really starts to add up. It can be the difference between a site that loads in seconds or effectively instantly. And this cross-region replication could also be applied to disaster recovery scenarios. If the primary region goes down, you can quickly promote the read replica to be the primary in only a couple of clicks.
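If you want to reproduce a rough version of that comparison yourself, psql’s \timing switch is enough. A sketch, assuming the hosts, user, and database come from the connection strings the portal shows for the primary and the replica:

-- Run the same query from a client in West Europe, once per connection.
-- psql "host=<primary-coordinator-host> port=5432 dbname=citus user=citus sslmode=require"
-- psql "host=<replica-coordinator-host> port=5432 dbname=citus user=citus sslmode=require"
\timing on
SELECT count(*) FROM EcomSalesTransactions WHERE tenant_id = 201;
-- \timing reports the client-observed round trip, so the difference between the two
-- connections approximates the cross-region network latency.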

 

- So just to summarize this, you know, you get that geo-replication that Cosmos DB is known for, but for Postgres now, so you can deploy your replicas to different regions in Azure around the world. So for everyone who’s watching right now looking to get started with Azure Cosmos DB for Postgres, what do you recommend?

 

- So I’ve got some good news on that. We’ve got a free trial available for you to get started immediately with a self-paced tutorial for learning the basics of distributed PostgreSQL. You just need to go to aka.ms/trycosmosdb to get started today.

 

- Thanks so much, Charles, for joining us today and sharing with us all of the first looks at Azure Cosmos DB for PostgreSQL. Of course, keep watching Microsoft Mechanics for the latest tech news. Subscribe to our channel if you haven’t already. And as always, thank you for watching.

 
