December 2019 unified Azure SDK Release

This post has been republished via RSS; it originally appeared at: New blog articles in Microsoft Tech Community.

Welcome to another release announcement for the unified Azure SDK.  This month, we have expanded our service support to include a preview of the Azure Storage DataLake SDK.

 

Generally available releases:

  • Azure Storage Blobs (with batch support)
  • Azure Storage Queues
  • Azure Storage Files (SMB and other File shares)
  • Azure Key Vault Secrets and Keys
  • Azure Identity

These are ready to use in your production applications.

 

Preview releases:

  • Azure Storage DataLake Files
  • Azure Key Vault Certificates
  • Azure Event Hubs, including a simplified event processor module
  • Azure App Configuration
  • Azure Cosmos

We believe these are ready for your use, but not yet ready for production.  Between now and the GA release, these libraries may undergo API changes.  We'd love your feedback!  If you use these libraries and like what you see, or you want to see changes, let us know in the GitHub issues for the appropriate language. 

 

Getting Started

Use the links below to get started with your language of choice.  You will notice that all the preview libraries are tagged with "preview".

For those of you who want to dive deep into the content, the release notes linked above and the change logs they point to give more details on what has changed.

 

Event Hubs Processor

Event Hubs is an event streaming service.  One side (the producer) sends events to the hub and another (the consumer) receives them.  We've refactored the base Event Hubs library to better reflect this split - there is now an EventHubConsumerClient and an EventHubProducerClient within the library.

 

For me, the more exciting change is the introduction of the Event Processor.  Writing scalable services that consume events is hard.  The service needs to scale to multiple instances, and each instance must know which events have already been consumed and which still need to be consumed.  We've wrapped all the best practices for implementing scalable event handlers into a new client.
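To make the scaling problem concrete, here is a deliberately naive sketch of dividing partition ownership across processor instances.  This is illustrative only - the function name and round-robin strategy are my own, not the SDK's actual load-balancing algorithm, which claims partition ownership through the checkpoint store:

```javascript
// Illustrative only: a naive round-robin partition assignment. The real
// processor negotiates ownership dynamically via the checkpoint store.
function assignPartitions(partitionIds, instanceIds) {
  const assignments = new Map(instanceIds.map((id) => [id, []]));
  partitionIds.forEach((partition, i) => {
    const owner = instanceIds[i % instanceIds.length];
    assignments.get(owner).push(partition);
  });
  return assignments;
}

// Four partitions spread across two processor instances.
const result = assignPartitions(["0", "1", "2", "3"], ["instance-a", "instance-b"]);
console.log(result.get("instance-a")); // partitions "0" and "2"
```

The real processor also has to handle instances joining and leaving at runtime, which is exactly the kind of coordination the new client takes off your hands.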

 

Let's take a typical event processing service that needs to consume events coming in from an Event Hub.  In the past, you would have to set up a loop that received a batch of an appropriate size, then processed that batch.  At the end of the batch, it would write a checkpoint somewhere, then go on to receive the next batch.  If another service instance was started, it could assert control over the event queue, read the checkpoint, and continue processing.  There were several issues you needed to resolve, and a lot of boilerplate code.
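The hand-rolled loop described above can be sketched as follows.  All the names here are hypothetical, and the code uses in-memory stand-ins rather than the real Event Hubs client, purely to show the receive/process/checkpoint shape you used to write yourself:

```javascript
// A sketch of the pre-processor pattern, using in-memory stand-ins
// (receiveBatch, checkpointStore) instead of the real Event Hubs client.
const allEvents = ["e1", "e2", "e3", "e4", "e5"];
const checkpointStore = { offset: 0 };

function receiveBatch(fromOffset, batchSize) {
  return allEvents.slice(fromOffset, fromOffset + batchSize);
}

function processEvent(event) {
  // Your per-event work would go here.
}

// The boilerplate loop every consumer used to hand-roll:
// receive a batch, process it, write a checkpoint, repeat.
let batch;
while ((batch = receiveBatch(checkpointStore.offset, 2)).length > 0) {
  batch.forEach(processEvent);
  checkpointStore.offset += batch.length; // write the checkpoint
}
console.log(checkpointStore.offset); // 5 - all events consumed
```

A real implementation would also need error handling, lease management, and coordination between instances, which is precisely the boilerplate the new processor removes.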

 

With the new event processor library, we've solved most of this, allowing you to concentrate on just two things:

  • What do I need to do to process an event?
  • When should I write a checkpoint?

You can write a checkpoint after each event or at a pre-determined interval.  Let's look at the code for a typical event processor with the new library (this service is written in JavaScript):

 

```javascript
import { EventHubConsumerClient, CheckpointStore } from "@azure/event-hubs";
import { ContainerClient } from "@azure/storage-blob";
import { BlobCheckpointStore } from "@azure/eventhubs-checkpointstore-blob";

const containerClient = new ContainerClient(storageConnectionString, storageContainerName);
const checkpointStore: CheckpointStore = new BlobCheckpointStore(containerClient);
const eventHubConsumerClient = new EventHubConsumerClient(consumerGroupName, ehConnectionString, eventHubName);

const subscription = eventHubConsumerClient.subscribe(partitionId, {
  // In V5 we deliver events in batches, rather than a single message at a time.
  // You can control the batch size via the options passed to the client.
  //
  // If your callback is an async function or returns a promise, it will be awaited before the
  // callback is called for the next batch of events.
  processEvents: (events, context) => {
    /** your code here **/
  },

  // Prior to V5, errors were handled by separate callbacks depending on where
  // they were thrown, i.e. when managing different partitions vs. receiving
  // from each partition.
  //
  // In V5 you only need a single error handler for all of those cases.
  processError: (error, context) => {
    if (context.partitionId) {
      console.log("Error when receiving events from partition %s: %O", context.partitionId, error);
    } else {
      console.log("Error from the consumer client: %O", error);
    }
  }
});

await subscription.close();
```
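The sample above leaves the checkpointing decision to you.  Here is one minimal, illustrative policy for the "after each event or at an interval" choice, factored out as a pure function.  The function name and thresholds are my own assumptions, not SDK defaults:

```javascript
// Illustrative checkpoint policy: checkpoint once enough events have been
// processed, or once enough time has elapsed. Thresholds are assumptions,
// not SDK defaults.
function shouldCheckpoint(state, now, { maxEvents = 100, maxMs = 30000 } = {}) {
  return (
    state.eventsSinceCheckpoint >= maxEvents ||
    now - state.lastCheckpointMs >= maxMs
  );
}

const state = { eventsSinceCheckpoint: 5, lastCheckpointMs: 0 };
console.log(shouldCheckpoint(state, 1000));  // false: few events, little time elapsed
console.log(shouldCheckpoint(state, 60000)); // true: the interval has elapsed
```

Inside your processEvents callback you would track these counters and, when the policy fires, persist a checkpoint through the checkpoint store (the V5 JavaScript library exposes this on the context passed to your handler).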

 

If you used the EventProcessorHost previously, check out the Migration Guide for converting your code to the new event processor.  Of course, if you need even more fine-tuned control, the lower-level library is available for this.  We're still working on this preview library, so please do give us feedback on how you like (or don't like) this approach.

 

Working with us and giving Feedback

So far, the community has filed hundreds of issues against these new SDKs, with feedback ranging from documentation issues to API surface area change requests to pointing out failure cases.  Please keep that coming.  We work in the open on GitHub, and you can submit issues there.

Finally, make sure to follow us on Twitter (@azuresdk), where you can find the latest news and announcements and interact directly with the Azure SDK team.
