Caching NFS, High-Throughput VMs and More HPC Cache Developments

This post has been republished via RSS; it originally appeared at: New blog articles in Microsoft Tech Community.

When you are managing large, read-intensive workloads, minimizing latency to your stored data makes sense for both efficiency and cost management. Azure HPC Cache helps with both, and we’ve just made a few announcements that take this a few steps further.

We’ve just added support for three new features that deliver even more value in Azure!


Read Caching with NVMe Storage

Customers can now create HPC Cache instances that reap the benefits of NVMe cache storage media. Leveraging these high-throughput, low-latency storage devices permits higher performance at lower prices for read-intensive workloads. Learn more about storage-optimized VMs in Azure and HPC Cache. HPC Cache will be available in three NVMe-based SKUs: 4.5 GB/s, 9 GB/s, and 16 GB/s.

Sizing options for creating an HPC Cache.

Blob NFS 3.0 Support E-Series General Availability

In April of this year, we shared a preview of HPC Cache support for Blob NFS 3.0, which was itself still in preview. The Azure Blob team has just announced that Blob NFS 3.0 protocol support is generally available, and HPC Cache will follow suit with general availability on the existing 2 GB/s, 4 GB/s, and 8 GB/s SKUs.

Support for NFS 3.0 enables both NFS 3.0 and REST access to storage accounts. Customers seeking to run file-dependent workloads in Azure can now do so directly against Azure Blob containers using the NFS 3.0 protocol. Because HPC Cache features an Aggregated Namespace, your file system can incorporate NFS 3.0-enabled containers into a single directory structure even when operating against multiple storage accounts, containers, and even your on-premises NAS exports, easing management and improving efficiency.
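To illustrate the idea, here is a minimal sketch (not the HPC Cache API; all paths and export names below are hypothetical) of how an aggregated namespace can map one client-visible directory tree onto several backends:

```python
# Illustrative only: models how an aggregated namespace could present Blob NFS
# containers and on-premises NAS exports as a single client-visible tree.
# All paths and export URIs below are hypothetical.
NAMESPACE = {
    "/data/raw":     "nfs://blobaccount1/container-raw",
    "/data/results": "nfs://blobaccount2/container-results",
    "/tools":        "nfs://onprem-nas/vol/tools",
}

def resolve(client_path):
    """Map a client path to (backend export, path within that export)."""
    # Longest-prefix match so more specific junctions win over broader ones.
    for prefix, target in sorted(NAMESPACE.items(),
                                 key=lambda kv: len(kv[0]), reverse=True):
        if client_path == prefix or client_path.startswith(prefix + "/"):
            return target, client_path[len(prefix):] or "/"
    raise FileNotFoundError(client_path)
```

A client sees one tree, while `resolve` decides which storage target actually serves each subtree, which is roughly the role the cache’s namespace junctions play for you.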


The addition of HPC Cache (caching of NFS data) is the perfect fit when workloads run at scale across many virtual machines and require lower latency than the NFS endpoint provides. HPC Cache in front of the container provides sub-millisecond latency and improved client scalability for access to directories and file data stored in the cache. HPC Cache also responds to client NLM (Network Lock Manager) traffic and manages advisory lock requests with the NLMv4 service.
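For context, NFSv3 clients generate NLM traffic whenever an application takes a POSIX advisory lock. A sketch of what that looks like from the client side (run against a local temp file here; on an NFS mount the kernel would translate the same `fcntl` call into NLM requests for the server to answer):

```python
import fcntl
import os
import tempfile

# On an NFSv3 mount, POSIX advisory locks taken via fcntl are sent to the
# server as NLM requests; HPC Cache answers them with its NLMv4 service.
# Against this local temp file, the same call simply takes a local lock.
path = os.path.join(tempfile.gettempdir(), "nlm_demo.txt")  # hypothetical path
with open(path, "w") as f:
    fcntl.lockf(f, fcntl.LOCK_EX)  # exclusive advisory lock (NLM over NFS)
    f.write("locked region updated\n")
    fcntl.lockf(f, fcntl.LOCK_UN)  # release the lock before closing
```

Because these locks are advisory, every cooperating client must use the same locking calls; the cache simply brokers the NLM conversation between them.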


Blob NFS 3.0 Support for NVMe-based Caches Preview

All the advantages of HPC Cache with Blob NFS 3.0 are also coming to the NVMe SKUs. These high-throughput, low-latency cache types can be used for even greater performance at lower costs, perfect for media rendering and genomic secondary analysis workloads.

Don’t stop reading yet—we have more. While taking advantage of NVMe support and Blob NFS, check out these additional new features.

Multiple Network Time Protocol (NTP) Servers for Hybrid Clouds

Customers using a hybrid cloud architecture may need to use their own NTP servers, often located in their data centers. HPC Cache now supports configuring either one or three NTP servers for redundancy. Configuring exactly two NTP servers is not supported, since with only two sources the cache cannot tell which server is correct when they disagree.

Metrics per Storage Target

In the past, it was difficult to understand cache capacity utilization per storage target. In our latest release, you can now view the recycle rate of the cache, which provides insight into the demands placed on HPC Cache. Per-storage-target metrics for cache capacity utilization and availability, along with client activity information (client address, HPC Cache address, and protocol), will also be available in the Azure Portal.

Storage Target Operations

HPC Cache users now have more control over storage target operations with the ability to flush, suspend, and resume storage targets. These new controls will help reduce impacts to clients while managing storage targets.


Newly added storage target management options for HPC Cache.

Network Isolation Documentation

New documentation is available to help customers configure network isolation for HPC Cache workloads. Find the documentation here.


Multiple IPs per NFS Storage Target

Customers using Isilon SmartConnect can enter either a fully qualified domain name (FQDN) or a comma-separated list of the IP addresses SmartConnect uses for mounting, enhancing storage target configuration.
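As a rough sketch of that input handling (a hypothetical helper, not HPC Cache code), the two accepted forms can be normalized into a single list of mount addresses like this:

```python
import ipaddress
import socket

def mount_addresses(value):
    """Accept either a comma-separated list of IP addresses or a single FQDN
    and return the list of addresses to mount from. Hypothetical helper."""
    parts = [p.strip() for p in value.split(",") if p.strip()]
    try:
        # Every entry parses as an IP literal: use the list directly.
        return [str(ipaddress.ip_address(p)) for p in parts]
    except ValueError:
        # Otherwise treat the whole input as an FQDN and resolve it via DNS
        # (2049 is the standard NFS port).
        infos = socket.getaddrinfo(value.strip(), 2049,
                                   proto=socket.IPPROTO_TCP)
        return sorted({info[4][0] for info in infos})
```

For example, `mount_addresses("10.0.0.4, 10.0.0.5")` returns both addresses as-is, while an FQDN is expanded through DNS into whatever addresses SmartConnect advertises.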


HIPAA Compliance

HPC Cache is now compliant with the Health Insurance Portability and Accountability Act (HIPAA). This means that HPC Cache meets the standards for safeguarding protected health information (PHI).


Customer Managed Keys (CMK) Updates

In April, CMK-enabled cache disks became available in all regions where CMK is supported. Now, we’re adding both user-assigned identity support and default auto key rotation. With default auto key rotation, the cache automatically uses the latest version of the key. (CMK is not currently supported on NVMe-based SKUs.)


Get Started

To create a storage cache in your Azure environment, start here to learn more about HPC Cache. You can also explore the documentation to see how it may work for you.


What’s next? We’d love to hear from you!
