Creating managed online endpoints in Azure ML

This post has been republished via RSS; it originally appeared at: New blog articles in Microsoft Tech Community.



Suppose you’ve trained a machine learning model to accomplish some task, and you’d now like to provide that model’s inference capabilities as a service. This is the purpose of endpoints — they provide a simple web-based API for feeding data to your model and getting back inference results.
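Once an endpoint is deployed, calling it comes down to an authenticated HTTP POST against its scoring URI. As a minimal sketch (the URI, key, and payload shape below are placeholders — in practice you retrieve the real values from your deployed endpoint):

```python
import json
import urllib.request


def build_scoring_request(scoring_uri, api_key, payload):
    """Build an HTTP request for an online endpoint's scoring URI.

    The URI and key are placeholders: after deployment you would fetch
    the real values from Azure ML (e.g. via the CLI or the studio UI).
    """
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        scoring_uri,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Key-based auth sends the endpoint key as a Bearer token.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


# Hypothetical usage — URI, key, and input format are illustrative only:
req = build_scoring_request(
    "https://my-endpoint.eastus.inference.ml.azure.com/score",
    "my-api-key",
    {"data": [[1.0, 2.0, 3.0]]},
)
# Sending it would be: urllib.request.urlopen(req)
```

The actual JSON schema your endpoint accepts depends on the scoring script you deploy with the model.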


Azure ML currently supports three types of endpoints: batch endpoints, Kubernetes online endpoints, and managed online endpoints. In the blog post linked below, I'll discuss managed online endpoints. These are designed to quickly process smaller requests and provide near-immediate responses, while relying on Azure to manage compute resources, OS updates, scaling, and security.
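With the Azure ML CLI v2, a managed online endpoint and its deployment are each described by a small YAML file. The sketch below uses hypothetical names (`my-endpoint`, `blue`, `my-model`) and a commonly used VM size; your model reference and instance type will differ:

```yaml
# endpoint.yml — the endpoint itself (name + auth mode)
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: my-endpoint
auth_mode: key

# deployment.yml — a deployment behind that endpoint
# $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
# name: blue
# endpoint_name: my-endpoint
# model: azureml:my-model:1
# instance_type: Standard_DS3_v2
# instance_count: 1
```

You would then create both with `az ml online-endpoint create -f endpoint.yml` followed by `az ml online-deployment create -f deployment.yml --all-traffic`.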


Check out the blog post for the full implementation details.

