Armchair Architects: How Architecture Is Changing – Part 1



So, how is cloud architecture evolving? In this latest installment of the Armchair Architects series of the Azure Enablement Show, we discuss the evolution of cloud architecture across four topics:

  1. Containers and collaboration environments
  2. Machine learning
  3. Serverless computing
  4. Low-code/No-code development.

 

We'll cover the four topics across two separate posts to minimize reader fatigue. The first two topics are covered in this one, while Serverless and Low-code/No-code development are in Part 2.

 

While these posts are detailed, if you'd like to go straight to the videos (averaging about 10 minutes each), feel free to jump to them at the following link.

 

Containers and Collaboration Environments:

Let's start by describing containers at a high level. A container is a packaging and distribution mechanism that abstracts away and resolves many of the installer issues that arise from 'unique' environments. We've all heard developers exclaim, "well, it works on my machine," after pushing an application to a new environment only to realize it's broken. Containers strive to address this problem by creating a hard boundary between the infrastructure and the software stack used by an application. External dependencies are not necessarily added to the container, but all of your internal dependencies (frameworks, runtimes, etc.) are there. This makes deploying the application to a new environment significantly more predictable, because the compute environment is consistent: it's part of the container.
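
To make this concrete, here is a minimal sketch using the docker-py SDK (an assumption on our part, not something from the video): the image carries its own Python runtime and libraries, so the same command behaves the same on any host that can run the image.

```python
# Minimal sketch: the container image bundles its own runtime and dependencies,
# so the result does not depend on what is installed on the host.
# Assumes a local Docker daemon and the docker-py SDK are available.
import docker

client = docker.from_env()

# "python:3.11-slim" carries its own Python runtime; the host needs no Python install.
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "import sys; print(sys.version)"],
    remove=True,            # clean up the container after it exits
)
print(output.decode())      # same version string regardless of the host machine
```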

 

Sounds pretty good, right? Well, it's not quite as rosy as it sounds. There are still several foundational things that architects must plan for. Orchestration, security, and versioning are all important considerations when designing your application, whether you are deploying to an application service, a container, or a VM. That's one of the reasons we have container orchestrators like Kubernetes. With Kubernetes you deploy pods, which are collections of containers that together implement a piece of application logic. You can then use DAPR (covered in the last blog post) to talk to other services or pods.
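
As a small illustration of that last point, the sketch below shows application code calling another service through its Dapr sidecar over Dapr's default local HTTP endpoint; the app id "orders" and method "status" are hypothetical names chosen for the example.

```python
# Minimal sketch: invoking another service via the Dapr sidecar's HTTP API.
# Assumes Dapr's default sidecar port (3500); "orders" and "status" are
# hypothetical app-id and method names, and the method is assumed to return JSON.
import requests

DAPR_URL = "http://localhost:3500/v1.0/invoke/orders/method/status"

resp = requests.get(DAPR_URL, timeout=5)
resp.raise_for_status()
print(resp.json())  # Dapr handles service discovery and secure calls between pods
```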

 

A new type of application is one that runs in collaboration environments like Teams or Slack. You need to think first about how the application should participate in that environment. Applications that run in these environments are most likely involved in a conversation that's primarily driven by humans. Unfortunately, we see that many apps are simply hosted in Teams and don't really know how to support certain conversations or other native collaboration scenarios. There's a lot of room for improvement here, as many apps don't take advantage of the collaboration functionality within these environments. Plan the application's user experience accordingly: instead of using Teams or Slack as a hosting shell for a self-contained UI, think about how to integrate the application experience with native Teams or Slack functionality.

 

Collaboration environments (and other highly UX-driven architectures) require the architect to go beyond purely technical considerations and think more carefully about the user experience and the UX tools available. Architects should work very closely with those in the UX discipline to provide high-quality, productive application experiences.

 

Machine Learning:

Machine learning models are components that often use algorithms trained on historical data to render inferences. The transition from Software Architecture 1.0 to Software Architecture 2.0 includes considering which application components can benefit from moving from deterministic code (if/else statements, etc.) to these probabilistic models, which use cognitive capabilities to give applications "intelligent" features. Where once you would have fixed-functionality code that never changes its behavior, you now have a model that evaluates and makes decisions based on training data.
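
To make the contrast concrete, here is a minimal sketch (not from the video) of the same decision expressed first as deterministic code and then as a probabilistic model; the fraud-flagging example, the toy data, and the use of scikit-learn are all assumptions for illustration.

```python
# Minimal sketch: a fixed if/else policy versus a model learned from history.
from sklearn.linear_model import LogisticRegression

def deterministic_flag(amount: float) -> bool:
    # Software 1.0: hard-coded policy that never changes its behavior
    return amount > 1000

# Software 2.0: the "policy" is learned from labeled history and returns a probability
X_history = [[120.0], [50.0], [2400.0], [3100.0], [80.0], [1900.0]]
y_history = [0, 0, 1, 1, 0, 1]                      # 1 = previously flagged
model = LogisticRegression().fit(X_history, y_history)

print(deterministic_flag(1500.0))                   # always True for this input
print(model.predict_proba([[1500.0]])[0, 1])        # a probability, shaped by the data
```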

 

A great example of machine learning supplementing an application comes from a collaboration environment that replaced 150,000 lines of C++ code, which was effectively a set of policies that determined things like: if I connect to a backend, what is the jitter? What does the network look like? Which network connection should I use? Architects should strive to integrate machine learning capabilities carefully and responsibly into application experiences. Working with UX teams, architects can advise on how machine learning can increase the fidelity of the user experience. Currently, most machine learning happens outside the application, in areas such as risk analysis or fraud detection. But that's changing, as the example above shows. We see Software 2.0 as the promise of declarative application code and probabilistic models, which infuse applications with cognitive capabilities, coming closer together. Architecture and architects are key to making this happen.

Part of responsibly implementing machine learning components in an application is monitoring. 

 

Rather than just monitoring application performance or availability, you must also monitor the inferences that machine learning components produce. Machine learning components often appear to be functional, available, and working properly, but application owners must routinely inspect the quality of the inferences coming from them. Poor inferences from machine learning models can result in a scenario that's equivalent to an "application down" situation.
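
As a rough illustration, here is a minimal sketch of what monitoring inference quality alongside availability might look like; the window size, threshold, and alert hook are hypothetical choices for the example.

```python
# Minimal sketch: track rolling inference quality against ground truth that
# arrives later, and alert when quality degrades even though the service is "up".
from collections import deque

WINDOW, THRESHOLD = 500, 0.85
recent = deque(maxlen=WINDOW)    # 1 if an inference matched the eventual truth, else 0

def alert(message: str) -> None:
    print(f"[ALERT] {message}")  # stand-in for a real paging / monitoring hook

def record_outcome(prediction, actual) -> None:
    recent.append(1 if prediction == actual else 0)
    if len(recent) == WINDOW and sum(recent) / WINDOW < THRESHOLD:
        # the component "looks" healthy, but its answers no longer are
        alert("ML inference quality below threshold; treat like an application-down event")

record_outcome(prediction="fraud", actual="fraud")   # fed as ground truth becomes available
```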

 

Model drift, sometimes referred to as decay or staleness, is a significant contributor to poor model performance. It can happen when the data the model sees changes over time, so the model becomes less accurate, or when the data the model was trained on is no longer sufficient to render quality inferences for the parameter inputs it now receives.
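
One common way to watch for this kind of drift is to compare the distribution a model was trained on against what it sees in production. The sketch below is illustrative only; it assumes SciPy's two-sample Kolmogorov-Smirnov test, and the numbers are made up.

```python
# Minimal sketch: flag possible drift when a feature's production distribution
# diverges from the distribution the model was trained on.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_values = rng.normal(loc=50.0, scale=5.0, size=10_000)   # what the model learned on
production_values = rng.normal(loc=58.0, scale=5.0, size=2_000)  # what it sees today

statistic, p_value = ks_2samp(training_values, production_values)
if p_value < 0.01:
    print(f"Possible drift: distributions differ (KS statistic {statistic:.3f})")
```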

 

Additionally, hostile actors can maliciously poison model training data by injecting deliberately inaccurate or false information. A model trained on this data may learn from it, reducing inference quality. We need 'virus scanners' that look out for these sorts of malicious behaviors in training data, in parameter inputs to deployed models, and in low inference quality coming from models. There's a well-known example where Microsoft released a bot that was very nice from a behavioral perspective, but people learned to feed it data that made it do hateful things.
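
As one illustrative (and very simplified) take on such a 'virus scanner', the sketch below screens a batch of candidate training rows for statistical outliers before they reach the training pipeline; scikit-learn's IsolationForest and all of the data here are assumptions for the example.

```python
# Minimal sketch: quarantine statistically unusual training rows for review
# instead of letting them flow straight into retraining.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_rows = rng.normal(loc=0.5, scale=0.05, size=(200, 1))   # typical training examples
poisoned_rows = np.array([[9.7], [10.2], [9.9]])               # deliberately injected values
candidate_batch = np.vstack([normal_rows, poisoned_rows])

detector = IsolationForest(contamination=0.02, random_state=0).fit(candidate_batch)
labels = detector.predict(candidate_batch)       # -1 marks suspected anomalies

clean_batch = candidate_batch[labels == 1]
quarantined = candidate_batch[labels == -1]      # hold out for human review
print(f"kept {len(clean_batch)} rows, quarantined {len(quarantined)}")
```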

 

This topic deserves its own post—look for one soon!

Another aspect of probabilistic code is that the receiver of the model output needs to figure out how to use that output; because it is relative to the data, there can't be just one hard-coded reaction or action. You can expect to use probabilistic code for perception (there is an obstruction in front of your car and it must stop), and then to integrate and compose higher-level reasoning to make ever more precise decisions (it's a car that is slowing to 20 mph, and thus you must also slow down to 20 mph).
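
Here is a minimal sketch of that idea: perception returns a class and a confidence rather than a verdict, and a higher-level layer decides what to do with it. The function names, thresholds, and driving scenario are hypothetical.

```python
# Minimal sketch: probabilistic perception output interpreted by higher-level reasoning,
# rather than mapped to a single hard-coded reaction.
def perception(frame) -> tuple[str, float, float]:
    # stand-in for a model: returns (object class, confidence, object speed in mph)
    return "car", 0.92, 20.0

def decide(frame, current_speed_mph: float) -> str:
    obj, confidence, obj_speed = perception(frame)
    if confidence < 0.5:
        return "maintain speed, keep sensing"        # output too uncertain to act on
    if obj == "car" and obj_speed < current_speed_mph:
        return f"slow to {obj_speed:.0f} mph"        # match the slowing car ahead
    return "emergency stop"                          # unknown obstruction: stop

print(decide(frame=None, current_speed_mph=45.0))
```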

 

MLOps is also something you need to start as soon as the word "model" surfaces. MLOps instills lifecycle management of models so that training, change management, and deprecation become a formal process.
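
As a rough sketch of the lifecycle discipline MLOps formalizes, the snippet below registers model versions, promotes one to production, and deprecates its predecessor; the in-memory registry and the model name are hypothetical stand-ins for a real registry service.

```python
# Minimal sketch: every model gets a registered version, a stage, and an explicit
# deprecation step, so retraining and rollout become a formal process.
from dataclasses import dataclass, field

@dataclass
class ModelVersion:
    name: str
    version: int
    stage: str = "staging"           # staging -> production -> deprecated

@dataclass
class ModelRegistry:
    versions: list[ModelVersion] = field(default_factory=list)

    def register(self, name: str) -> ModelVersion:
        version = sum(v.name == name for v in self.versions) + 1
        mv = ModelVersion(name, version)
        self.versions.append(mv)
        return mv

    def promote(self, mv: ModelVersion) -> None:
        for old in self.versions:
            if old.name == mv.name and old.stage == "production":
                old.stage = "deprecated"     # change management: one production version at a time
        mv.stage = "production"

registry = ModelRegistry()
v1 = registry.register("network-selector")
registry.promote(v1)
v2 = registry.register("network-selector")    # retrained model enters staging
registry.promote(v2)                          # v1 is formally deprecated, not forgotten
print([(v.version, v.stage) for v in registry.versions])
```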

 

For more details on these two topics, please watch the videos below. And please leave any feedback or comments! We read and respond to all of them!

 

 
