This post has been republished via RSS; it originally appeared at: Windows Blog Archive articles. First posted to MSDN on Jun 27, 2013
One of the most important decisions you can make when running an app compat project is determining how much time (if any) you should spend investigating any given application. Many customers tend to use a first-in, first-out approach: just find them all, test them all, and then we’re ready to go! But if you think about it, by making every application important, you end up making none of them important, because you treat them all exactly the same.
So, how should you categorize your apps?
Here’s what I like to do.
First, I like to build out a taxonomy that determines if you’re going to care at all. Here’s what I start with:
What you determine first is whether you are going to care proactively about the application, or if you are going to care reactively. Or, quite possibly, you may not care at all. Of course, categorization doesn’t matter at all (in fact, it just wastes your time) if it doesn’t impact behavior, so here are the behaviors I recommend for each category:
A managed application is one which you will proactively test prior to changes that you make in the environment. You’ll assess the potential risk, and invest in these applications appropriately. They’re the ones which could potentially block a migration. In other words, they are the stuff that you want to be able to provide authoritative responses about.
A supported application is one which you care about, but would not choose to invest in proactively testing. For example, in an Office migration, many documents will end up on the Supported list – if you have a problem, you can call, but otherwise we’re not going to look at 25 million of them, since most are likely never to be opened again. As another example, you may categorize your apps by type. You may choose, say, a single ZIP processing utility as the standard, and some business unit may choose another one for some presumably legitimate business reason. You just don’t choose to test both of them every single time. If they want care and love, they can pick the standard one. If they want to look after themselves, then you’ll help them leverage your centralized knowledge, but you won’t issue a full insurance policy.
An unsupported application is one which you will neither test in advance nor provide helpdesk support for. If the end user finds a way to install an application (perhaps they are a local admin, or it is a per-user install), you may be willing to go ahead and let them. But, that doesn’t mean you’re willing to put yourself on the hook for supporting it. As far as you are concerned, they’re on their own.
A banned application is one which you will take active steps to prevent from running, using techniques such as AppLocker. Some organizations use white lists, so unless an app is managed, it is banned. That’s a bit extreme for many organizations – more typical is to have a select list of specific applications or application types to block. (Common examples are peer-to-peer sharing apps, or consumer apps known to introduce security risks.)
So, once you have defined this hierarchy and aligned your apps against it, the only ones you are going to care about in your project are the Managed apps. The more quickly and less expensively you can get to this point, the faster your project will run, and the more agile you’re going to be moving forward.
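The first taxonomy can be sketched as a simple lookup from category to behavior. This is only an illustration – the category names come from the post, but the behavior flags, the `apps_to_test` helper, and the sample inventory entries are all hypothetical:

```python
from enum import Enum

class SupportCategory(Enum):
    MANAGED = "managed"          # proactively tested before environment changes
    SUPPORTED = "supported"      # helpdesk support on request, no proactive testing
    UNSUPPORTED = "unsupported"  # allowed to run, but users are on their own
    BANNED = "banned"            # actively blocked (e.g., via AppLocker rules)

# Behaviors driven by each category (illustrative flags, not a product feature):
BEHAVIORS = {
    SupportCategory.MANAGED:     {"proactive_test": True,  "helpdesk": True,  "blocked": False},
    SupportCategory.SUPPORTED:   {"proactive_test": False, "helpdesk": True,  "blocked": False},
    SupportCategory.UNSUPPORTED: {"proactive_test": False, "helpdesk": False, "blocked": False},
    SupportCategory.BANNED:      {"proactive_test": False, "helpdesk": False, "blocked": True},
}

def apps_to_test(inventory):
    """Only Managed apps enter the proactive testing pipeline."""
    return [app for app, cat in inventory.items()
            if BEHAVIORS[cat]["proactive_test"]]

# Hypothetical inventory for illustration:
inventory = {
    "payroll-system":  SupportCategory.MANAGED,
    "zip-utility-alt": SupportCategory.SUPPORTED,
    "personal-widget": SupportCategory.UNSUPPORTED,
    "p2p-sharing":     SupportCategory.BANNED,
}
print(apps_to_test(inventory))  # ['payroll-system']
```

The point of encoding it this way is the same as the post's: categorization only pays off when each category drives a different behavior.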
But, this alone isn’t enough of a taxonomy, as even though you care enough to investigate proactively, you still want to differentiate within this grouping to focus your efforts where they will have the highest ROI.
That’s where we introduce this second taxonomy within Managed Apps:
Once again, the purpose for building this hierarchy is to drive behavior. Here are some suggested behaviors for apps of each type:
Platinum apps are the ones you not only care about – you’re also going to reach out to the app owners and/or teams to assist them in building the plan. These are the core, critical apps – the ones which drive the business, the kinds of apps you can typically name off the top of your head. You want to start on these first and spend the most time and resources on them. They typically get full regression tests, because they’re just that critical.
Gold apps are apps which don’t merit white glove treatment, but which you’d still consider mission critical. Whatever process you have built, you’re going to run these apps through the whole thing. Static analysis, install/launch testing, user acceptance testing – these apps are going to go end-to-end and have a sign-off before you mark them as green to signal that they’re ready to deploy to users.
A silver app is one that, while important, merits less investment in ensuring compatibility. You may run these apps through whichever tools and manual evaluations you have in place, but if one sails through with flying colors, you’ll often just mark it as green and move along. Depending on the accuracy of the tools, you may even choose to dispense with manual testing. Understand, of course, that this means accepting the potential of some failures. But – guess what? You’re guaranteed to get some anyway, even with your platinum apps! It’s just not possible to find every bug in your software. The ROI in digging deeper to find that extra small percentage of potential bugs just isn’t there, so you don’t bother.
A bronze app isn’t one that doesn’t matter – after all, it still made your Managed Apps list – but it is one where failure can be mitigated fairly quickly. Some customers (including, notably, Microsoft itself) will take a “Canary in a Coal Mine” approach to these apps. Grouping them together by similarity, they’ll test a very small percentage of that cluster of apps. If they find failures in that cluster, they test more. If not, then the other apps within that cluster just don’t get tested at all. After all, you do have data (similar apps working) suggesting the risk is low. Other customers will still do something, but may only use inexpensive automated or manual solutions to just do a sanity check.
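The “Canary in a Coal Mine” approach for bronze apps can be sketched as sampled testing with escalation on failure. Everything here is an assumption for illustration – the sample fraction, the `canary_test` helper, and the `passes_compat` stand-in are not from the post:

```python
import random

def canary_test(cluster, sample_fraction=0.1, min_sample=2, test=None):
    """Test a small sample of a cluster of similar apps; widen only on failure."""
    sample_size = max(min_sample, int(len(cluster) * sample_fraction))
    sample = random.sample(cluster, min(sample_size, len(cluster)))
    if any(not test(app) for app in sample):
        # The canary found a problem: fall back to testing the whole cluster.
        return [app for app in cluster if not test(app)]
    # The canary sample passed: treat the rest of the cluster as low risk.
    return []

# Hypothetical usage: 'passes_compat' stands in for your real test harness.
def passes_compat(app):
    return True  # placeholder result

reporting_tools = [f"report-tool-{i}" for i in range(30)]
print(canary_test(reporting_tools, test=passes_compat))  # []
```

A clean canary run returns no failures and the rest of the cluster goes untested, which is exactly the trade the post describes: similar apps working is itself data that the risk is low.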
Of course, the behaviors vary somewhat depending on the magnitude of the expected impact of a platform change. Expect a higher percentage of apps to fail? Then go deeper in each bin. Expect a small percentage of apps to fail? Then do less work within each bin. The idea isn’t ever to say that testing is a bad idea, but to focus your time and energy where it matters most. When you’re undertaking a low risk activity, then invest less in managing risk. When you’re undertaking a high risk activity, invest more in managing risk, but make sure you’re investing in where the majority of the risk exists. Because, if the tornado is coming, it’s a really good idea to get your kids into the cellar first, before you go and fetch your favorite wash cloth or a box of pens.
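The scaling of effort with expected impact can be sketched as a per-tier activity list that deepens for high-risk changes. The activity names echo the ones mentioned in the post (static analysis, install/launch testing, UAT, full regression, canary sampling), but the specific mapping and the `plan_for` helper are illustrative assumptions:

```python
# Testing activities per tier, ordered cheapest to most expensive.
# Activity names are illustrative; map them onto your own process.
TIER_ACTIVITIES = {
    "platinum": ["static_analysis", "install_launch", "uat", "full_regression"],
    "gold":     ["static_analysis", "install_launch", "uat"],
    "silver":   ["static_analysis", "install_launch"],
    "bronze":   ["canary_sample"],
}

def plan_for(tier, high_risk_change=False):
    """Scale testing depth with the expected impact of the platform change."""
    activities = TIER_ACTIVITIES[tier]
    if high_risk_change or len(activities) == 1:
        return activities           # go deeper in each bin
    return activities[:-1]          # low-risk change: do less work per bin

print(plan_for("gold"))                         # ['static_analysis', 'install_launch']
print(plan_for("gold", high_risk_change=True))  # ['static_analysis', 'install_launch', 'uat']
```

The shape matters more than the specifics: every bin does something, but the bins that carry the most risk get the most depth.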