The five steps to managing application readiness for Windows 7
Like most big undertakings, the challenge isn't insurmountable if you take the time to deconstruct the problem into logical, manageable tasks.
An application readiness project falls into three major phases: Collecting, Analyzing and Mitigating. However, there are a couple of additional steps worth calling out: consider virtualization technologies before you commence the testing regimen, both to reduce the testing burden and to improve your desktop infrastructure so future migrations are more manageable; and sequence the testing phase to align with your roll-out strategy.
If you're ready to dive in, let's get started.
Step 1: Collect an application inventory
The first step is to take an application inventory to understand exactly where you stand—and believe us, at this point you've probably just realized the problem is bigger than you thought. More importantly, though, you've just turned an 'unknown' into a 'known' and are in a better position to scope the testing and readiness program and understand the challenges ahead.
Fortunately there are a number of tools available that can help automate the process. Your client management software might have this capability built-in, or you can also use the Application Compatibility Toolkit, available for free download. If you already have another inventory mechanism like System Center Configuration Manager, Asset Inventory Service or other, you can use that as a starting point.
To make the inventory most useful downstream, capture more than just a list of applications—you'll want to understand more detail on who is using an application, what their role is, and how important that application is to the user. With this information, you can prioritize those mission-critical applications and eliminate unused or redundant applications (more on that in the next step).
Also, there's a side benefit—identifying widely-used applications that you don't currently manage. You'll want to get these into your orbit so you can ensure they are properly managed, on the approved version and have the required software updates.
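An inventory with this extra context—who uses the app, in what role, and how critical it is—can be kept in any simple structure and sorted for testing priority. A minimal sketch in Python, where all application names, field names and numbers are hypothetical examples rather than output from any particular inventory tool:

```python
# Hypothetical inventory records; the apps, departments and counts here
# are invented for illustration, not pulled from a real inventory tool.
inventory = [
    {"app": "FinanceSuite 8", "users": 120, "dept": "Finance", "critical": True},
    {"app": "OldDrawTool",    "users": 3,   "dept": "Design",  "critical": False},
    {"app": "TimeTracker",    "users": 400, "dept": "All",     "critical": True},
]

def prioritize(records):
    """Order apps for testing: mission-critical first, then by user count."""
    # 'not critical' sorts False (critical) before True; negative user count
    # puts the most widely used apps first within each tier.
    return sorted(records, key=lambda r: (not r["critical"], -r["users"]))

for record in prioritize(inventory):
    print(record["app"])
```

The point is simply that capturing usage and criticality up front turns prioritization into a mechanical sort rather than a debate.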
Step 2: Analyze your applications
How many applications do you currently support that have been replaced or have otherwise fallen out of favor with business users? If you're like most organizations, a sizable number of them—in some cases most of them. So once you've done your assessment and have a good 'lay of the land,' the next step is to scrub your supported application list and filter it down, before you undertake the time-consuming and costly process of regression testing.
Set appropriate goals for your application portfolio. How many total apps do you want to support? At what point does an app elevate to "managed" status?
After you set your goals, it's now time to find the low hanging fruit and narrow down the applications that need testing.
Eliminate redundant and unused applications. You'll undoubtedly find that you have several applications that perform the same function. Now is a good time to standardize on a single application per function, and eliminate those that have been made obsolete. One tip here is to try to map application dependencies, as you may need to support a legacy version of one application to keep another one supported by the ISV. And of course, drop those that are rarely or never used. Not only will you make testing easier, you might save on licensing expense as well.
Remove multiple versions of the same application and standardize on the most current. In almost all cases, the newest version performs best and is the most secure and reliable. Again, watch for application-to-application dependencies.
Collect information from business users to help prioritize those apps that are mission critical, and determine which departments are using which apps. This will be useful when you sequence your testing process; you'll want to align the timing of your testing to your staged roll-out of the new desktop image.
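The dependency-mapping tip above matters when you start dropping applications: an app that nobody launches directly may still be required by one you're keeping. A minimal sketch of that check, with an invented dependency map (the application names are illustrative only):

```python
# Hypothetical dependency map: app -> list of apps it depends on.
# All names are invented for illustration.
depends_on = {
    "ReportBuilder": ["LegacyRuntime 2.0"],  # keeping ReportBuilder pins the runtime
    "LegacyRuntime 2.0": [],
    "OldDrawTool": [],
}

def safe_to_remove(app, dep_map):
    """An app can be dropped only if no other app in the map depends on it."""
    return all(app not in deps for other, deps in dep_map.items() if other != app)
```

Here `OldDrawTool` can be retired, but `LegacyRuntime 2.0` cannot, because `ReportBuilder` still needs it—exactly the ISV-support situation described above.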
Step 3: Assess incompatibilities and mitigation options
No doubt you will find some applications that need some work to get them ready for Windows 7. At this point you have several options:
1. You can replace the non-compatible application with a new version. Certainly the most reliable method, but unfortunately, the most expensive as well. If the application is mission-critical or otherwise strategic to operations, this is the way you'll want to go.
2. Create shims for your existing applications. Shims are small pieces of code inserted between the application and Windows to modify calls to the underlying OS—for instance, to trick your application into thinking the user is running as an administrator, while still maintaining standard user mode. You will have some management overhead, since you'll need to maintain a shim database, but this approach will remedy many application problems. This is the more cost-effective route, and might be the only option if the application vendor is no longer around. One caveat—many vendors will not support shimmed applications.
3. You can use Group Policy to change the offending behavior of the application. Like shimming, this will usually take care of the compatibility problem but carries some downsides as well. Essentially this approach uses policy to disable a particular feature or function that is causing the application to falter. Unfortunately, in many cases these functions involve the security of the underlying system, so the trade-off is significant. Likewise, the application must expose Group Policy settings to enable this manageability.
For custom or in-house developed applications, you can of course modify the code. This isn't always an option, but if it is, there are great resources to help—the Application Compatibility Cookbook for changes made from Windows XP to Windows Vista, and the Application Quality Cookbook for changes made from Windows Vista to Windows 7. Both are free guides that help developers recode an application for native compatibility.
Step 4: Prepare for the OS deployment and new application delivery options
The start of an OS migration project is a great time to rethink how you package and deliver applications to your end users. Virtualization technologies have opened up options that simply weren't available for the last major OS migration; you should consider different models for desktop image and application delivery before beginning the testing process. You might find that the savings in application testing and readiness more than offsets the cost of implementing a virtualized environment—while providing a more flexible and easier-to-manage environment for future efforts.
There are two major forms of virtualization that can address application compatibility issues—application virtualization and OS virtualization. Application virtualization separates the application layer, including the application's files and registry settings, from the OS, and packages the application for streaming. OS virtualization comes in a few different forms, but essentially creates an OS image independent of the native image on the machine.
Virtualizing your application portfolio provides a number of benefits for manageability and flexibility, but one key advantage is that you minimize application-to-application conflicts. This type of conflict arises, for instance, when you need to run two versions of the same application simultaneously—common in training situations where you want to compare the process of conducting a specific task in an old versus new application, or when the finance department is migrating to a newer version of their accounting software but needs access to the old one to close the fiscal year.
A more general use of virtualization to overcome application compatibility is to create a virtual image that contains a critical application and the operating system it is designed to run on. There are several tools to enable OS virtualization, from Virtual PC and Windows XP Mode in Windows 7 Professional and higher SKUs (an unmanaged virtual image that will run applications intended for Windows XP but not compatible with Windows 7) to Microsoft Enterprise Desktop Virtualization (MED-V), in the Microsoft Desktop Optimization Pack (MDOP), which enables a virtual machine to be easily provisioned, configured and managed using policies to determine how the physical and virtual environments interact with one another.
Of course, adopting an alternative computing model for your client PCs is an undertaking in its own right, but this would be the time to assess whether the benefits to your organization—greater flexibility and manageability—outweigh the additional effort to adopt this model for PC provisioning.
Step 5: Sequence your testing, piloting and deployment efforts
Use your prioritization from step 2 to sequence your testing efforts, so you can begin the staged roll-out early while conducting subsequent testing in parallel.
As you begin the testing process, you can use two approaches—static and dynamic analysis; while static analysis is relatively new, a thorough testing regimen will use both.
Static analysis looks at the structure of the application and identifies issues that will undoubtedly arise, either at installation or at runtime. There are a number of tools and services that can help automate this process, and they will quickly highlight the obvious problems.
Dynamic analysis looks at the behavior of the application at runtime, and is what is traditionally done in regression testing. Here, you are "smoke testing" the application in your specific environment—replicating the experience a variety of users will have with their hardware and the other key applications and drivers.
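Static analysis is essentially rule matching: the tool inspects an application's installer and metadata for patterns known to break on the new OS. A toy sketch of that idea—real tools inspect binaries and install packages, and the rule names and metadata flags here are invented for illustration:

```python
# Illustrative static-analysis rules: flag names and descriptions are
# hypothetical stand-ins for the checks a real compatibility tool performs.
RULES = {
    "requires_admin": "Assumes administrator rights at runtime",
    "writes_program_files": "Writes user data under Program Files",
    "uses_16bit_installer": "16-bit installer will not run on 64-bit Windows",
}

def static_check(app_metadata):
    """Return the descriptions of every rule this app's metadata trips."""
    return [desc for flag, desc in RULES.items() if app_metadata.get(flag)]

findings = static_check({"requires_admin": True, "writes_program_files": True})
```

Because these checks never run the application, they are cheap to apply across the whole portfolio—which is why a thorough regimen uses static analysis to triage before the slower dynamic testing.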
Finally, you will want to get a handful of real users running the applications and looking for any strange behavior that hasn't surfaced in the structured testing.The promise of keeping the new PC for participation can be a great motivator here!
Once you are ready to start rolling out into production, identify the people for whom a migration makes sense first—based on specific capabilities they need, or to minimize business disruption. Migrating a group of expert users will be easier than dealing with the help desk calls from task workers who are now looking at an unfamiliar screen and don't know what to do with it. Next, identify which applications these groups will need to perform their work. Start with groups that are minimally affected, or unaffected, by application compatibility based on the applications they use; this will enable you to validate the deployment process and the operating system. As you work through your application portfolio and more groups become unblocked from incompatible applications, target those groups in turn.
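The "least affected first" ordering can be computed directly from the inventory: count how many of each group's applications are still incompatible, and roll out in ascending order. A sketch with invented department and application names:

```python
# Hypothetical department-to-application mapping and the set of apps still
# known to be incompatible; all names are invented for illustration.
group_apps = {
    "Sales":   {"CRMClient", "TimeTracker"},
    "Finance": {"FinanceSuite 8", "TimeTracker", "LegacyReports"},
    "HR":      {"TimeTracker"},
}
incompatible = {"LegacyReports", "FinanceSuite 8"}

def rollout_order(groups, blocked):
    """Sequence groups by how many of their apps remain incompatible."""
    return sorted(groups, key=lambda g: len(groups[g] & blocked))

order = rollout_order(group_apps, incompatible)
```

As mitigation work clears applications off the incompatible list, re-running the same calculation shows which group becomes unblocked next.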
One final word of caution—avoid taking the process too far. If you let the scope creep from application compatibility to a full-blown application quality project, you might never finish. Accept the goal of fixing bugs that prevent work from being done, and avoid trying to eliminate every bug that exists—you undoubtedly have better use for your time!