Thoughts on enterprise IT

Dustin Amrhein

Surveying Cloud and Virtualization in Application Middleware

Examining trends and challenges in the field

For about two years now, I have had the opportunity to be out in the field talking with those in the IT trenches who create, deploy, and operate middleware application infrastructure. More to the point, I have been working with them on ways to leverage virtualization, automation, and policy-driven computing to make their work easier and more efficient.

While I hope I have been able to share some helpful information and introduce some useful tools in this space, I believe I have been the biggest beneficiary of these interactions. I learn something on every trip, and I thought it was about time I shared some of what I see in the field today. Therefore, in no particular order, here are five trends and challenges I see in the area of middleware application infrastructure with respect to cloud, virtualization, automation, and related technologies.

  • Virtualization as a de facto delivery model: Leveraging virtualization to deliver middleware application environments is no longer contrarian or cutting-edge. In fact, it is now a mostly mainstream scenario, especially for dynamic, temporal environments like development and test. As a result, users are looking for advanced virtualization techniques; in other words, more bang for the buck. They want images that not only encapsulate the installation of binaries, but also perform dynamic configuration updates and inter-component integration actions during image activation (a minimal sketch of such an activation step follows this list). Vendors are pushing into this space, and standards like the Open Virtualization Format (OVF) help drive innovation in this regard. Interestingly enough, more sophisticated images bring their own challenges, as you will see later.
  • Cultural challenges for the cloud: This is a topic I often bring up, but when I talk with users struggling to employ virtualization, automation, or cloud techniques, nine times out of ten the struggle is not technical. Rather, the real problem is confronting culture within the organization. This could mean learning how to work across organizational silos as they move toward self-service stack provisioning processes, or it could mean convincing developers that the machine hidden under their desk is of greater use when put into a shared pool of resources. Multiple things have to happen here, on both the provider and consumer sides. Consumers have to look past short-term pain to long-term value. Providers need to focus not only on showing compelling value in light of the short-term pain, but also on providing technical remediation for some of these issues. For instance, providers can address organizational silos by accounting for them in the design and implementation of their interfaces (e.g., well-defined user roles and access controls).
  • Workflow governance challenges for the cloud: Users are attempting to reconcile traditional workflow governance techniques with the new provisioning model put forth by the cloud. While the technology allows them to provision environments with unprecedented speed and potentially move toward a self-service model, their workflow governance processes simply do not work like this. If I can spin up an environment in 5 minutes from the time I decide to deploy until the time it is ready, that is great. However, if I have to go through a day-long ticket request process to start that deployment, that is not so good. This is the challenge facing many organizations, and providers need to ensure they can account for some type of integrated workflow governance around provisioning. Whether it is built directly into the system or delivered via integration hooks, providers cannot ignore this capability (the second sketch after this list shows one possible shape).
  • Automation enhancement: Automating data center operations has long been a priority for many organizations. Virtualization reinforces the desire to automate further, since it enables the automation of a significant portion of environment delivery. With the automation boon provided by virtualization, companies are looking to go further still, realizing that the deployment of middleware application environments does not happen in a vacuum. For instance, other processes, like the completion of an application build, often signal the need to deploy middleware infrastructure. In this scenario, users want to create one coherent flow that builds an application, deploys application infrastructure, and deploys the new application on top of that infrastructure (the third sketch after this list outlines such a flow). Providers need to be aware of this, as it heightens the need for remote interfaces into their systems. It is unlikely you will build a single system that can do it all. Rather, you want to build systems that deliver well-defined functionality, and ensure their openness to enable a more holistic approach to data center management.
  • Delivery model agnosticism: While virtualization is indeed mainstream, many companies are not necessarily using it throughout the lifecycle of an application environment. In fact, I talk with many users who employ virtualization to deliver application environments for development and test efforts, but who revert to running environments natively (directly on hardware) as they get closer to the production boundary. In some cases, though, there is a problem bridging the gap between the virtual and physical worlds to ensure that what they developed and tested is the same thing they put into production. Specifically, when using virtual images that encompass many of the administrative configuration and integration tasks, there is a concern about how accurately they can repeat those actions when setting up the environment manually. If you find yourself in this situation, the best approach is to decouple as many configuration and integration actions from the delivery process as possible. Whether delivering via virtualization or natively, delivery should encompass the installation and a minimal set of application infrastructure configuration. From that point, a second system provides any additional required configuration (e.g., installing applications, creating application resources). This second system should be able to operate on the application infrastructure regardless of how it is delivered, as the final sketch after this list illustrates.
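
To make the first point concrete, here is a minimal sketch of an activation script that turns deploy-time choices into configuration, assuming the hypervisor surfaces the OVF environment document as a local XML file. The file path, property names, and the two configuration helpers are hypothetical stand-ins for product-specific commands:

```python
import xml.etree.ElementTree as ET

OVF_ENV_PATH = "/opt/ovf/ovf-env.xml"  # hypothetical mount point
ENV_NS = "{http://schemas.dmtf.org/ovf/environment/1}"

def read_ovf_properties(path):
    """Collect key/value pairs from the OVF environment's PropertySection."""
    root = ET.parse(path).getroot()
    props = {}
    for prop in root.iter(ENV_NS + "Property"):
        key = prop.get(ENV_NS + "key")
        if key is not None:
            props[key] = prop.get(ENV_NS + "value")
    return props

def configure_datasource(host, port):
    # Stand-in for a product-specific configuration command.
    print(f"pointing datasource at {host}:{port}")

def join_cluster(address):
    # Stand-in for a product-specific inter-component integration step.
    print(f"registering with cluster manager at {address}")

def activate():
    """Run once at image activation: apply dynamic configuration updates."""
    props = read_ovf_properties(OVF_ENV_PATH)
    configure_datasource(props.get("db.host"), props.get("db.port"))
    join_cluster(props.get("cluster.manager"))

if __name__ == "__main__":
    activate()
```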
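
On workflow governance, an integrated approval gate around provisioning might take the following shape. The ticketing and provisioning endpoints are hypothetical placeholders; the point is the flow, not any specific product API:

```python
import json
import time
import urllib.request

TICKET_API = "https://tickets.example.com/api/requests"       # hypothetical
PROVISION_API = "https://cloud.example.com/api/environments"  # hypothetical

def _post(url, payload):
    # Small JSON-over-HTTP helper; any HTTP client would do.
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def _get(url):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def request_environment(spec):
    """Open a change ticket, block until it is approved, then provision."""
    ticket = _post(TICKET_API, {"action": "provision", "spec": spec})
    while _get(f"{TICKET_API}/{ticket['id']}")["status"] != "approved":
        # Polling keeps the sketch simple; a real system would more likely
        # register a callback with the ticketing system.
        time.sleep(60)
    return _post(PROVISION_API, spec)
```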
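
On automation enhancement, the "one coherent flow" could be orchestrated along these lines. Every function here is a hypothetical stand-in for a call to a system's remote interface:

```python
def provision_middleware(topology):
    # Stand-in for a call to the provisioning system's remote interface.
    print(f"provisioning {topology}")
    return {"endpoint": "https://env.example.com"}  # hypothetical handle

def deploy_application(env, artifact_url):
    # Stand-in for a call to the application deployment interface.
    print(f"deploying {artifact_url} to {env['endpoint']}")

def run_smoke_tests(env):
    # Stand-in for a verification step before handing the environment over.
    print(f"smoke testing {env['endpoint']}")

def on_build_complete(build):
    """One coherent flow: build completes -> infrastructure -> application."""
    env = provision_middleware(build["topology"])
    deploy_application(env, build["artifact_url"])
    run_smoke_tests(env)

if __name__ == "__main__":
    on_build_complete({"topology": "app server cluster",
                       "artifact_url": "https://builds.example.com/app.ear"})
```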
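
Finally, on delivery model agnosticism, here is a sketch of the decoupling I describe, with hypothetical helpers standing in for each delivery path and for the second configuration system:

```python
def activate_image(host):
    # Virtual path: the image carries binaries plus minimal configuration.
    print(f"activating virtual image on {host}")

def install_natively(host):
    # Native path: same binaries, same minimal configuration, on hardware.
    print(f"installing natively on {host}")

def apply_config(host, config_doc):
    # Stand-in for the second system: it only needs a remote interface to
    # the installed infrastructure, not knowledge of how it got there.
    print(f"applying {config_doc} to {host}")

def deliver(host, virtual=True):
    """Delivery stops at installation plus a minimal baseline."""
    if virtual:
        activate_image(host)
    else:
        install_natively(host)
    # Both paths end at the same baseline, so one configuration system
    # finishes the job regardless of the delivery model.
    apply_config(host, "application-resources.yaml")  # create resources
    apply_config(host, "applications.yaml")           # install applications

if __name__ == "__main__":
    deliver("test-host-01", virtual=True)
    deliver("prod-host-01", virtual=False)
```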

These are just a handful of the common issues I hear about, and a glimpse into how both providers and consumers can address them, and in many cases already are. If you are a practitioner in this space, or if, like me, you work directly with those practitioners, I am definitely interested in your take. Drop a comment here or contact me on Twitter @damrhein.

More Stories By Dustin Amrhein

Dustin Amrhein joined IBM as a member of the development team for WebSphere Application Server. While in that position, he worked on the development of Web services infrastructure and Web services programming models. In his current role, Dustin is a technical specialist for cloud, mobile, and data grid technology in IBM's WebSphere portfolio. He blogs at http://dustinamrhein.ulitzer.com. You can follow him on Twitter at http://twitter.com/damrhein.