But none of this software is developed in isolation. Ask developers, and they’ll tell you that the infrastructure they have to deal with is more complex and interdependent than ever before. Much of software development is, in fact, an integration effort. IT development teams are building for myriad front-end web and mobile platforms, while also bringing along everything else the company has ever done. Enterprise development today relies on an ever-growing number of continually changing apps, data sources, everything-as-a-service offerings (IaaS, PaaS, SaaS, etc.), and integration middleware, all of which must play nicely with the real world and work properly for the organization.
Getting all of these distributed systems to an available and stable state for development and testing is like aligning the stars – it is nearly impossible. This lack of stable and available resources creates constraints that delay or prevent successful application deployments.
As an example of what companies face today, an IT leader at a large financial institution recently told me about a process his organization goes through three times a year. He’s dubbed it “enterprise release,” and it involves hundreds of applications, all pushed live simultaneously. These applications—some homegrown, some off-the-shelf, all highly customized—might address thousands of business and technical requirements. But none of the components can be effectively tested against any other systems, because nothing is ever really finished or ready at the same time, so they simply drop functionality out of each release. A development manager at the company summed up the situation this way: “I can’t do anything until I have everything, but I never have everything.”
The organization tried creating copies of production at huge expense, but those copies never provided enough stability for reliable results. The performance lab environment couldn’t sustain more than 10 percent of peak production load, so there was no way to test for scalability. And the code couldn’t be tested against the specialized data scenarios the teams needed, because they had no DBA access to the data (for example, a scenario where a specific customer should earn triple reward points after her third card purchase at a grocery store).
Sadly, that large financial institution is not alone. I’m sure many readers are nodding in agreement, having experienced similar less-than-ideal development scenarios that arise as a result of consumer-driven IT demands in a distributed software world. The goal of faster delivery drives many companies to implement software changes without forward visibility due to constraints in the environment. In the end, deadlines are missed, the IT organization’s reputation is further diminished, and in some cases (especially with well-known brands and public-facing applications), well-publicized failures make headlines. How can all of us in the business of developing software fix this problem, wherever we work?
We need to accept imitations, not limitations.
Unlike virtually all other manufacturing disciplines, the software development industry typically doesn’t validate its products in a simulator before finalizing and shipping its designs. Can you imagine Boeing taking an experimental wing, bolting it on an airplane in San Francisco, and seeing how well it works on the next scheduled flight to New York?
No, Boeing engineers wouldn’t dream of using the real thing. They test designs using a flight simulator and a wind tunnel, where any condition, from rainy days to high winds, can be simulated. In the same way, enterprise software should be engineered and tested using service virtualization, which simulates an application’s surrounding real-world environment, data scenarios, and workload.
Service virtualization “listens” to applications and the messages passed between systems. It then clones those underlying systems in a stable, scalable virtual service environment for software development teams to use. A service virtualization platform such as CA LISA behaves and reacts just like the actual production systems being updated, integrated, or otherwise leveraged. Virtual services can be infinitely customized and used by multiple development teams at the same time.
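To make the record-and-replay idea concrete, here is a minimal sketch of a virtual service. It stands in for a back-end system by replaying responses captured while “listening” to real traffic. All names and data here are hypothetical illustrations, and a commercial platform such as CA LISA works at the protocol level with stateful behavior, latency modeling, and data variation far beyond this toy.

```python
# Toy illustration of the record/replay concept behind service virtualization.
# The endpoints and payloads below are invented for this sketch.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

# "Recorded" traffic: request paths mapped to the responses the real
# back-end returned while the recorder was listening.
RECORDED = {
    "/accounts/42/balance": {"accountId": 42, "balance": 1250.75},
    "/rewards/42": {"accountId": 42, "points": 300, "multiplier": 3},
}

class VirtualService(BaseHTTPRequestHandler):
    """Replays captured responses so dev/test teams don't need the live system."""

    def do_GET(self):
        body = RECORDED.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

# To stand up a team's own copy (each team can run one on its own port
# with its own customized data set):
#     HTTPServer(("localhost", 8080), VirtualService).serve_forever()
```

Because the recorded responses are just data, each development team can copy and edit them to create the specialized scenarios a shared production copy can’t offer, and run as many isolated instances as they need in parallel.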
The result is the polar opposite of what most companies dealing with complex IT landscapes experience today: faster time to market, with rationally planned releases and few, if any, missed deadlines; lower development costs; and few or no embarrassing defects and performance issues escaping into production to vex end users and customers alike.
With the competition just a click away, isn’t it time for software development to behave more like a real engineering discipline? It’s time we embraced simulation to prove, perfect, and deliver new business and technical functionality without limitations.