The high visibility surrounding the healthcare.gov rollout should prompt all of us in the software development community to pause and think about the best way to deliver composite enterprise applications.
Against this backdrop, I’d like to address the challenge of delivering enterprise software under today’s time-to-market pressures. Personally, I find it ironic that, while it is usually difficult to get our industry to focus on the inadequacy of its test engineering, that inadequacy is now the subject of congressional hearings and nightly newscasts.
What seems so en vogue right now is to question the commitment or skill of those involved in healthcare.gov. This is entirely unfair. Let’s be honest, we have all taken part in software projects that have not met expectations. I am sure that healthcare.gov engineers worked very hard, did everything they knew to do, and yet didn’t deliver the results they wanted. This is the problem: When smart people do the best they can and fail, it’s clear that the state-of-the-art is insufficient, and we need a new state-of-the-art.
How many times have we heard about the need for a pristine, end-to-end test environment? This presupposes that everything else in the world has to be finished, perfected, and made available before your part can be made ready. This is the Catch-22 that plagued healthcare.gov and has plagued so many other software projects.
Have you ever heard Intel blame Dell for an inability to build a good CPU? Have you ever heard of an airplane wing designer waiting for the wheel assembly to be complete before ensuring that the wing works correctly? The truth is that the software development community has not reconciled the challenges of delivering composite systems made of many discrete components, all being built and tested in parallel. The notion that we will make sure a component works at assembly time is inherently flawed and counterproductive. A computer would never be delivered if we first had to make sure the CPU works by sticking it in a motherboard. And, we won’t get good software when our prerequisite to quality requires everything else to be built, assembled, and of high-quality already. This is a pipe dream.
The following are four key concepts I believe can improve composite software development and help us achieve the delivery timelines and quality we are investing to receive.
1) We must design, build and test components in a purely simulated environment. The CPU engineer knows, early in the design, exactly how the device will connect to other systems. And the CPU engineers know to build a test harness that fully simulates the expected behaviors of the outside system, so the CPU can be fully tested as a discrete component, without the actual system the CPU will see in the real world. The CPU thinks it’s operating in a real computer because the harness provides something so close to the live environment.
This type of simulation for software is what we call Service Virtualization, a technology we developed at CA Technologies years ago. Customers use it to ensure quality from a feature-function and performance perspective long before integration testing ever starts. This makes component development a bit faster and testing dramatically faster. I’ve written about how FedEx, Sprint, First Data, and other companies faced the same challenges as healthcare.gov and so many other enterprise applications, and how Service Virtualization helped them address those challenges.
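To make the idea concrete, here is a minimal sketch of a virtual service: an in-process stand-in that answers a component’s requests the way the real dependency would, so the component can be tested before that dependency exists. Everything here (the `VirtualService` class, the eligibility endpoint, the response fields) is invented for illustration and is not CA’s actual product API.

```python
class VirtualService:
    """Simulates a downstream dependency with canned request/response pairs."""

    def __init__(self):
        self._responses = {}

    def stub(self, method, path, body, status=200):
        # Register the simulated behavior for one request shape.
        self._responses[(method, path)] = (status, body)

    def handle(self, method, path):
        # Return the canned response, or 404 for anything not yet modeled.
        return self._responses.get((method, path), (404, {"error": "not stubbed"}))


# The component under test talks to the virtual service exactly as it would
# talk to the live system -- no real backend required.
eligibility = VirtualService()
eligibility.stub("GET", "/applicants/42/eligibility",
                 {"eligible": True, "plan": "silver"})

status, body = eligibility.handle("GET", "/applicants/42/eligibility")
```

In practice a virtual service also simulates latency and error conditions, which is what lets performance testing start before integration.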
2) We need a Continuous Delivery system that builds and promotes environment changes through the lifecycle in a fully automated fashion. If you have been in software as long as I have, you know that changes often sit for days and sometimes weeks, waiting on environments. And even when those changes do get to the environment, you find out that they were applied incorrectly. Modeling the application and infrastructure into a Continuous Delivery system and making that system responsible for the deployment and promotion of changes across environments will accelerate implementation. Most importantly, it also allows for the fast, safe rollback of changes when unexpected failures occur.
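The promote-and-rollback behavior described above can be sketched in a few lines. The environment model, the `deploy` and `health_check` hooks, and the version strings are all hypothetical; a real Continuous Delivery system would drive actual infrastructure rather than a dictionary.

```python
def promote(env, new_version, deploy, health_check):
    """Deploy new_version to env; roll back to the prior version on failure."""
    previous = env.get("version")
    deploy(env, new_version)
    env["version"] = new_version
    if not health_check(env):
        # Fast, safe rollback: redeploy the last known-good version.
        deploy(env, previous)
        env["version"] = previous
        return False
    return True


# Usage: a toy staging environment where version "2.0" fails its health check.
staging = {"name": "staging", "version": "1.9"}
deployed = []

def fake_deploy(env, version):
    deployed.append((env["name"], version))

result = promote(staging, "2.0", fake_deploy,
                 health_check=lambda env: env["version"] != "2.0")
```

The point of the sketch is that promotion and rollback are one automated, repeatable action, not a manual checklist that can be applied incorrectly.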
3) We need to embrace the notion of consumer and producer governance when it comes to integration testing. Today, most integration testing is functional testing of the entire system. Functional testing is important, but integration testing should focus on the expected behavior of producing components by the consuming components, and indicate much earlier when those expectations are broken. This does not require a myriad of end-to-end test environments. It just requires the proper application of Service Virtualization and API test automation.
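A consumer-producer check might look like the sketch below: the consuming component declares exactly which fields and types it relies on, and every producer build is verified against that declaration. The field names and the `check_contract` helper are invented for illustration; tools such as Pact formalize this consumer-driven contract pattern.

```python
def check_contract(expectations, response):
    """Return the list of consumer expectations the producer response breaks."""
    violations = []
    for field, expected_type in expectations.items():
        if field not in response:
            violations.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            violations.append(f"wrong type for {field}")
    return violations


# The consuming component states what it depends on ...
consumer_expectations = {"applicant_id": str, "eligible": bool, "premium": float}

# ... and a producer change that ships premium as a string is caught
# immediately, long before any end-to-end environment is assembled.
producer_response = {"applicant_id": "42", "eligible": True, "premium": "129.99"}
broken = check_contract(consumer_expectations, producer_response)
```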
4) Finally, we need a feedback loop to help us better understand how the application is behaving and performing in production, so we can drive better development and test activities. In today’s highly composite systems, it is very hard for individual development teams to establish the common use cases, data scenarios, performance, scalability, and error conditions encountered by software running in production. The people who build software are often not the people who run it. As we make fast changes without this feedback loop, we run a higher risk of failure. One simple example of this feedback loop is to leverage production-observed transaction profiles and response-time expectations to shape performance-engineering load patterns and response-time trend analysis.
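As a hedged sketch of that example, the snippet below turns a sample of production-observed transactions into a load-test profile: each transaction type’s share of traffic plus a p95 response-time target. The log record format, the field names, and the choice of the 95th percentile are all assumptions for illustration.

```python
from collections import Counter

def load_profile(transactions):
    """Compute each transaction type's traffic share and p95 latency (ms)."""
    counts = Counter(t["name"] for t in transactions)
    total = sum(counts.values())
    profile = {}
    for name, count in counts.items():
        times = sorted(t["ms"] for t in transactions if t["name"] == name)
        # Index of the 95th-percentile observation, clamped to the last sample.
        p95 = times[min(len(times) - 1, int(len(times) * 0.95))]
        profile[name] = {"weight": count / total, "p95_ms": p95}
    return profile


# Toy production sample: mostly searches, a few enrollments.
observed = [{"name": "search", "ms": m} for m in (40, 55, 60, 200)] + \
           [{"name": "enroll", "ms": m} for m in (300, 320)]
profile = load_profile(observed)
```

The resulting weights drive the load generator’s transaction mix, and the p95 values become the thresholds for trend analysis as new builds are promoted.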
I encourage you to learn more about these concepts, which are helping us address some of the most difficult composite enterprise software challenges—not just in healthcare, but in practically every industry.
Latest posts by John Michelsen
- Healthcare.gov: Proof We Need a New State-Of-The-Art - October 29, 2013