Proof We Need a New State-of-the-Art

The high visibility surrounding the rollout should prompt all of us in the software development community to pause and think about the best way to deliver composite enterprise applications.

Against this backdrop, I’d like to address the challenge of delivering enterprise software under today’s time-to-market pressures. I find it ironic that it has been so difficult to get our industry to focus on the inadequacy of our test engineering, and yet that inadequacy is now the subject of congressional hearings and nightly newscasts.

What seems so en vogue right now is to question the commitment or skill of those involved in the rollout. This is entirely unfair. Let’s be honest: we have all taken part in software projects that have not met expectations. I am sure the engineers worked very hard, did everything they knew to do, and yet didn’t deliver the results they wanted. This is the problem: when smart people do the best they can and still fail, it’s clear that the state of the art is insufficient, and we need a new state of the art.

How many times have we heard about the need for a pristine, end-to-end test environment? This presupposes that everything else in the world has to be complete, perfect, and made available before your part can be made ready. This is the Catch-22 that has plagued this rollout and so many other software projects.

Have you ever heard Intel blame Dell for an inability to build a good CPU? Have you ever heard of an airplane wing designer waiting for the wheel assembly to be complete before ensuring that the wing works correctly? The truth is that the software development community has not reconciled the challenges of delivering composite systems made of many discrete components, all being built and tested in parallel. The notion that we will verify a component works only at assembly time is inherently flawed and counterproductive. A computer would never be delivered if we first had to make sure the CPU works by sticking it in a motherboard. And we won’t get good software when our prerequisite to quality requires everything else to be built, assembled, and already of high quality. This is a pipe dream.

The following are four key concepts I believe can improve composite software development and help us achieve the delivery predictability and quality that we are investing to receive.

1) We must design, build, and test components in a fully simulated environment. The CPU engineer knows early in the design exactly how the device will connect to other systems, and knows to build a test harness that fully simulates the expected behaviors of the outside system, so the CPU can be fully tested as a discrete component without needing the actual system the CPU will see in the real world. The CPU “thinks” it is operating in a real computer because the harness provides something so close to the live environment.

This type of simulation for software is what we call Service Virtualization, a technology we developed at CA Technologies years ago. Customers use it to ensure quality, from both a feature-function and a performance perspective, long before integration testing ever starts. This makes component development a bit faster and testing dramatically faster. I’ve written about how FedEx, Sprint, First Data, and other companies faced the same challenges as this and so many other enterprise applications, and how Service Virtualization helped them address those challenges.
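The test-harness idea can be sketched in a few lines of Python. This is a minimal, illustrative sketch of the concept, not the CA LISA API: the `VirtualService` and `checkout` names are assumptions, and the point is that a stand-in replays recorded producer behavior so the consuming component can be tested with no live dependency.

```python
# Minimal sketch of service virtualization: the component under test talks
# to a stand-in that replays recorded behavior of the real producer.
# VirtualService and checkout are illustrative names, not a product API.

class VirtualService:
    """Replays canned responses so a component can be tested in isolation."""

    def __init__(self):
        self._behaviors = {}

    def when(self, request, then_respond):
        # Record the expected behavior of the real producing system.
        self._behaviors[request] = then_respond

    def call(self, request):
        if request not in self._behaviors:
            raise RuntimeError(f"unexpected request: {request!r}")
        return self._behaviors[request]


def checkout(order_total, payment_service):
    """Component under test: depends only on the service interface."""
    result = payment_service.call(("charge", order_total))
    return result["status"] == "approved"


# The component believes it is talking to the real system.
stub = VirtualService()
stub.when(("charge", 42.00), {"status": "approved"})
assert checkout(42.00, stub) is True
```

Because `checkout` depends only on the interface, the same code runs unchanged against the real service later; the stub exists so testing never has to wait for it.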

2) We need a Continuous Delivery system that builds and promotes environment changes through the lifecycle in a fully automated fashion. If you have been in software as long as I have, you know that changes often sit for days, sometimes weeks, waiting on environments. And even when those changes do reach the environment, you too often find out that they were applied incorrectly. Modeling the application and infrastructure into a Continuous Delivery system and making that system responsible for the deployment and promotion of changes across environments will accelerate implementation. Most importantly, it also allows for the fast, safe rollback of changes when unexpected failures occur.
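The promote-and-rollback flow can be sketched as follows. This is a hedged illustration, not a specific product's pipeline: `deploy` and `healthy` are hypothetical hooks standing in for real deployment tooling and smoke tests, and the environment names are assumptions.

```python
# Sketch of automated promotion with rollback. deploy() and healthy() are
# placeholders for real deployment tooling and health checks.

ENVIRONMENTS = ["dev", "qa", "staging", "production"]


def promote(build, environments, deploy, healthy, running):
    """Promote `build` through each environment in order.

    `running` maps environment -> currently deployed build. On a failed
    health check, the last known-good build is redeployed automatically
    and promotion stops, returning the environment where it failed.
    """
    for env in environments:
        previous = running.get(env)
        deploy(env, build)
        if healthy(env, build):
            running[env] = build  # this build is now the known-good one
        else:
            if previous is not None:
                deploy(env, previous)  # fast, safe rollback
            return env
    return None  # reached production successfully


# Example: build "v2" passes everywhere except staging.
log = []
running = {env: "v1" for env in ENVIRONMENTS}
stopped_at = promote(
    "v2",
    ENVIRONMENTS,
    deploy=lambda env, build: log.append((env, build)),
    healthy=lambda env, build: env != "staging",
    running=running,
)
```

The key property is that the system, not a person, owns both the promotion and the rollback, so a bad change never strands an environment in an unknown state.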

3) We need to embrace the notion of consumer and producer governance when it comes to integration testing. Today, most integration testing is functional testing of the entire system. Functional testing is important, but integration testing should focus on the behavior that consuming components expect from producing components, and indicate much earlier when those expectations are broken. This does not require a myriad of end-to-end test environments. It just requires the proper application of Service Virtualization and API test automation.
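A consumer-side contract check can be expressed in a few lines. The field names and contract shape below are illustrative assumptions, not a specific tool's format; the idea is that the consumer states exactly what it depends on, and the check fails fast when the producer's response drifts, with no end-to-end environment required.

```python
# Sketch of consumer-driven contract checking: the consumer declares the
# fields and types it depends on; the check flags drift in the producer's
# response shape. Field names here are illustrative.

CONSUMER_CONTRACT = {
    "order_id": str,
    "status": str,
    "total": float,
}


def meets_contract(response, contract):
    """True if every field the consumer needs is present with the right type."""
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )


good = {"order_id": "A-100", "status": "shipped", "total": 19.99, "extra": 1}
bad = {"order_id": "A-100", "status": 200}  # type changed, 'total' dropped

assert meets_contract(good, CONSUMER_CONTRACT)      # extra fields are fine
assert not meets_contract(bad, CONSUMER_CONTRACT)   # breakage caught early
```

Run against a virtualized producer in the consumer's own build, this kind of check surfaces broken expectations at commit time rather than at system assembly.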

4) Finally, we need a feedback loop to help us better understand how the application is behaving and performing in production, so we can feed that insight back into development and test activities. In today’s highly composite systems, it is very hard for individual development teams to establish the common use cases, data scenarios, performance, scalability, and error conditions encountered by software running in production. The people who build software are often not the people who run it. As we make fast changes without this feedback loop, we run a higher risk of failure. One simple example of this feedback loop is to leverage production-observed transaction profiles and response-time expectations for performance-engineering load patterns and response-time trend analysis.
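The transaction-profile example can be sketched like this. The input format is a hypothetical simplification (pairs of transaction name and response time in milliseconds, as might be extracted from production monitoring); the output is a traffic mix and per-transaction 95th-percentile latency that a load test could replay.

```python
# Sketch: turn production-observed transactions into a load-test profile.
# The (name, response_time_ms) input format is a hypothetical simplification
# of what production monitoring would export.

from collections import Counter


def load_profile(observations):
    """Relative traffic mix plus 95th-percentile response time per transaction."""
    counts = Counter(name for name, _ in observations)
    total = sum(counts.values())
    profile = {}
    for name in counts:
        times = sorted(t for n, t in observations if n == name)
        # Simple nearest-rank 95th percentile, clamped to the last sample.
        p95 = times[min(len(times) - 1, int(0.95 * len(times)))]
        profile[name] = {"share": counts[name] / total, "p95_ms": p95}
    return profile


obs = [("login", 120), ("login", 180), ("search", 300), ("login", 150), ("search", 900)]
prof = load_profile(obs)
# prof["login"]["share"] is 0.6: logins should be 60% of the generated load.
```

Driving load tests from this profile, and trending the percentiles release over release, closes the loop between the people who run the software and the people who change it.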

I encourage you to learn more about these concepts, which are helping us address some of the most difficult composite enterprise software challenges—not just in healthcare, but in practically every industry.


John Michelsen

Chief Technology Officer at CA Technologies
As the Chief Technology Officer of CA Technologies, John is responsible for technical leadership and innovation, further developing the company’s technical community, and aligning its software strategy, architecture and partner relationships to deliver customer value. John is also responsible for delivering the company's common technology services, ensuring architectural compliance, and integrating products and solutions. John holds multiple patents including market-leading inventions delivered in database, distributed computing, virtual/cloud management, multi-channel web application portals and Service Virtualization (LISA). In 1999, John founded ITKO, and built LISA from the ground up to optimize today's heterogeneous, distributed application environments. Under his leadership, LISA’s platform for agile development grew in breadth and depth. The company was acquired by CA Technologies in 2011. CA LISA’s suite reshapes customers’ software lifecycles with dramatic results. Today, it delivers 1000%+ ROI for customers and is a lead offering in the Service Virtualization market. Prior to ITKO, John led SaaS and E-commerce transformations for global enterprises at Trilogy and elsewhere. He also founded a boutique custom software firm that focused on distributed, mission-critical application development projects for customers like American Airlines, Citibank and Xerox. John earned degrees in business and computer science from Trinity University and Columbus University. He has authored a best practices book, “Service Virtualization: Reality is Overrated,” which will be available this fall. He has contributed to dozens of leading technical journals and publications on topics ranging from hierarchical database techniques and agile development to virtualization.


Comments

  1. The article seems well grounded in reality, keeping in mind the challenges and the differences in approach. We in India are way behind, but conceptually this thought can be shared with those “who matter” and create an opportunity for CA.

  2. […] Capacity blog in response to a Wall Street Journal Blogs item. Given the recent news surrounding the not-so-smooth rollout, we thought this advice would be […]

  3. Well… in the earlier days of development, I still believe that when one developer agreed with another on the request and response XML, they would make quick stubs and continue development. I didn’t understand why this practice never took off in a big way.

    The article above nicely covers where we missed it.

  4. Well, the state of the art is here (Continuous Delivery, DevOps, Lean-Agile practices, Scaled Agile Framework, etc.) – large traditional orgs need to refresh their skill set. But the state of the art would not even have been necessary; even basic Agile and Continuous Integration would have gotten at least basic inter-component testing done. For insight into the challenges, check out what Jeff Sutherland (creator of Scrum) had to say:

