Healthcare.gov: Proof We Need a New State of the Art

The high visibility surrounding the healthcare.gov rollout should require all of us in the software development community to pause and think about the best way to approach delivering composite enterprise applications.

Against this backdrop, I’d like to address the challenge of delivering enterprise software under today’s time-to-market pressures. Personally, I find it ironic that it is so difficult to get our industry to focus on the inadequacy of our test engineering, and yet that very inadequacy is now the subject of congressional hearings and nightly newscasts.

What seems to be in vogue right now is to question the commitment or skill of those involved in healthcare.gov. This is entirely unfair. Let’s be honest: we have all taken part in software projects that have not met expectations. I am sure the healthcare.gov engineers worked very hard, did everything they knew to do, and still didn’t deliver the results they wanted. And that is precisely the problem: when smart people do the best they can and fail, the state of the art is insufficient, and we need a new state of the art.

How many times have we heard about the need for a pristine, end-to-end test environment? This presupposes that everything else in the world must be finished, perfected, and made available before your part can be made ready. This is the Catch-22 that plagued healthcare.gov and has plagued so many other software projects.

Have you ever heard Intel blame Dell for an inability to build a good CPU? Have you ever heard of an airplane wing designer waiting for the wheel assembly to be complete before ensuring that the wing works correctly? The truth is that the software development community has not reconciled the challenges of delivering composite systems made of many discrete components, all being built and tested in parallel. The notion that we will make sure a component works at assembly time is inherently flawed and counterproductive. A computer would never be delivered if we first had to make sure the CPU works by sticking it in a motherboard. And we won’t get good software when our prerequisite for quality is that everything else already be built, assembled, and of high quality. This is a pipe dream.

The following are four key concepts I believe can improve composite software development and help us achieve the delivery speed and quality we are investing to receive.

1) We must design, build, and test components in a purely simulated environment. CPU engineers know early in the design exactly how the device will connect to other systems, so they build a test harness that fully simulates the expected behaviors of the outside system. The CPU can then be fully tested as a discrete component, without the actual system it will encounter in the real world. The CPU “thinks” it is operating in a real computer because the harness provides something so close to the live environment.

This type of simulation for software is what we call Service Virtualization, a technology we developed at CA Technologies years ago. Customers use it to ensure quality from a feature-function and performance perspective long before integration testing ever starts. This makes component development a bit faster and testing dramatically faster. I’ve written about how FedEx, Sprint, First Data, and other companies faced the same challenges as healthcare.gov and so many other enterprise applications, and how Service Virtualization helped them address those challenges.
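To make the idea concrete, here is a minimal sketch of the concept in Python. This is illustrative only, not CA’s Service Virtualization product; the eligibility endpoint, port, and response shape are all hypothetical. A local stub stands in for a downstream service so the consuming component can be built and tested before the real service exists.

```python
# Illustrative sketch only, not CA's Service Virtualization product.
# A local stub simulates a downstream "eligibility" service (hypothetical
# endpoint and response shape) so the consumer can be tested in isolation.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class VirtualEligibilityService(BaseHTTPRequestHandler):
    """Plays the role of the real, possibly not-yet-built, service."""

    def do_GET(self):
        # Canned response modeling the contract the live service will honor.
        body = json.dumps({"applicantId": "42", "eligible": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep test output quiet

if __name__ == "__main__":
    server = HTTPServer(("localhost", 8099), VirtualEligibilityService)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The component under test calls the stub exactly as it would call the
    # real system; it cannot tell the difference.
    with urlopen("http://localhost:8099/eligibility?applicantId=42") as resp:
        result = json.loads(resp.read())
    assert result["eligible"] is True
    server.shutdown()
    print("consumer verified against the virtual service")
```

The stub itself is trivial, but the discipline matters: the consuming component exercises the same contract it will see in production, long before an integrated environment exists.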

2) We need a Continuous Delivery system that builds and promotes environment changes through the lifecycle in a fully automated fashion. If you have been in software as long as I have, you know that changes often sit for days, sometimes weeks, waiting on environments. And even when those changes do reach an environment, you often find they were applied incorrectly. Modeling the application and infrastructure into a Continuous Delivery system, and making that system responsible for deploying and promoting changes across environments, will accelerate implementation. Most importantly, it also allows for the fast, safe rollback of changes when unexpected failures occur.
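As a rough illustration of the promotion-and-rollback idea (the environment names, versions, and smoke test below are invented, and no particular Continuous Delivery product is implied), consider a sketch in which the pipeline, not a person, applies each change and reverts it automatically when verification fails:

```python
# Hypothetical sketch of automated promotion with rollback; the environment
# names, versions, and verification check are all invented for illustration.

class Environment:
    def __init__(self, name):
        self.name = name
        self.history = []  # every version deployed here, oldest first

    def deploy(self, version):
        self.history.append(version)
        print(f"[{self.name}] deployed {version}")

    def rollback(self):
        failed = self.history.pop()
        restored = self.history[-1] if self.history else None
        print(f"[{self.name}] rolled back {failed}, restored {restored}")

def promote(version, environments, verify):
    """Push one modeled change through the lifecycle; stop and roll back
    the moment any environment's verification fails, so later stages
    (and production) never see the bad change."""
    for env in environments:
        env.deploy(version)
        if not verify(env, version):
            env.rollback()
            return False
    return True

if __name__ == "__main__":
    pipeline = [Environment(n) for n in ("dev", "qa", "staging", "production")]
    for env in pipeline:
        env.deploy("v1.0")  # known-good baseline everywhere

    def smoke_test(env, version):
        # Hypothetical check: pretend v1.1 fails only in staging.
        return not (env.name == "staging" and version == "v1.1")

    # v1.1 is rolled back in staging automatically and never reaches production.
    promote("v1.1", pipeline, smoke_test)
```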

3) We need to embrace the notion of consumer and producer governance when it comes to integration testing. Today, most integration testing is functional testing of the entire system. Functional testing is important, but integration testing should focus on whether producing components still meet the behavior their consuming components expect, and it should indicate much earlier when those expectations are broken. This does not require a myriad of end-to-end test environments; it requires only the proper application of Service Virtualization and API test automation.
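Here is a minimal sketch of what consumer-producer contract checking can look like; the contract format and field names are hypothetical, not any specific tool’s API. The consumer declares the fields and types it depends on, and an automated check fails the moment a producer build stops honoring them, with no end-to-end environment required.

```python
# Hypothetical contract format and field names, not a specific tool's API.

# Expectations the consuming component declares against a producer's API.
CONSUMER_CONTRACT = {
    "endpoint": "/plans/quote",
    "required_fields": {"planId": str, "monthlyPremium": float, "active": bool},
}

def check_contract(contract, producer_response):
    """Return every way the producer's response violates the consumer's
    declared expectations (missing fields, wrong types)."""
    violations = []
    for field, expected in contract["required_fields"].items():
        if field not in producer_response:
            violations.append(f"missing field: {field}")
        elif not isinstance(producer_response[field], expected):
            violations.append(
                f"{field}: expected {expected.__name__}, "
                f"got {type(producer_response[field]).__name__}")
    return violations

if __name__ == "__main__":
    # Response captured from the producer's latest build (or its virtual
    # service). monthlyPremium has quietly become a string: a break this
    # check catches now, instead of during end-to-end testing.
    response = {"planId": "P-100", "monthlyPremium": "249.50", "active": True}
    print(check_contract(CONSUMER_CONTRACT, response))
```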

4) Finally, we need a feedback loop that helps us understand how the application is behaving and performing in production, so we can drive better development and test activities. In today’s highly composite systems, it is very hard for individual development teams to establish the common use cases, data scenarios, performance, scalability, and error conditions encountered by software running in production. The people who build software are often not the people who run it. As we make fast changes without this feedback loop, we run a higher risk of failure. One simple example of this feedback loop is to leverage production-observed transaction profiles and response-time expectations to shape performance-engineering load patterns and response-time trend analysis.
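As one hedged sketch of that last idea (the sample data and transaction names are invented; real inputs would come from production monitoring), the snippet below turns observed transactions into a weighted load pattern and per-transaction response-time targets:

```python
# Hypothetical data shape and transaction names; real records would come
# from production monitoring, not a hard-coded list.
from collections import defaultdict
from statistics import mean

# Stand-in for (transaction, response_ms) records observed in production.
PRODUCTION_SAMPLES = [
    ("login", 2300), ("login", 2100), ("login", 2500),
    ("search_plans", 800), ("search_plans", 950),
    ("submit_application", 4200),
]

def build_load_profile(samples):
    """Turn raw production observations into a weighted load pattern plus
    a p95 response-time target per transaction, ready to drive load tests
    and trend analysis."""
    by_txn = defaultdict(list)
    for name, ms in samples:
        by_txn[name].append(ms)

    profile = {}
    for name, times in by_txn.items():
        ordered = sorted(times)
        p95 = ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]
        profile[name] = {
            "traffic_share": round(len(times) / len(samples), 2),  # load mix
            "mean_ms": round(mean(times)),
            "p95_target_ms": p95,  # baseline for trend alerts
        }
    return profile

if __name__ == "__main__":
    for txn, stats in sorted(build_load_profile(PRODUCTION_SAMPLES).items()):
        print(txn, stats)
```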

I encourage you to learn more about these concepts, which are helping us address some of the most difficult composite enterprise software challenges—not just in healthcare, but in practically every industry.

Written by John Michelsen, CA Leadership

As CTO, John is responsible for technical leadership and innovation at CA. He is also…

Published in Healthcare
