Lately, I’ve been working on a chapter on cloud security and governance in my forthcoming book, Cloud Standards, so I have been more attuned than usual to events in the cloud governance world. Even so, I don’t think I would have missed the flurry of news about a cloud data center closing at the end of February. You can read more about it here. What I find interesting is what this scenario tells us about governance concerns in the cloud.
Cloud governance is a major obstacle to cloud computing
The undercurrent is that certain end customers – in this case, government and commercial – feel more comfortable hosting their critical information on premises than remotely. I’d like to analyze this in more depth, because it is a serious issue for the cloud. Customers’ unwillingness to move mission-critical resources to the cloud demotes cloud computing to a second-tier technology, unsuitable for the most important enterprise purposes, and denies mission-critical computing the cost savings and flexibility that the cloud promises.
What is the barrier?
Cloud data centers surrounded by high walls and razor wire do not look insecure, and public cloud provider sites bristle with security statements. A key to the problem is that traditional IT governance and security assume the infrastructure is under the direct control of the enterprise. Cloud infrastructure is under the control of the cloud provider. If the cloud is not private, the enterprise chain of responsibility does not extend into the cloud to enforce controls there. But a consumer with mission-critical workloads has no choice: it must always assert adequate control to establish governance over its mission-critical resources.
Think about a typical ISO/IEC 27001 scenario in implementing an IT control. 27001 advocates a Deming Plan-Do-Check-Act cycle. In the Plan phase, risks, mitigations, costs, and benefits are evaluated, and someone decides that a control should be implemented. In the Do phase, funds are allocated and a team is designated to execute the control. A simple example might be to declare that a certain set of files is critical and that all access to physical storage and backup media, and all access to file contents, must be controlled and logged. That is a common procedure on premises in the enterprise, and not that hard to establish in a third-party cloud data center surrounded by razor wire. But where is the team that does this when the control is on a file system in a cloud? It could be the enterprise team on IaaS, or the cloud provider’s team on PaaS. All of that has to be worked out.
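To make the "all that has to be worked out" point concrete, here is a minimal sketch in Python of a control specification that forces the question of who implements and who verifies a control. The class and field names are hypothetical illustrations, not part of ISO/IEC 27001 or any real governance tool:

```python
from dataclasses import dataclass
from enum import Enum

class Party(Enum):
    ENTERPRISE = "enterprise"
    PROVIDER = "cloud provider"
    UNASSIGNED = "unassigned"

@dataclass
class Control:
    name: str
    requirement: str
    implementer: Party = Party.UNASSIGNED  # who executes the control (Do phase)
    verifier: Party = Party.UNASSIGNED     # who confirms it works (Check phase)

    def unresolved(self) -> bool:
        # A control is not fully specified until both roles are assigned.
        return Party.UNASSIGNED in (self.implementer, self.verifier)

# The file-access control from the example above. On premises, both roles
# default to the enterprise; in a third-party cloud they must be assigned
# explicitly before the control can be trusted.
file_control = Control(
    name="critical-file-access",
    requirement="All access to critical files and backup media is logged.",
)
print(file_control.unresolved())  # True: nobody owns the control yet
```

On premises, both roles collapse into the enterprise chain of responsibility; the moment a third party is involved, each role becomes a negotiation.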
Then comes the Check phase. Someone must check that the controls are doing what they were intended to do. How do you do that? Within the enterprise, it is probably a set of reports from the operations team and some spot checking by an auditor. In a third-party data center, the answer is not so clear. The enterprise operations team is remote, not there to watch for backup drives left sitting in the lunch room. The third-party personnel are not in the enterprise chain of responsibility. How can the enterprise be certain that third parties are following proper procedures? Files on disks in locked rooms on premises may not need to be encrypted, but they may have to be encrypted in the cloud. Is the third-party provider taking care of it? Or does the enterprise team have to do it? Has that been written into the control specification? The point is not that encryption is onerous, but that new procedures and responsibilities have to be developed so that the right entity knows the job is its to do.
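Part of the Check phase can be automated as an audit over an inventory of critical files. The sketch below is a toy illustration, assuming hypothetical inventory records rather than any real provider’s API; it flags the two violations discussed above, unlogged access and unencrypted storage in the cloud:

```python
# Toy Check-phase audit: scan inventory records for control violations.
def audit(records):
    findings = []
    for rec in records:
        if not rec.get("access_logged", False):
            findings.append(f"{rec['path']}: access is not logged")
        if rec.get("location") == "cloud" and not rec.get("encrypted", False):
            findings.append(f"{rec['path']}: stored in cloud but not encrypted")
    return findings

# Hypothetical inventory: one cloud file missing encryption,
# one on-premises file missing access logging.
inventory = [
    {"path": "/data/ledger.db", "location": "cloud",
     "access_logged": True, "encrypted": False},
    {"path": "/data/hr.csv", "location": "on-premises",
     "access_logged": False, "encrypted": True},
]

for finding in audit(inventory):
    print(finding)
```

An automated check like this still depends on the inventory being honest, which is exactly the trust question the enterprise faces with a remote third party.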
Currently, the tool most often used to assure that controls are maintained is a contract or service-level agreement (SLA) that obligates the provider to protect the consumer. That approach can work, but such a contract is far from a click-through. Negotiating a sufficiently detailed contract is difficult, time-consuming, and expensive, and can wipe out the flexibility that makes the cloud attractive. Particularly in an IaaS implementation, controls can be a mix of provider and consumer implementations with a complex division of responsibility.
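The division of responsibility that an SLA must pin down can be pictured as a lookup table by service model. The entries below are illustrative assumptions for the sake of the sketch, not any provider’s actual terms:

```python
# Hypothetical division of responsibility by service model.
# Anything not listed falls back to "negotiate in the SLA" -- which is
# precisely where the expensive contract work comes from.
RESPONSIBILITY = {
    "IaaS": {"physical security": "provider", "hypervisor": "provider",
             "os hardening": "consumer", "file encryption": "consumer"},
    "PaaS": {"physical security": "provider", "hypervisor": "provider",
             "os hardening": "provider", "file encryption": "provider"},
}

def owner(model: str, control: str) -> str:
    return RESPONSIBILITY[model].get(control, "negotiate in the SLA")

print(owner("IaaS", "file encryption"))  # consumer
print(owner("PaaS", "file encryption"))  # provider
print(owner("IaaS", "key management"))   # negotiate in the SLA
```

The gaps in the table, not the filled-in cells, are what make these contracts slow and costly to negotiate.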
Provider certification is not enough
A provider can certify the services they offer to the consumer, but certifying the consumer’s services implemented on the provider’s cloud is not ordinarily something the provider can do. In fact, the company that decided to close this data center had every major certificate, and you could conclude that if provider certification were enough, the data center would still be open. The problem is that procedures are still unclear and the certificates focus on a single enterprise, not an enterprise and third parties who have taken partial responsibility for mission-critical resources.
This problem will be solved, but it is not solved yet. Groups like the Cloud Security Alliance are making great strides toward untangling it, and I will not be surprised to see the problem put behind us soon, but for the time being, this very public closure is an indication of the current state of affairs.