Do This, Not That: 7 Ways to Think Different in the Cloud


Most large enterprises have spent millions of dollars on their public cloud initiatives over the past few years but have made little progress toward their goals of increased agility and reduced operating costs. The main reason is simple: they approached the cloud as if it were a datacenter. To achieve those lofty goals, enterprises need a complete shift in mindset.

The first thing to remember when moving to the public cloud is that it’s a platform designed for developers, so that they can quickly build solutions and test ideas with minimal upfront cost. Let me repeat: public clouds are made to enable developers to help the business achieve its goals faster and cheaper than ever before. The problem arises when IT leaders treat the cloud like a datacenter and enforce legacy processes, tools, and operating procedures on public cloud infrastructure. This drastically reduces agility, drives up costs, and ultimately defeats the purpose of moving to the cloud in the first place.

I run out of fingers and toes counting the workshops where clients have tried to shift the conversation to anti-cloud topics such as layer 2 and 3 networking, low-level server configuration, and other components that the public cloud abstracts away. There is no one-to-one mapping between what you need in the datacenter and what you need in the cloud, so please, stop thinking about the cloud as a bunch of servers!

We need to shift our thinking from “how do I replicate what I do in the datacenter” to “how do I configure the appropriate platform to enable my developers.” Developers want APIs, not servers.

Takeaway: Successfully adopting the cloud requires focusing on enabling developers – not replicating legacy environments to run in the cloud.

The first thing many companies do when they start their cloud initiative is figure out how to lock it down. Too often, the people who own security and governance spend months (sometimes years) trying to figure out how to apply the controls necessary to meet their security and regulatory requirements. Meanwhile, developers are not allowed to use the platform, or worse, they whip out their credit cards and build unsecured, ungoverned solutions in shadow clouds.

We need to shift our thinking from “how can we prevent developers from screwing up” to “how can we empower developers” by providing security and governance services that are inherited as cloud resources are consumed. To do this, we need to get out of our silos and work collaboratively. Instead of enforcing security and governance controls through rigorous reviews, we need to bake policies and best practices into the SDLC.

Start with continuous integration (CI). Automate the build process and insert code scans that enforce coding best practices, security policies, and cloud architecture best practices. Fail the build if the code does not meet the appropriate policy requirements. Let developers police themselves through automation that relies on policies established by the security, governance, and architecture teams. Set the policies, then get out of the way and let the build process do the enforcement. Developers get fast feedback from the CI process and quickly fix any compliance issues – they have to, or the build will never reach production.
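To make that concrete, here is a minimal sketch of such a CI gate in Python, assuming a build system where any nonzero exit code fails the build. The file name (ci_policy_gate.py), the patterns, and the scanned file types are illustrative assumptions, not a real policy set:

```python
#!/usr/bin/env python3
"""Minimal CI policy gate: scan the repository and fail the build on violations.

Illustrative only. The patterns below are stand-ins for policies that the
security, governance, and architecture teams would actually publish.
"""
import re
import sys
from pathlib import Path

# Hypothetical policies, expressed as (name, regex-that-signals-a-violation).
POLICIES = [
    ("no hard-coded AWS access keys", re.compile(r"AKIA[0-9A-Z]{16}")),
    ("no world-open ingress rules", re.compile(r"0\.0\.0\.0/0")),
    ("no plaintext passwords in config", re.compile(r"password\s*=\s*['\"]\S+['\"]", re.I)),
]

SCANNED_SUFFIXES = {".py", ".json", ".yaml", ".yml", ".tf"}

def scan(repo_root: str) -> int:
    """Walk the repo, print every violation, and return the total count."""
    violations = 0
    for path in Path(repo_root).rglob("*"):
        if not path.is_file() or path.suffix not in SCANNED_SUFFIXES:
            continue
        if path.resolve() == Path(__file__).resolve():
            continue  # don't flag the patterns defined in this gate itself
        text = path.read_text(errors="ignore")
        for name, pattern in POLICIES:
            for match in pattern.finditer(text):
                line = text[: match.start()].count("\n") + 1
                print(f"POLICY VIOLATION [{name}] {path}:{line}")
                violations += 1
    return violations

if __name__ == "__main__":
    count = scan(sys.argv[1] if len(sys.argv) > 1 else ".")
    # A nonzero exit fails the CI build, giving developers immediate feedback.
    sys.exit(1 if count else 0)
```

Wire something like this in as an early build step and developers see violations in minutes, not in a review meeting weeks later.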

Once applications are deployed, run continuous monitoring tools that look for violations or vulnerabilities. Here’s a novel idea: replace meetings with tools that provide real-time feedback.

Takeaway: Empower developers to create compliant code from the start using automated security and governance policies that are baked into the build process.

The typical software development life cycle is made up of a series of handoffs and reviews. Before code can move from one stage to the next, various process owners call meetings to review the project and ensure that company policies and controls are being enforced. If we’re trying to empower developers instead of controlling them, this is not the best approach.

There are two important factors that radically change the way we should think about governance. The first is Infrastructure as Code. Now that infrastructure can be represented by code and stored in a code repository, we can code much of the policy and controls along with the infrastructure. The second is Pipeline as Code. In addition to the infrastructure, now all of the build and deployment artifacts, policies, and procedures can be represented in code as well.

This enables us to enforce many more policies and controls through automation, which reduces the need for so many review gate meetings. Instead, turn your meetings into post-mortems where the artifacts are reviewed to ensure the automation remains in compliance with current policies.
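As an illustration, a pipeline-as-code definition can be as simple as a script versioned next to the application. This toy Python driver is a sketch, not a real CI product; the stage commands (pytest, docker, and the hypothetical ci_policy_gate.py from earlier) are placeholders:

```python
"""Toy pipeline-as-code driver: the build and deploy stages are ordinary
code, versioned in the same repository as the application. The stage
commands are illustrative placeholders, not prescriptions."""
import subprocess
import sys

# The pipeline definition is just data in the repo; changing it is a pull request.
PIPELINE = [
    ("unit tests",  ["pytest", "-q"]),
    ("policy gate", ["python", "ci_policy_gate.py", "."]),  # hypothetical gate from above
    ("build image", ["docker", "build", "-t", "myapp:latest", "."]),
]

def run_pipeline() -> None:
    for name, cmd in PIPELINE:
        print(f"--- stage: {name} ---")
        if subprocess.run(cmd).returncode != 0:
            # Fail fast so later stages never see a non-compliant build.
            sys.exit(f"stage '{name}' failed; stopping pipeline")

if __name__ == "__main__":
    run_pipeline()
```

Because the pipeline definition is just code in the repository, changing a stage is a pull request – the review happens in version control, not in a meeting.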

Takeaway: Implement infrastructure and pipeline as code to remove bottlenecks and unlock the cloud’s true potential for speed.

Enterprises like to create infrastructure blueprints that enforce corporate policies and controls. The challenge is that there may be so many permutations of these controls that it’s almost impossible to manage all of the blueprints. For example, a common blueprint is an Apache web server. Problems arise when different development teams have different requirements for the configuration of that Apache server. Before you know it, multiple Apache blueprints are created, which results in much higher maintenance costs and confusion over which blueprint to use.

A better solution is to create blueprints that only contain what is common (or default) and abstract everything else as a configuration artifact that is stored in a code repository. This allows developers to tailor the environment to their needs without having to go through a painful ticketing process to make minuscule changes to their blueprint. It also prevents the blueprint owners from having to create an infinite number of blueprints to meet the dynamic needs of all the development teams.
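A rough sketch of that thin-blueprint pattern follows; every name and setting here is invented for illustration:

```python
"""Sketch of a thin blueprint: the blueprint holds only the common defaults,
and each team keeps a small override file in its own repository. Every name
and setting here is invented for illustration."""

# Owned by the platform team: the common, default configuration.
BASE_BLUEPRINT = {
    "image": "apache-hardened-2.4",
    "instance_type": "m5.large",
    "open_ports": [443],
    "logging": "central-syslog",
}

# Owned by a development team and versioned alongside their application.
team_overrides = {
    "instance_type": "m5.xlarge",  # heavier workload than the default
    "open_ports": [443, 8443],     # an extra TLS listener
}

def render(base: dict, overrides: dict) -> dict:
    """One blueprint, many configurations: overlay team settings on the defaults."""
    merged = dict(base)
    merged.update(overrides)
    return merged

print(render(BASE_BLUEPRINT, team_overrides))
```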

Some of you may be cringing and asking, “How can I prevent developers from improperly configuring the blueprint?” For example, what will stop a developer from opening a port that shouldn’t be opened? The answer is automated policy enforcement. This is where continuous delivery (CD) comes into play. CD automates the creation of the virtual infrastructure and deploys the last good build from CI. It is in the CD process that policies are enforced to ensure the developer’s configuration is not in violation.
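Continuing the sketch above, a CD-time check against the rendered configuration might look like this; the allowed ports are a hypothetical policy, not a recommendation:

```python
"""Sketch of the CD-time policy check described above: validate the rendered
configuration before any infrastructure is created. The allowed ports are a
hypothetical policy set by the security team."""

ALLOWED_PORTS = {443, 8443}

def enforce(config: dict) -> list:
    """Return a list of violations; an empty list means the deploy may proceed."""
    return [
        f"port {port} is not permitted by policy"
        for port in config.get("open_ports", [])
        if port not in ALLOWED_PORTS
    ]

rendered = {"open_ports": [443, 22]}  # a developer-tailored configuration
problems = enforce(rendered)
if problems:
    # In a real pipeline this would abort the CD run before anything is provisioned.
    raise SystemExit("deployment blocked: " + "; ".join(problems))
```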

Takeaway: Empower developers to configure their requirements on top of standard blueprints and enforce policies via automation.

I have seen companies spend well over a year trying to implement hybrid clouds, which require a high level of maturity and sophistication to get right. Before anything can be deployed into production, IT insists on delivering a full service catalog. The irony is that most CIOs tell me their biggest cloud driver is agility. If agility is the driver and your customers are developers (yes, your own team), then focus first on developer agility, not on IT’s desire to build the perfect hybrid cloud solution.

A better approach is to start with a public cloud endpoint and focus on applications that can run in the public cloud from day one. This provides three key benefits. First, it allows developers to start building (or migrating) in the cloud, which helps them learn. Second, the business starts seeing a return on its investment. Third, once we have real experience in the cloud, we can start architecting a more realistic hybrid cloud strategy for the business. Focus on the highest-priority services as opposed to building an entire service catalog that won’t be used.

Takeaway: Focus on helping the customer (developers) as opposed to meeting all of IT’s needs out of the gate. Learn and iterate.

If I had a dollar for every time someone told me what can’t be done in the cloud, I’d be a very rich man. Any time someone says you can’t do something in the cloud, my immediate response is “Why not?”

Usually the reason we “can’t” is a legacy policy, a misconception about cloud security, or the lack of support for an old tool or process that is a poor fit for the cloud anyway. We need to coach people to focus on the “what” and not the “how.”

A recent customer conversation highlights the disconnect:

Customer: “You can’t put app XYZ in the cloud because the cloud is too expensive.”

Me: “How is it too expensive?”

Customer: “We use high-IOPS disks, which cost a ton in the cloud.”

Me: “Why are you using high-IOPS disks?”

Customer: “We run this weekly process that ingests various unstructured data and iterates through a series of temp tables until we get to the final data set.”

Me: “If you used a NoSQL database instead of a relational database, you would not need to iterate through temp tables, and you would not need high-IOPS disks. In fact, you would not need twice the infrastructure for disaster recovery either. Even better, since you only run this once a week, you could use spot instances and pay only for the time it takes to process the data – which should drop from 24 hours to less than an hour.”
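For illustration only, here is the back-of-the-envelope arithmetic behind that argument, with entirely hypothetical prices – the point is the shape of the comparison, not the numbers:

```python
"""Back-of-the-envelope version of the argument above. All prices and
durations are hypothetical; the point is the shape of the comparison."""

# Legacy design: always-on instance with high-IOPS disks, duplicated for DR.
on_demand_hourly = 2.00                        # $/hr, hypothetical instance + disk
legacy_weekly = on_demand_hourly * 24 * 7 * 2  # x2 for the disaster recovery copy

# Cloud-native design: a weekly batch job on spot instances, no temp tables.
spot_hourly = 0.60                             # $/hr, hypothetical spot price
job_hours = 1                                  # the 24-hour job shrinks to ~1 hour
spot_weekly = spot_hourly * job_hours

print(f"legacy: ${legacy_weekly:,.2f}/week vs spot: ${spot_weekly:,.2f}/week")
```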

Takeaway: By focusing on “why can’t I” instead of “why I can’t,” you solve problems more effectively in the cloud.

Another common mistake is relying on legacy tools. When moving to the cloud, I strongly recommend rationalizing the application stack. What works great on-premises might be a huge showstopper in the cloud. When we moved from mainframe to client-server we abandoned many of the previous technologies, and we should do the same as we move to the cloud. In addition, we should not stay married to the same vendors just because they performed well on-premises. For example, just because you are an Oracle shop or an HP shop or an EMC shop does not mean you should immediately default to their “cloud” solutions. When you evaluate your tools, make sure you consider new solutions built specifically for the cloud – in addition to whatever your legacy vendor is offering. In many cases, the new technologies are the right selection in the long run.

Takeaway: Just because it’s not broken doesn’t mean it can’t be replaced with something better.

The public cloud is not just another datacenter, so it shouldn’t be treated like one. A legacy mindset limits the cloud’s capabilities, increases complexity, and drives up costs – all of which slows adoption and prevents enterprises from achieving their cloud objectives. To successfully leverage the cloud, build a new operating model from the ground up, but only after your architects have shifted their mindset from datacenter thinking to public cloud thinking. Otherwise, your public cloud will just turn into another expensive datacenter.

This article originally appeared on Cloud Technology Partners under the title Do This, Not That: 7 Ways to Think Different in the Cloud.