Generally, there are two schools of thought about how cloud infrastructures should be built and utilized: big bang and slow burn.

Breaking down the big bang and slow burn

The big bang parallels the "build it and they will come" school of thought where the entire data infrastructure needs to be built before any useful value is derived from it. This is how a power generation plant is built — it is proactive (see later definition). 

On the other hand, the slow burn refers to the incremental method where data systems are slowly revised and switched over, one at a time, so that a continuous value stream of capabilities always exists. This is how traditional infrastructures are built now; this is reactive (again, see later definition).

The big bang gets you to the value faster, but some of the biggest complaints coming from the small and medium business (SMB) crowd about that methodology of building and utilizing cloud infrastructures are that, no matter how great the actual business value might be:

  1. it is simply too expensive upfront (i.e. the capital expense (CAPEX) required);
  2. they are mistrustful about being able to recoup their CAPEX costs by charging internal customers for utilization; and/or
  3. with no clear asset owner (more on this later), they doubt their ability to fund the continuous CAPEX requirements for an ever-expanding infrastructure.

These are all valid concerns, and not just for the SMB market, but for anyone who is thinking about transitioning to a cloud infrastructure. More important, though, is that it is typically not the technology differences that are the sticking point, but the funding requirements (CAPEX and operating expense, or OPEX) and the fundamental operational changes required that slow or stop the transition.

The CAPEX and OPEX funding issues are causing the biggest headaches in the industry, but first, let me explain a major difference between a traditionally managed infrastructure and a cloud infrastructure: a cloud infrastructure is proactive, while a traditional infrastructure is reactive.

Traditional infrastructure as reactive

Reactive, the traditional infrastructure process, means that as individual business requirements are identified (i.e. the need for purpose-built applications), only those assets relevant to those requirements are purchased and added to the infrastructure.

There are very clear process chains for how those assets are identified, specified and acquired, with equally clear supporting processes for how they are then tracked, handled and depreciated within the financial systems as they age and are eventually decommissioned.
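
To make that asset lifecycle concrete, here is a minimal sketch (my own illustration; the asset, figures and straight-line schedule are hypothetical, not taken from any particular financial system) of how a purpose-built asset might be tracked and depreciated as it ages:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """A purpose-built asset tied to a single application (hypothetical model)."""
    name: str
    purchase_cost: float      # CAPEX at acquisition
    salvage_value: float      # expected value at decommissioning
    useful_life_years: int    # length of the depreciation schedule

    def annual_depreciation(self) -> float:
        # Straight-line depreciation: spread the cost evenly over the useful life.
        return (self.purchase_cost - self.salvage_value) / self.useful_life_years

    def book_value(self, years_in_service: int) -> float:
        # Book value declines each year until the asset is fully depreciated.
        years = min(years_in_service, self.useful_life_years)
        return self.purchase_cost - years * self.annual_depreciation()

# Example: a database server bought for one specific application.
server = Asset("crm-db-server", purchase_cost=25_000, salvage_value=1_000, useful_life_years=5)
print(server.annual_depreciation())   # 4800.0 per year
print(server.book_value(3))           # 10600.0 after three years in service
```

In the reactive model, every such asset maps cleanly back to the application and business requirement that justified its purchase, which is what keeps these tracking and depreciation processes so clear-cut.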

Cloud infrastructure as proactive

Proactive, on the other hand, means that there is an infrastructure-spanning, generic set of capabilities (e.g. processing, performance, availability, security and capacity) that is defined up front, to be delivered via a service delivery framework. The infrastructure is then built to deliver those capabilities, regardless of the underlying business requirements or any specific need. There is also a different process for how those assets are tracked, handled and depreciated within the financial systems as they age and are eventually decommissioned, because there is no clear owner: the asset is decoupled from any application being run on it.
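
As a rough illustration of that idea, the sketch below (the capability names come from the list above, but the structure and numbers are entirely hypothetical) treats the capability set as a pooled service catalog that can answer requests without knowing what they will be used for:

```python
# Hypothetical capability catalog for a proactive (cloud) infrastructure.
# Each capability is sized for the infrastructure as a whole, independent of
# any specific application or business requirement. Availability, security
# and other capabilities would be declared the same way.
capability_catalog = {
    "processing":  {"unit": "vCPU", "pooled_capacity": 2_000},
    "capacity":    {"unit": "TB",   "pooled_capacity": 500},
    "performance": {"unit": "IOPS", "pooled_capacity": 250_000},
}

def can_serve(capability: str, requested: float) -> bool:
    """Check a request against the pooled capability, without asking what it is for."""
    entry = capability_catalog.get(capability)
    return entry is not None and requested <= entry["pooled_capacity"]

print(can_serve("processing", 128))   # True: the request fits within the pooled capacity
print(can_serve("capacity", 900))     # False: more storage than the pool was sized for
```

The point is that sizing and funding happen at the catalog level, not per application, which is exactly why the ownership question becomes murky.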

Power generation

Another way of saying this is that the relevant requirements for the infrastructure being built are much the same as for a power generation plant being built:

  • The most relevant requirement is the total future power output needed (or, in our case, the capabilities needed); and
  • the least relevant requirement is what that power (those capabilities) would be used for.

So, as you can see, the proactive (or cloud) infrastructure is built first (via CAPEX, like power generation plants) and the delivery of the services is then cost-defined (via OPEX). Like an energy utility's customers, the business users pay for the system as it is utilized (via a chargeback mechanism) through an aggregation of fractional asset costs for network, storage and processing capacity usage, as well as administrative overhead.
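
To show how such a chargeback might work in practice, here is a minimal sketch (the rates, overhead percentage and usage figures are invented for illustration) that aggregates fractional network, storage and processing costs plus an administrative overhead into a single internal charge:

```python
# Hypothetical monthly chargeback: each business user pays for the fraction
# of the shared network, storage and processing capacity they actually used,
# plus a share of administrative overhead.
RATES = {
    "network_gb":  0.02,   # $ per GB transferred
    "storage_gb":  0.10,   # $ per GB-month stored
    "compute_hrs": 0.06,   # $ per vCPU-hour
}
ADMIN_OVERHEAD = 0.15      # 15% added to cover administration

def monthly_chargeback(usage: dict) -> float:
    """Aggregate fractional asset costs into a single internal charge."""
    base = sum(RATES[item] * amount for item, amount in usage.items())
    return round(base * (1 + ADMIN_OVERHEAD), 2)

# Example: one department's usage for the month.
usage = {"network_gb": 1_200, "storage_gb": 5_000, "compute_hrs": 8_000}
print(monthly_chargeback(usage))  # 1154.6
```

The exact rate card would differ for every organization; what matters is that the charge is derived from utilization of shared capacity, not from ownership of specific assets.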

The major problem with this, from the SMB perspective, is that their entire budgeting process is built on the idea that the infrastructure is built out based on specific business requirements tied to purpose-built applications (and associated assets).

SMB process flow through procurement

They follow the traditional process flow of business need > business requirement > technical requirement > equipment specification > project ID > procurement for buying the assets they need to meet the application's performance and capacity requirements.
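
As a simple way to picture that chain, the sketch below (purely illustrative) models the reactive flow as an ordered sequence of stages that every asset purchase must walk through, one step at a time:

```python
from enum import Enum, auto
from typing import Optional

class ProcurementStage(Enum):
    """The reactive, application-driven flow described above (illustrative model)."""
    BUSINESS_NEED = auto()
    BUSINESS_REQUIREMENT = auto()
    TECHNICAL_REQUIREMENT = auto()
    EQUIPMENT_SPECIFICATION = auto()
    PROJECT_ID = auto()
    PROCUREMENT = auto()

def next_stage(stage: ProcurementStage) -> Optional[ProcurementStage]:
    """Each purchase advances one stage at a time; there is no shortcut to buying capacity ahead of need."""
    stages = list(ProcurementStage)
    idx = stages.index(stage)
    return stages[idx + 1] if idx + 1 < len(stages) else None

print(next_stage(ProcurementStage.EQUIPMENT_SPECIFICATION).name)  # PROJECT_ID
```

Every stage assumes a specific application driving the purchase, which is why a proactive, application-agnostic build-out has no place to enter this flow.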

They simply have no way of building the infrastructure, or even portions of it, proactively, because their internal processes (business, financial, etc.) are not aligned to an upfront investment that spans multiple business initiatives or applications. In their operational and business process systems, there is no way to differentiate who pays for what on the front end… and divvying up those costs on the back end is equally murky.

Many SMB organizations are moving ahead with their transition plans toward a cloud infrastructure, but are finding that, to be successful, they absolutely must address their financial and operational (business process) systems in concert with designing the actual technological infrastructure—not one before the other, but in parallel.

An example of the chicken and the egg

The chicken and the egg analogy is as good as any here because the operational and financial processes (the chickens) will by necessity mirror the capabilities designed into the cloud infrastructure (the eggs), but the cloud infrastructure cannot be adequately designed without defining the necessary operational and business processes. 

See what I mean?

Considering BPI

It would be helpful if all of the successful process steps for how to effectively marry the operational and business processes with the technology infrastructure design were contained within a single source (online, in a book, etc.), but they are not and cannot be, because the process steps are different, sometimes vastly different, for each business. This is why Business Process Improvement (BPI) services exist.

BPI is a process in which business leaders use various methodologies to analyze their procedures to identify areas where they can improve accuracy, effectiveness and/or efficiency and then redesign those processes to realize the improvements. It works by identifying the operations or employee skills that could be improved to encourage smoother procedures, more efficient workflow and overall business growth. 

In the case of transitioning to a cloud infrastructure, those business processes would be rewritten to support the new model rather than simply re-making the old processes better, but that's a topic for another time! Stay tuned for more on infrastructure transformation.
