Cloudonomics: A Rigorous Approach to Cloud Benefit Quantification


Posted: November 15, 2016 | By: Joe Weinman

Cloud computing and cloud services consistently place at the top of surveys ranking IT trends and CIO interests. This is largely due to the position of the cloud at the nexus of macro trends such as social media, the Internet, Web 2.0, and mobility and broadband wireless, as well as the opportunity for benefits such as reduced cost and enhanced agility.

Traditional approaches to assessing cloud benefits largely fall into two categories: vague yet enticing words, such as “agility,” and empirical data from case studies, which may or may not apply to the general case.

Cloudonomics—a term and discipline founded by the author (Weinman, 2008)—seeks to provide a rigorous foundation based on calculus, statistics, trigonometry, system dynamics, economics, and computational complexity theory, which can be used to interpret empirical results. We will provide an overview of these results together with references to more detailed analyses.

Defining the Cloud, from an Economic Viewpoint

Many definitions of Cloud Computing exist. Perhaps the most widely accepted is the one developed by the National Institute of Standards and Technology, now stable at version 15 (Mell and Grance, 2011):

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

This cloud model promotes availability and is composed of five essential characteristics: … on-demand self-service … broad network access … resource pooling … rapid elasticity … [and] measured service.

This is an excellent, broadly applicable definition. From an economic viewpoint, however, we can use a semantically equivalent mnemonic—CLOUD (Chan, 2009)—which can help surface economic benefits. A CLOUD is a service that has the following attributes:

  • Common Infrastructure—i.e., pooled, standardized resources, with benefits generated by statistical multiplexing.
  • Location-independence—i.e., ubiquitous availability meeting performance requirements, with benefits deriving from latency reduction and user experience enhancement.
  • Online connectivity—an enabler of other attributes ensuring service access. Costs and performance impacts of network architectures can be quantified using traditional methods.
  • Utility pricing—i.e., usage-sensitive or pay-per-use pricing, with benefits applying in environments with variable demand levels.
  • on-Demand Resources—i.e., scalable, elastic resources provisioned and deprovisioned without delay or costs associated with change.

We shall provide an overview of results concerning these benefits and additional related topics. The results are often counterintuitive. The various layers—Infrastructure as a Service, Platform as a Service, and Software as a Service—all have different benefit drivers. Here, we shall focus on Infrastructure as a Service, which is a foundation for many other benefits. After all, a salient difference between platform services and traditional Service-Oriented Architectures and Integrated Development Environments ultimately comes down to infrastructure resources, and a salient difference between licensed software and SaaS ultimately resides in infrastructure costs and flexibility, including pricing and elasticity. Thus, we shall focus on infrastructure.

The Value of Common Infrastructure

What is the value of consolidating demands from independent sources into a common pool rather than partitioning them?

The traditional answer—“economies of scale”—certainly has some validity. Overhead costs can be reduced, and buyer power enhanced through volume purchasing.

However, another key value of consolidation is what might be called the “statistics of scale” (Weinman, 2008). Under the right conditions, multiplexing demand can generate benefits in terms of higher utilization and thus lower cost per delivered resource—with the cost of unutilized resources factored in—than unconsolidated workloads, for infrastructure built to peak requirements. For infrastructure built to less than peak, demand multiplexing can reduce unserved demand, thereby reducing the penalty function associated with it, which may represent either lost revenue or a Service-Level Agreement violation payout.

A useful—if imperfect—measure of “smoothness” or “flatness” is the coefficient of variation cv (not to be confused with either the variance σ² or the correlation coefficient). This coefficient is defined as the non-negative ratio of the standard deviation σ to the absolute value of the mean |µ|. The larger the mean for a given standard deviation, or the smaller the standard deviation for a given mean, the “smoother” the curve is.
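As a concrete illustration, the brief Python sketch below computes the coefficient of variation for two hypothetical demand series with the same mean but very different variability (the specific numbers are invented purely for illustration):

```python
import statistics

def coefficient_of_variation(demand):
    """cv = standard deviation divided by the absolute value of the mean."""
    return statistics.pstdev(demand) / abs(statistics.fmean(demand))

# Hypothetical hourly demand; both series have a mean of 100 units.
flat_demand  = [95, 100, 105, 100, 95, 105, 100, 100]
spiky_demand = [10, 10, 10, 10, 10, 10, 10, 730]

print(coefficient_of_variation(flat_demand))   # ~0.035, a "smooth" curve
print(coefficient_of_variation(spiky_demand))  # ~2.4, a "spiky" curve
```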

This smoothness is important, because a facility with fixed assets servicing highly variable demand will achieve lower utilization than a similar one servicing relatively flat demand. To put it another way, one with low utilization has excess assets, whose cost—whether leasing or depreciation—must be carried by revenue-generating ones.

With that as background, the beauty of the cloud comes into focus: multiplexing demand from multiple sources reduces the coefficient of variation (Weinman, 2011d).

Specifically, let X₁, X₂, …, Xₙ be n independent random variables with identical standard deviation σ and positive mean µ, and thus each with coefficient of variation cv(X). Note that they need not have the same distribution: one may be Normal, one may be exponential, and so forth.

Since under these conditions the mean of the sum is the sum of the means, the mean of the aggregate demand X₁ + X₂ + … + Xₙ is nµ. Since the variance of the sum is the sum of the variances, the variance of the aggregate demand is nσ², and therefore the standard deviation is σ√n. Thus, the coefficient of variation of the aggregate is σ√n / (nµ) = (1/√n)(σ/µ). In other words, adding n independent demands together reduces the coefficient of variation to 1/√n of its unaggregated value.
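A short Monte Carlo sketch makes this concrete. It assumes, purely for illustration, that each workload's demand is exponentially distributed with mean 1 (so each individual workload has cv = 1); the simulated coefficient of variation of the aggregate tracks the predicted 1/√n:

```python
import math
import random
import statistics

def cv(values):
    """Coefficient of variation: standard deviation over |mean|."""
    return statistics.pstdev(values) / abs(statistics.fmean(values))

random.seed(1)
TRIALS = 20_000

for n in (1, 4, 16, 100):
    # Each trial draws n independent exponential demands and records their sum.
    aggregate = [sum(random.expovariate(1.0) for _ in range(n)) for _ in range(TRIALS)]
    print(f"n={n:4d}  simulated cv={cv(aggregate):.3f}  predicted 1/sqrt(n)={1 / math.sqrt(n):.3f}")
```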

Thus, as n grows larger, the penalty function associated with insufficient or excess resources grows relatively smaller.

Importantly, it does not take an enormous number of such demands to approximate “perfection.” Because the penalty is proportional to the coefficient of variation, which shrinks as 1/√n, aggregating 100 workloads brings the penalty to within 10% (1/√100) of that of an infinitely large cloud provider, and aggregating 400 workloads brings it to within 5% (1/√400).

It must be noted, however, that the assumption of workload independence is a key one. There are two other possibilities worth considering. One is that workloads are not independent, but are negatively correlated or even complementary. If the two demands are X and 1 − X, say, their sum is of course merely the “random” variable 1, which has a standard deviation of 0. Such a scenario is not that farfetched: appropriate selection of customer segments can lead to a virtuous situation. In the early days of AC electric power, Samuel Insull targeted consumers who needed lighting in the morning and at night, trolley operators whose peak electricity use was at rush hour, and factories, thereby generating relatively flat aggregate demand (Carr, 2008).

The other possibility is that of perfectly correlated demand. Specifically, if each of the n demands is characterized by X, then the aggregate demand is nX and the variance of the sum is n²σ²(X). Thus the standard deviation is nσ(X) and the mean is nµ(X). The coefficient of variation of the aggregate is therefore, unhelpfully, nσ(X) / (nµ(X)) = σ(X)/µ(X) = cv(X). In other words, the coefficient of variation of the aggregate is the same as that of any of its components. A weaker condition, where we merely have at least one simultaneous peak, is equally problematic from the perspective of attempting to increase utilization and thus derive favorable economics.
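The contrast between the complementary and the perfectly correlated cases can be seen in another short simulation sketch; the uniform demand distribution used here is an arbitrary illustrative assumption:

```python
import random
import statistics

def cv(values):
    """Coefficient of variation: standard deviation over |mean|."""
    return statistics.pstdev(values) / abs(statistics.fmean(values))

random.seed(1)
x = [random.random() for _ in range(20_000)]   # one uniform demand; cv ~ 0.58

complementary = [xi + (1 - xi) for xi in x]    # X + (1 - X): the constant 1
correlated    = [3 * xi for xi in x]           # three perfectly correlated copies, 3X

print(cv(x))              # cv of a single demand, ~0.58
print(cv(complementary))  # 0.0: aggregation removes all variability
print(cv(correlated))     # ~0.58: aggregation buys nothing
```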

Two lessons may be drawn from this. First, contrary to the proposition that only a few large cloud providers will survive, these statistical arguments suggest that for correlated demand or simultaneous peaks, midsize providers don’t generate much benefit relative to “private” implementations, but then again neither do large ones, and for independent demands a midsize provider can achieve statistical economies that are quite close to those of an infinitely large provider.

Second, if a “community cloud” is intended to aggregate demand from correlated components, it will not generate any benefits due to the statistics of scale, although it certainly may generate economies of scale, should there be any. The data on whether there truly are economies of scale for large cloud providers relative to large enterprises are mixed. After all, today’s ultramegadatacenter-based cloud providers use the same pods that are available to any enterprise or the government, and probably not at a substantially different discount. Large cloud providers may have benefits in terms of locating near cheap power, but this isn’t an economy of scale; it is, well, an economy of locating near cheap power, equally available to anyone else. While early entrants have advantages in automation, these differences are being eroded as third parties offer management and administration, virtualization, provisioning, billing, portal, and other software on either an open-source or competitive-cost basis.
