Surveying the State of IT for the Enterprise

In This Chapter

  • Discovering the key trends that affect the way IT does business
  • Learning why flash storage and hyperconverged infrastructure have revolutionized the datacenter
  • Finding out how the public cloud can enable IT but creates new challenges to overcome

The challenges facing today's IT function are both familiar and strange. IT faces the same kinds of resource constraints it's always had, but new and different solutions now provide ways to address them. Infrastructure is still a challenge, but recent innovations offer a path beyond the rough spots. This chapter explores the current state of enterprise IT.

Trends Shaping IT Infrastructure Today

In the past decade, IT infrastructure has undergone a revolution stemming from a number of evolutions across various resource silos. These changes have led IT to where it is today, and they set the stage for the fundamental transformation the industry is poised to undergo over the next few years.

From storage to servers to software, no area of the datacenter has been spared.

Flash storage

Not so long ago in a datacenter not so far away, solving storage performance issues was about as likely as a Stormtrooper hitting a target. Storage administrators were often witnessed throwing hardware at a problem. They had to add spindles — more spinning disks — to imbue their storage environments with sufficient IOPS to meet workload demand.

And then a funny thing happened on the way to Tatooine. Flash storage started to become a viable option for the enterprise. As this solid state storage became more popular, vendors began to work in earnest on ways to address the two major issues with the technology: cost and longevity.

In recent years, the cost of NAND-based flash storage has plummeted by double-digit percentages while capacity has increased. Today, in a standard disk-drive form factor, you can buy an SSD with more capacity than a spinning disk. Of course, that 16TB behemoth costs far more than the equivalent spinning-disk capacity, but it also means all-flash systems can achieve better capacity density than disk.

Just as important as the ability to leverage flash is the ability to get at data quickly. This is where data locality comes into play. The closer that data is to processors and RAM, the more quickly that data can be retrieved and consumed. This is one area in which even all-flash storage arrays can be challenged. Storage in such environments sits in a separate silo and must traverse the storage fabric, which adds latency to the computation. The farther away from your application the data lives, the greater the latency and the lower the throughput. As you consider flash or hybrid storage solutions for your datacenter, keep this point in mind. A solution that enables data storage right in the server chassis will enjoy far better overall performance than solutions that require data to traverse a slow network.
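
If you like numbers, here's a quick, hypothetical back-of-the-envelope sketch in Python. The latency figures are purely illustrative assumptions, not measurements of any particular product, but they show how a fabric hop on every read adds up.

    # Back-of-the-envelope look at data locality (illustrative numbers only;
    # real latencies vary widely by device, fabric, and workload).

    LOCAL_FLASH_US = 100   # assumed read latency for flash inside the server chassis (microseconds)
    FABRIC_HOP_US = 150    # assumed extra round-trip cost of traversing a storage fabric
    READS_PER_TXN = 20     # assumed number of storage reads per application transaction

    local_txn_us = READS_PER_TXN * LOCAL_FLASH_US
    remote_txn_us = READS_PER_TXN * (LOCAL_FLASH_US + FABRIC_HOP_US)

    print(f"Local flash:   {local_txn_us} us per transaction")
    print(f"Across fabric: {remote_txn_us} us per transaction "
          f"({remote_txn_us / local_txn_us:.1f}x slower)")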

Many people today still worry about flash "wear" causing drives to fail prematurely. As flash has become a staple of the datacenter, however, the wear concern has become a nonissue for most organizations. Drive manufacturers and array vendors have implemented all manner of mechanisms intended to keep drives alive. From wear leveling, in which the flash controller spreads writes so that the same cells aren't pounded over and over, to write avoidance techniques such as deduplication and compression, which reduce the need to write data in the first place, the question of whether a flash drive will fail during its usable life has been practically solved.
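
To see how write avoidance works in miniature, here's a toy, hypothetical sketch of content-addressed deduplication in Python: blocks with identical content are physically written once and merely referenced afterward, so duplicate data never wears the flash a second time. Real arrays do this inline and with far more sophistication.

    import hashlib

    class DedupStore:
        """Toy block store: identical blocks are written once, then referenced."""

        def __init__(self):
            self.blocks = {}      # fingerprint -> block data (the only "physical" writes)
            self.references = []  # logical view: ordered list of fingerprints

        def write(self, block: bytes) -> bool:
            """Store a block; return True only if a physical write was needed."""
            fingerprint = hashlib.sha256(block).hexdigest()
            is_new = fingerprint not in self.blocks
            if is_new:
                self.blocks[fingerprint] = block
            self.references.append(fingerprint)
            return is_new

    store = DedupStore()
    physical = sum(store.write(b) for b in [b"alpha", b"beta", b"alpha", b"alpha"])
    print(f"4 logical writes, {physical} physical writes")  # 4 logical writes, 2 physical writes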

The short version is this: Flash is here. It isn't going anywhere. It's fast; it's durable and dependable. And it's becoming more affordable every month.

Software-defined functionality

At the same time that flash storage has become common in the datacenter, Intel has continued to release processors with massive numbers of cores just begging to be set free. All that computing power is being wrangled into submission by powerful software tools, which are steadily replacing functions that used to be handled solely in hardware.

Why is this change important? Customized hardware is expensive, particularly when it performs a task that software running on a commodity CPU can handle just as well. ASICs and FPGAs require occasional respinning (updating) to remain viable, and over time that approach becomes costly when the same functionality can easily be delivered as a pure software component.

Today, we're seeing the rise of what has become known as the software-defined datacenter (SDDC), a phenomenon enabled by the commoditization of hardware. SDDCs allow far greater flexibility in datacenter configuration while also helping to reduce overall costs.

Hardware commoditization

Remember when I mentioned that Intel processor in the previous section? Well, that company is at the core of another revolution in the datacenter: hardware commoditization.

These days, you find all sorts of storage arrays that look practically identical to servers, and there's a good reason for that: They are servers. Rather than build custom hardware and spend all their time on hardware engineering, vendors in the resource-specific silos of storage and networking are increasingly turning to off-the-shelf servers and components to power their solutions. In essence, many of today's fastest-growing storage and networking companies are really software companies. They buy existing hardware that makes sense for their solution and build their software around it. Because that hardware is standards-based, the storage or networking company can easily swap components as necessary, which goes a long way toward reducing cost and complexity.

Hypervisor commoditization and the emergence of containers

Back in the early days of virtualization, there was one company — VMware — to rule it all. Today, although VMware is still the leader in the hypervisor space overall, other commercial and open source hypervisor offerings are eating away at VMware's leadership position.

On a feature-by-feature basis, modern hypervisors generally have all the features that organizations really need in order to succeed. Sure, some have some extras here and there, but the capabilities — such as workload migration and high availability mechanisms — that initially drove virtualization adoption are common across almost any hypervisor choice.

Feature-rich hypervisors have led to a scenario in which the hypervisor can be considered a commodity for many organizations. The necessary features are guaranteed to be there, so switching to different hypervisors — such as Hyper-V, KVM, or a variant — becomes feasible.
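
As a small illustration of how interchangeable the management layer has become, the following sketch uses the libvirt Python bindings, which present a single API across KVM, Xen, and other hypervisors; changing hypervisors is largely a matter of changing the connection URI. This assumes the libvirt-python package and a running libvirt daemon, and it's a sketch rather than anything production-ready.

    import libvirt  # libvirt-python: one management API across several hypervisors

    # The connection URI is the main hypervisor-specific detail.
    # "qemu:///system" targets KVM/QEMU; other drivers (Xen, and so on) use other URIs.
    conn = libvirt.open("qemu:///system")

    for domain in conn.listAllDomains():
        state, _reason = domain.state()
        running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
        print(f"{domain.name()}: {running}")

    conn.close()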

At the same time, containers are emerging as an alternative abstraction technology that allows applications to be developed, tested, and deployed quickly and easily. It's important for the infrastructure platform of the future to support containerized applications.
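
For a taste of why developers love the model, here's a minimal sketch that launches a throwaway container using the Docker SDK for Python. It assumes the docker package is installed and a Docker daemon is running locally, and the image is just an example.

    import docker  # Docker SDK for Python

    client = docker.from_env()  # connect to the local Docker daemon

    # Run a short-lived container from a small public image and capture its output.
    output = client.containers.run(
        "alpine:latest", ["echo", "hello from a container"], remove=True
    )
    print(output.decode().strip())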

The (hyper)converged revolution of compute and storage

Thanks to the rise of flash storage and the commoditization of the compute and storage layers, recent years have seen the tremendous rise of hyperconverged infrastructure. In such an environment, storage and compute — servers — are collapsed into a single unit of infrastructure, effectively eliminating expensive and complex SAN environments.

Hyperconverged infrastructure enables organizations to easily manage and scale their datacenter environments. This architectural option has been a boon for many customers because it's enabled far easier administration of the datacenter and has led to decreased costs and increased end-user and customer satisfaction.

Modern application architectures

If you've never heard the phrase bimodal IT, here's a quick rundown for you: IT has dueling priorities these days. First, organizations have a reasonable expectation that IT will continue to support what might be considered "legacy applications." In reality, such applications likely will continue to be mainstays of the business foundation for the foreseeable future. These hardy survivors include client/server enterprise resource planning (ERP) systems, collaboration systems, and local database applications.

These applications traditionally have required a conservative approach to maintenance. As mission-critical applications, they need a rock solid foundation, high availability mechanisms, and a light touch when doing updates, which must be painstakingly planned. The goal is to reduce risk to the business by ensuring that crucial applications are always on. The need to minimize risk to these applications is one of the reasons some IT departments have reputations of being stodgy and unyielding. In fact, the IT group is simply trying to keep the business running. Change makes that a difficult charge.

On the flip side of the equation, a new breed of applications is popping up. These innovative application types might exist in the cloud, locally, or even as apps. Where traditional applications require deliberate maintenance, the new apps require nimble, agile practices, which are often contrary to what has been considered best practice for application support.

Modern application architectures are driving enterprise IT needs. In the short term, a split has emerged in enterprise IT teams, depending on whether they cater to traditional IT applications or next-gen applications. Organizations must find ways to balance these conflicting goals.

The State of Public Cloud

In the early days of the public cloud, IT departments were quaking in their boots as they foresaw the potential to lose their jobs and their place in the organization. The era of the public cloud was upon us and the hype was real. Doomsayers predicted that IT pros would be out on the streets en masse offering their administration and programming skills to passersby. Businesses would thrive as they saved trillions of dollars in their budgets by eliminating all IT capital expenditures.

The post-IT future hasn't come to pass, and it won't. However, that doesn't mean organizations have abandoned the public cloud. On the contrary, they've massively increased their use of it as they've discovered applications and use cases where the public cloud makes sense.

But, the industry is far from the doomsday scenario that was envisioned early on.

The growth of all things cloud

Clouds come in all shapes and sizes, and the different options even have cute, little names. Figure 1-1 gives an overview of the various cloud types. It shows which entity — you or the cloud provider — handles each of the elements that comprise the infrastructure.

The industry is increasingly adopting the three primary kinds of public cloud: infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS). In fact, 451 Research indicates that the cloud computing "as-a-service" marketplace is likely to triple in size through 2019 (source: https://451research.com/report-short?entityId=87624&referrer=marketing).


Figure 1-1: Comparing public cloud service types.

Large IaaS providers now deliver platform capabilities, such as databases and message queues, that allow applications to be built quickly using packaged building blocks.
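
For example, a managed message queue can be consumed with a few lines of code instead of standing up and patching a message broker yourself. The following sketch uses Amazon SQS via boto3 purely as an illustration; it assumes boto3 is installed and AWS credentials are configured, and the queue name is hypothetical. Other providers offer equivalent building blocks.

    import boto3  # AWS SDK for Python

    sqs = boto3.client("sqs")

    # Create (or look up) a queue and exchange one message; no broker to install or patch.
    queue_url = sqs.create_queue(QueueName="demo-orders")["QueueUrl"]  # hypothetical queue name
    sqs.send_message(QueueUrl=queue_url, MessageBody="order 42 received")

    response = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=5)
    for message in response.get("Messages", []):
        print(message["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])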

Clouds are great for unpredictable or highly variable workloads because you pay only for what you use. But for more stable or predictable workloads, the cloud is not as economical. Renting is good for the short term or when you don't know what the future holds, but owning is more economical when you know you're going to stay in a place for a while.
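
A quick, hypothetical break-even calculation makes the point; every figure below is a placeholder assumption, not a quote from any provider.

    # Rent vs. own, in rough terms (all figures are illustrative assumptions).
    cloud_cost_per_month = 900   # assumed steady-state cost to rent equivalent capacity
    hardware_purchase = 25_000   # assumed up-front cost to buy and deploy the gear
    on_prem_run_rate = 200       # assumed monthly power, space, and support cost

    for months in (12, 36, 60):
        rent = cloud_cost_per_month * months
        own = hardware_purchase + on_prem_run_rate * months
        cheaper = "renting" if rent < own else "owning"
        print(f"After {months} months: rent ${rent:,} vs. own ${own:,} -> {cheaper} wins")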

In an interesting dynamic, cloud adoption appears to act like a slingshot. For a while, a business builds and deploys an application on a public cloud service. Then, when the application reaches a certain scale or becomes predictable, the business brings it back in-house.

Increasing viability of public cloud

Early on, even with the analyst hype about public cloud decimating IT departments and forcing CIOs out of their jobs, public cloud providers had to contend with a number of daunting challenges:

  • Bandwidth: There was — and in many cases, still is — concern around how certain areas of the world are served with Internet bandwidth. Many locales remain woefully underserved, making it difficult to deploy mission-critical services to an environment that relies on an Internet connection. Although this issue is being corrected, improvement is coming slowly. In addition, many places that have decent bandwidth still have only a single connection, which makes cloud somewhat unpalatable. That said, the situation today is far better than it was just ten years ago.
  • Loss of control: At the beginning, the public cloud was an island. You had to manage it with completely separate tools, and a wall stood between it and your local datacenter environment. Today, a plethora of tools exist to help organizations seamlessly manage both local datacenters — private clouds in some cases — and public cloud environments. Control is no longer an issue.
  • Skills: When any new technology hits the streets, building up adequate skills to support it takes time. Today, with years of experience under their belts, plenty of people with the necessary skills are available to maintain public cloud infrastructure and services.

Understanding security and trust in the cloud

Because security is so important, it gets its own section rather than being relegated to a bulleted list! With regard to security, the public cloud has made massive strides in the past decade.

Security in the cloud is orders of magnitude better than it was in the early days. In fact, many providers make available hardened environments so that they can properly secure sensitive workloads for their customers.

People's willingness to trust the cloud is evidenced by the massive growth of clouds of all types. Microsoft reports that Office 365 (software-as-a-service) continues to grow explosively, and Amazon is reporting record growth with Amazon Web Services.

People are finally realizing that the public cloud is not a threat. It's simply another application delivery option that CIOs have at their disposal. The industry is realizing that, with the right provider, even sensitive workloads can be supported.

The three-letter threat

I frequently do speaking engagements in the United States, the United Kingdom, and Canada. In the U.S., people's concerns about cloud security are quite different from those in my Canadian and U.K.-based audiences. In non-U.S. locales, data locality is a major concern. People there fear their data may end up being housed in the U.S. on U.S.-based servers, which could expose their business to spying by the U.S. intelligence community. With that in mind, many cloud providers have located datacenters all over the world. Even many SaaS-based services can be run from global locations housed outside the U.S. As businesses, banks, and governments continue to look for ways to embrace the public cloud, where their data lives is a critical decision.

Beyond Amazon — Embracing any cloud

Just as Kleenex is associated with sneezing and Google is associated with web searching, when IT pros think about the word cloud, they often immediately think Amazon. While Amazon, as the public cloud leader, is certainly formidable, it is far from the only option available for public cloud consumption.

All kinds of "as-a-service" cloud options that go far beyond Amazon are available to you. Enterprises must always have an exit strategy that enables them to switch providers quickly. If a provider goes out of business or increases pricing to unsustainable levels, you may need to move quickly. You should always have a way to support any cloud, any time.