(A piece of your disaster recovery and business continuity strategy)
The information below builds on a previous blog, The Wild World of Disaster Recovery in the Cloud, which explained the basic concepts of replication and how they apply to Disaster Recovery (DR). US Signal’s Product Development team used those concepts to create an image-based backup service called VM Based Backups (VBS).
This blog describes how the US Signal VBS product works and why it should be a part of your company’s disaster recovery and business continuity plans.
The most straightforward description of VBS is as an automated replication process that allows quick restoration of corrupted or lost Virtual Machines (VMs). Each day, VBS creates a snapshot (image) of all active VMs in the designated resource pool – the entire workload (OS, Apps, Data) of each VM is copied. The snapshot is transported to and stored in an offsite remote storage facility, and one month of these daily snapshots is retained there. To recover a VM, place a phone call to the US Signal Network Operations Center (NOC), whose technicians work with you to identify the VM and restore it to the chosen date.
- Customers purchase resource pools that are housed in US Signal’s Grand Rapids, MI or Southfield, MI data centers. Within the resource pools are the VMs which hold the customer’s workloads.
- A third US Signal data center, located in Indianapolis, IN, houses the backup files on SAN storage equipment.
- When a VM is created in either Grand Rapids or Southfield, a snapshot (copy) is automatically generated that night and stored in the Indianapolis data center.
- An image of each day’s changes to the VMs is stored in the Indianapolis data center with its corresponding full VM snapshot.
- If a VM becomes corrupted, the images in Indianapolis can be used to restore the affected VMs.
To initiate restoral, contact the NOC via email or telephone. You work directly with a technician to identify both the VMs to be restored and their restoral point, which you choose by date – up to one month of restore points is available. The US Signal Service Level Agreement (SLA) for VBS is to commence VM restoration within four hours of your request. The time it takes to complete the restoral process is best effort and varies based on the size of the VM. A typical restore time for a 500 GB drive is approximately one hour.
The scheduled daily backups of the VMs protect against significant data loss and mitigate downtime during recovery.
Once you subscribe, every VM receives a full backup once a month, and changes are backed up daily without any manual intervention.
Agent-less replication does not tax virtual resources, and the entire VM is restored, including the OS, applications, and data.
Day-to-day business operations are unimpeded as the backup operations do not require operating systems and applications to be powered down.
The mechanics and recovery functions of VBS are managed by the US Signal NOC. In-depth technical knowledge is not needed to implement the backup capabilities or to initiate a restore.
When deciding on a backup and recovery product, ask yourself this question: “How much data can I afford to lose?”
The usual answer is little to none.
US Signal VBS provides a complete, hands-off, and affordable solution to backup and restore your organization’s vital information rapidly and with minimal disruption to your business operations. It is the armor that protects your servers.
For details on ordering contact US Signal at 866-2-SIGNAL or www.ussignalcom.com.
 These data centers are exclusively Cisco equipped and VMware vCloud Powered.
BYOD and Security Awareness
(The Human Element)
The Trend: Portable electronic devices are now a part of virtually every business enterprise. Their growth is encapsulated in the acronym BYOD (Bring Your Own Device), an unplanned phenomenon in which employees decided to meet their own IT needs. That is, they are choosing the type of devices (smartphones, tablets, PCs) on which to do company business. And, more often than not, these devices are not corporate owned – they are purchased by the employee for business and personal use.
It is critical to note that this dual purpose – which evolved spontaneously and is not under the corporate thumb – presents a host of unique security issues.
The growth of BYOD has been explosive, and it only exacerbates a company’s security concerns. By 2014, it is predicted that 90% of companies will allow BYOD and, astoundingly, that each employee will average 3.3 devices connected to the corporate network! Smartphones, by far, will be the most prevalent mobile device employees use for access, with a staggering 1.2 billion expected to enter the market in the next five years. Close behind are tablets, which will experience a year-over-year increase of 50%.
Logically, the rapid proliferation of these devices will present IT professionals with a host of new security issues. Think about the number of times you casually access work information from your mobile devices. Where were you? Was the connection at the coffee shop secure (i.e. encrypted)? There could be dire consequences. For example, it is possible to unintentionally unleash malicious code while checking work email from an unpatched iPad. Or, just by misplacing your phone, private company data could be compromised. And, the security risks grow with each new device granted access to the network.
The Response: Do not, however, think that IT professionals have been asleep at the wheel. A host of innovative BYOD practices has been put in place – despite the fact that, unlike in a traditional IT environment, IT does not have full control over the hardware on the network.
Examples of these best practices include:
- Strong encryption protocols on personal devices on which business data is stored
- Routine updates of hardware and apps to mitigate the risk of a known vulnerability being exploited
- Registering devices so that unauthorized ones are not allowed a network connection
- Implementation of authentication protocols like Secure Sockets Layer (SSL) certificates
- Use of mobile device management systems that provide the ability to remotely wipe corporate data from a personal device
- Support of app blacklisting to stop disruptive apps from being downloaded onto the device
- Development of access control policies to define who can access what data based on the employee’s role in the organization (a simple sketch follows this list)
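As a simple illustration of that last practice, here is a minimal, hypothetical sketch of role-based access control. The roles and data categories are invented for the example and do not come from any particular product:

```python
# Minimal, hypothetical sketch of role-based access control for BYOD users.
# The roles and data categories are invented examples, not any product's policy model.

ROLE_PERMISSIONS = {
    "sales":       {"crm_contacts", "price_lists"},
    "engineering": {"source_code", "build_servers"},
    "hr":          {"employee_records"},
}

def can_access(role: str, data_category: str) -> bool:
    """Return True if the given role is allowed to reach the data category."""
    return data_category in ROLE_PERMISSIONS.get(role, set())

# Example: a salesperson's tablet may read CRM contacts but not source code.
print(can_access("sales", "crm_contacts"))   # True
print(can_access("sales", "source_code"))    # False
```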
These and other policies lay the groundwork to protect data in a BYOD world. However, no matter how stringent the rules, they often are not enough to thwart security’s most menacing happenstance – the human element.
User Awareness: In a survey of 768 IT professionals, 72% said careless employees are a greater security threat than hackers. The biggest threats to their networks include lost or stolen devices, loss of company data, employee misuse of devices, and mobile malware attacks. Studies have shown there are simple precautions that employees are simply not taking – for example, fewer than 6 in 10 will reset their passwords after a device is lost.
Security awareness training for BYOD is in its nascent stages. Most security experts acknowledge that user awareness is the most important non-hardware, non-software precaution an enterprise can institute. Some consensus has emerged on the best way to impact employee behavior.
First, the training should be instructor-led. Should your company balk at this approach because it is more expensive than webinars or self-paced courses, ask the following question: How valuable is your data? Second, conduct regular security refresher sessions to reinforce safety protocols and keep employees up to date on the latest threats. Remember, companies are playing defense. Attackers devote full time to developing tools to breach your security efforts. Their malicious activities make it doubly important that safe security behaviors become second nature to workers. Finally, hire a security consultant to test your security protocols. Having someone try to break into your system will provide invaluable experience without the data loss of a real hack.
Investing in the human element - user awareness - augments your physical and electronic BYOD security structures.
To be aware is to be secure.
 Forbes, Mobile Business Statistics for 2012, May 2012
 Desktop Administrator’s Guide to BYOD, 2013
 The Impact of Mobile Devices on Information Security: A Survey of IT Professionals, Dimensional Research, 2012
 BYOD, New Opportunities, New Challenges, Gartner Group, 2012
Today we’re sharing a success story that we hope will give you some insight into how our team works together to discover customer business needs and engineer a solution to them. We continually look for patterns in customer behavior that affect their perceptions of cloud.
In this success story, seven US Signal personnel from our Michigan sales team worked together across many months. The lead salesperson managed the process from start to finish and engaged additional personnel based on the customer’s business needs: technical information and demonstrations were provided by engineers, expert cloud advice was provided by the cloud overlay team, and US Signal upper management was brought in to meet the owner of the company.
A team approach is typical of US Signal sales. We have a deep bench on the cloud and network side of the house. These experts do not disappear after the sale is complete and the service is installed – they are available to the customer anytime their skill set is needed.
The customer is a software company which specializes in the design and development of point-of-sale software that can be accessed both as a secure paid interface and as an unsupported mobile app. Their signature SaaS product was developed for the salon and spa industries. They also developed management software for the pet grooming and kenneling, medical spa, and the tattoo and piercing markets. In 2012, the business was designated as one of the top 50 companies to watch in Michigan. And, Inc. magazine added them to their top 5000 list of the fastest growing companies in America.
The company was already a US Signal customer with a 50 Mbps Internet circuit. Through persistent and courteous questioning, the lead salesperson and his engineer discovered that the customer’s data center was running approximately 20 servers and that part of the environment was virtualized with VMware.
The US Signal team engaged with an IT integrator, who was hired as a consultant by the customer, in discussions about cloud services. Through these efforts they pinpointed several concerns:
- In their data center they needed backup for business continuity in case of individual server failure, a prolonged outage, or a large-scale disaster
- Because of their success, they were rapidly gaining clients and were outgrowing their current environment
- They were concerned about a pending requirement to upgrade the existing server hardware.
- A new SaaS program and updates to existing products were about to be rolled out and additional reliable infrastructure was needed
The lead US Signal salesperson kept in contact with the IT integrator, regularly sending him collateral describing US Signal’s suite of cloud products. A meeting was then set between the founder and president of the company and US Signal’s sales team.
The president of the company was very knowledgeable about the IT aspects of his business and revealed further information that proved relevant to utilizing US Signal cloud services:
- A larger production environment was needed
- The security of having redundant resources outside of their datacenter was desired
- The customer was sold on both cloud and VMware
The last meeting was a demonstration of US Signal’s vCloud environment conducted by the team’s Senior Engineer.
In making their decision, the customer did their due diligence, compared US Signal’s cloud services to Amazon and Rackspace, and saw that we were price competitive. The customer also reported that the engineer delivered an outstanding presentation. For these reasons, coupled with US Signal’s historically consistent and excellent customer service on their existing circuit, the customer decided to sign a contract for a Dedicated Resource Pool consisting of large compute, RAM, and storage capacities.
Businesses have been bombarded with cloud information over the past several years and are now responding. Personnel outside of IT are demonstrating a growing comfort level with moving their data and applications to the cloud. This sale and others point to a pattern: it is less and less about selling the “concept of cloud” and more about providing a solution.
That is exactly what the US Signal sales team did. They meticulously unearthed the customer’s pain points, educated them on US Signal Cloud Services, and then proposed a practical and affordable solution – which the customer appreciated and accepted.
Defining Big Data and Examples
Big Data, at least in the world of IT number crunchers, has become something of a cause célèbre. Big Data is an element of the Third Platform in IT (visit our blog post about the Third Platform here). IT types debate what constitutes a Big Data resource as well as which analytical approaches are best. This blog will not settle their debate, but it will help you understand what Big Data is and why it is important. First, we will define a few Big Data terms and then describe Big Data’s predictive value with examples.
Defining Big Data
Big Data is a general term used to describe the voluminous amount of unstructured data a company creates. Big Data Analytics is the process of mining these enormous messy stockpiles of seemingly innocuous entries and queries to discover discernible and, ideally, repeatable patterns. The result of detecting unknown correlations and harnessing this information in novel ways is to produce useful insights or goods and services of significant value.
Today, the terms Big Data and Big Data Analytics are used synonymously.
To give you an idea of the immense size of information deemed Big Data, most of it is measured in exabytes. One exabyte is one billion billion, or a quintillion, bytes – enough storage to contain roughly 50,000 years of DVD-quality video.
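That figure can be sanity-checked with some quick, back-of-the-envelope arithmetic. The assumed size of an hour of DVD-quality video (roughly 2 GB) is an approximation:

```python
# Back-of-the-envelope check of the "50,000 years of DVD-quality video" figure.
# The assumed bitrate of roughly 2 GB per hour of DVD-quality video is an approximation.

EXABYTE = 10**18                     # one exabyte in bytes (a billion billion)
BYTES_PER_HOUR = 2 * 10**9           # assumed size of one hour of DVD-quality video

hours_of_video = EXABYTE / BYTES_PER_HOUR
years_of_video = hours_of_video / (24 * 365)
print(f"{years_of_video:,.0f} years")   # roughly 57,000 years, the same order as the 50,000 cited
```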
There is controversy regarding the above definition of Big Data because some analysts say there are types of structured data that merit inclusion. Regardless of the exact content and size, a practical rule of thumb in its identification has emerged: Big Data is seemingly chaotic information that is deemed too costly in time and dollars to load into a relational database.
Big Data Examples
So far, the most famous example of the power of Big Data is Google’s innovative analysis of search queries and their relationship to potentially fatal outbreaks of the H1N1 flu virus. H1N1 is a mutant hybrid that combines elements of the bird flu and swine flu viruses, and its discovery and rapid spread in 2009 had health officials panicked that a pandemic was sweeping the US.
Health officials saw that the same conditions existed as with the 1918 Spanish flu – millions of people were infected, and there was no vaccine against this new strain readily available in all parts of the country. All the jittery officials at the Centers for Disease Control and Prevention (CDC) could do was track the number of new flu cases (as reported by primary care physicians) and try to arrest its march across the country.
The doctors dutifully reported new H1N1 flu patients, but there was a latency problem. Most people feel sick for days before going to the doctor. And, the CDC tabulated these flu numbers only once a week, causing an average two-week lag between sickness onset and reporting – an eternity when fighting a disease that spreads easily with a cough, a sneeze, or even a touch. Unable to pinpoint the spread of H1N1 in near real time, public health officials were groping in the dark for an effective way to head off a potential catastrophe.
Coincidentally, a few weeks before the H1N1 outbreak was discovered, Google quietly published an innovative research paper in the scientific journal Nature. The paper reported an effort to identify areas of the United States infected by the winter flu virus based on what people searched for on the Internet. Google’s computer scientists designed sophisticated software that identified a combination of 45 search terms people had used to gather information about the winter flu (e.g. “medicine for cough and fever”). Using complex and creative mathematical models, the researchers compared these terms against queries collected between 2003 and 2008.
And, here is where the term Big Data earns its brand. Google receives more than three billion search queries each day. For their analysis of winter flu outbreaks, the researchers ran 450 million different mathematical models against the 5.5 quadrillion queries received between 2003 and 2008 to ferret out flu-related inputs. Their system looked for correlations between the frequency of certain search queries and the spread of flu over time and space. Using these results, they created a graph that predicted where flu outbreaks would occur between 2003 and 2008.
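The core idea can be illustrated with a toy version of the analysis: correlate the weekly frequency of one flu-related search term with reported flu cases. The numbers below are invented for illustration; Google’s actual models were vastly larger and more sophisticated:

```python
# Toy illustration of the idea: correlate a flu-related search term's weekly frequency
# with reported flu cases. All numbers are invented; the real models were vastly larger.
from statistics import correlation   # requires Python 3.10+

weekly_searches  = [120, 180, 260, 400, 650, 900, 700, 450]   # e.g. "medicine for cough and fever"
weekly_flu_cases = [10, 15, 22, 35, 60, 85, 66, 40]           # hypothetical reported cases

r = correlation(weekly_searches, weekly_flu_cases)
print(f"correlation between search volume and cases: {r:.2f}")
# A strong positive correlation suggests the search term tracks the outbreak and can
# act as a near-real-time proxy for official statistics that arrive weeks later.
```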
They then compared these results - which were predictions of flu outbreaks - against actual cases reported to the CDC during the same time period. And, they hit pay dirt: Their predictive figures correlated almost perfectly with official nationwide figures across the United States!
For the H1N1 outbreak, health officials were now armed with valuable information: where and when large outbreaks were likely to occur. Using this knowledge, they had the vaccine ready for the places that were most affected. Google’s predictive methodology circumvented the natural lags of compiling government statistics and, more than likely, saved lives.
Another example of the life-saving implications of Big Data involves our most populous city, New York. Not only are there more people in New York than in any other city in the country, there are more manholes – 51,000 to be precise. These cast iron Frisbees weigh up to 300 pounds and, on occasion, blast out of the street as high as three stories.
The city’s commercial electric grid, first lit by Thomas Edison in 1882, contains 21,000 miles of cable – almost enough to circle the globe. Because a manhole launching itself like a Titan missile is very dangerous, the Con Ed Company hired researchers at Columbia University to figure out a way to predict manhole explosions.
The team had data on cable repairs and installation dating back to the 1880s, plus 10 years of trouble ticket reports amounting to 61,000 typed documents. These data also contained much irrelevant information, such as parking information for Con Ed vehicles or notes that a customer did not speak English. The researchers developed an algorithm to create order among the confusion.
Their analysis of the trouble reports showed that the majority of the explosions came from manholes with thick, deteriorating cables. Thicker cables carried larger amounts of insulation, leaving more decayed material vulnerable to the inevitable build-up of methane and other gases present in an underground environment. From this information, the researchers developed the “hot spot theory”: manholes with larger cables were more likely to explode.
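A highly simplified sketch of how such a risk ranking might work appears below. The records, features, and weights are invented purely for illustration; the Columbia team’s actual model was far more elaborate:

```python
# Highly simplified sketch of ranking manholes by explosion risk.
# The records, features, and weights are invented for illustration; the Columbia
# team's actual model was a far more elaborate machine-learning system.

manholes = [
    {"id": "MH-001", "cable_diameter_in": 3.5, "oldest_cable_year": 1912, "past_tickets": 7},
    {"id": "MH-002", "cable_diameter_in": 1.5, "oldest_cable_year": 1978, "past_tickets": 1},
    {"id": "MH-003", "cable_diameter_in": 4.0, "oldest_cable_year": 1925, "past_tickets": 4},
]

def risk_score(m, current_year=2013):
    # Thicker cables, older installations, and a history of trouble tickets raise the score.
    age = current_year - m["oldest_cable_year"]
    return 0.5 * m["cable_diameter_in"] + 0.02 * age + 0.3 * m["past_tickets"]

for m in sorted(manholes, key=risk_score, reverse=True):
    print(m["id"], round(risk_score(m), 2))   # inspect the riskiest manholes first
```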
Armed with this information, city works officials modified manholes with thick cables and virtually eliminated explosions.
In a more benign example, Nate Silver, a former baseball statistician, used Big Data to predict the most recent Presidential election with stunning accuracy. Mr. Silver consistently rejected the conventional wisdom that the race was tied. He developed a statistical model that aggregated state- and district-level polling data to predict which states each candidate would win. On election night he was vindicated: his predictions were 100% correct.
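A crude flavor of that kind of poll aggregation is sketched below. The states, numbers, and weights are made up, and Silver’s real model also adjusted for pollster quality, recency, demographics, and more:

```python
# Crude sketch of aggregating state-level polls into a predicted winner per state.
# The states, poll numbers, and weights are made up; a real model also adjusts for
# pollster quality, recency, demographics, and many other factors.

state_polls = {
    "State X": [{"A": 49, "B": 47, "weight": 1.0}, {"A": 48, "B": 48, "weight": 0.8}],
    "State Y": [{"A": 47, "B": 49, "weight": 1.0}, {"A": 46, "B": 50, "weight": 0.6}],
}

def predicted_winner(polls):
    # Weighted average of each candidate's share across the state's polls.
    total_weight = sum(p["weight"] for p in polls)
    avg_a = sum(p["A"] * p["weight"] for p in polls) / total_weight
    avg_b = sum(p["B"] * p["weight"] for p in polls) / total_weight
    return "Candidate A" if avg_a > avg_b else "Candidate B"

for state, polls in state_polls.items():
    print(state, "->", predicted_winner(polls))
```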
Think of everything you put into the ether of the Internet. Each time you perform a search, create a document, or even press a key, you could be putting out a piece of a puzzle that is waiting to be solved.
 Tech Target, April 2005
 World Health Organization, 2009
 Center for Disease Control and Prevention (CDC), 2010
In the formative years of information technology, innovations migrated from business uses down the chain to consumers – people had to adapt to the technology. Now, with the concept of Consumerization, the pattern has been reversed: technology is adapting to the people who use it. Think of the PC perched on your desk; it is – well – personal. The content that you need for work or entertainment is stored on its hard disk, or maybe an external flash drive. Now think of your smartphone (often called the Swiss Army Knife of consumer electronics) or your tablet – they fit in your hand, you can take them anywhere, and you can access any variety of content from the cloud you subscribe to. The technology has become – well – intimate.
Consumerization is defined as the trend for new technology innovations to begin first in the consumer market and then spread to business environments. Historically, business organizations determined and controlled the kind and type of information technology used within their firms. Increasingly, employees have become more self-sufficient and creative in meeting their IT needs. BYOD, or Bring Your Own Device, strategies – where individual employees can choose their own type of device to do their work – are beginning to flourish in the business world. Now, IT departments have to adjust and support the type of device their employees want to use, not what IT wants to issue – this is Consumerization.
Apple was, arguably, the first to recognize and capitalize on this phenomenon – their iPhone and iPad were designed for individual consumers, but their presence in the workplace has become ubiquitous. Lithium-ion polymer batteries, which can be molded almost like clay, have enabled ultra-slim and attractive casings. The variety of styles and colors that Apple has used to showcase its products has earned them the label “The Gucci of Gadgets.”
A major driver of mobile device usage – thus Consumerization – has been the proliferation of cloud computing. As mentioned, both personal and business data can reside in the cloud - often on large server farms run by giant technology firms such as Amazon and Google, where staggering amounts of data are stored for retrieval from almost anywhere in the world. Combine this with cloud-based social networks like Facebook (over 1 billion users), Twitter (over 500 million) and a host of smaller firms and the use of portable, mobile devices increases exponentially. In fact, the Gartner group predicts that 1 billion smartphones will be sold in 2015, up from 468 million in 2012.
Consumerization is an unstoppable force. It has added the element of democracy to IT. The best IT experiences are no longer in the office – they are out in the consumer market place. And, this is driving consumer spending and shaping the IT department of the future.
A report predicts that small and medium-sized businesses’ spending on security technology will exceed $5.6 billion by 2015 – a growth rate twice as fast as the predicted growth in overall IT spending by these businesses. A recent study revealed that the number of cyber security breaches at businesses with fewer than 250 employees doubled in 2012, accounting for 36% of all attacks.
As the above data show, small to medium-sized businesses are particularly vulnerable to cyber-attacks originating outside the company. However, a new type of security threat has developed with the revolutionary use of social media for commerce.
Social networks are developing means to monetize their platforms. They allow consumers and businesses to buy and sell items and services as well as offer gifts (i.e. incentives) to engage customers. Cyber criminals view these transactions as new, fertile ground for mayhem. As people become more comfortable with commerce through social media, the more predisposed they are to being compromised.
It is especially important to acknowledge this threat because businesses, just like consumers, are placing a high level of trust in social media: 63% are now using these social networking outlets for advertising and promotion.
In 2013, malware attacks that steal payment credentials from those doing commerce over social networks will increase. Also, fake social network clients, using phony gift notifications and email requests, can gather home addresses and other useful personal information. Providing non-financial information may seem innocuous, but cybercriminals sell and trade this data with one another. They then assemble a profile of the person that can be used to gain access to other accounts.
The proliferation of commerce over social networks creates a unique danger for the employer. Specifically, it is not customer/business operation data that is at risk, but your employees’ personal information. This breach of personal security occurs at the work site.
Amazingly, 87% of small to medium-sized businesses do not have formal, written Internet security policies. Plus, 70% of these businesses lack policies for employees’ use of social media, despite the fact that social networks are increasingly favored by cybercriminals for phishing attacks.
In 2013, small and medium size businesses need to generate and implement security policies for employees’ Internet and social network use. If they do not, it will inevitably come back to bite them.
 International Data Corporation, 2012
 BIA/Kelsey/Ipsos, 2012
 Enterprise IT Technology News, 2012
There are a staggering number of variables connected with Disaster Recovery (DR) as it applies to the cloud. To understand, and then choose, the best approach for your business, it is important to think of DR as a concept or idea, not an entity unto itself. And, the fundamental driver that turns this notion into a method of recovery is replication.
As it applies to the IT world, replication, at its most elementary level, is copying data from one site to another. Think of when you copied files from your home PC onto a flash drive, went to work, plugged the drive into your work PC and uploaded the files. This is replication at its most primitive and manual level. Bottom line, there are two identical files at two different locations.
The many different types of replication are the variables one must wade through when looking at DR. Their names vary as does their degree of complexity: Image-based, File-based, Application-based, Machine-based, Hypervisor, and SAN/LUN, to name a few. To keep this discussion manageable and relevant to US Signal’s backup and recovery services, we will focus on Image and File-based backups.
Image-Based backup is a copy of an entire Virtual Machine (VM) in one environment, which is then replicated in a second virtual environment. Specifically, a snapshot – a copy – of the entire workload (i.e. BIOS, OS, Apps, Data) in the primary location is saved as a single file called an image. That image is then housed at the secondary location.
Compared to other levels of replication, the advantage of Image-Based backups is that all the information, in each layer of the workload, is collected in a single snapshot. Also, a copy of the full image only needs to be created one time. Subsequent backups consist only of the changes to the various layers, significantly speeding up the replication process.
Significantly, Image-Based backups are not agent-based, reducing the time and complexity of rebuilding a VM. Plus, this type of backup can be restored remotely across WANs or LANs, and the backup images can be saved to a variety of different media.
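To make the full-plus-incremental idea concrete, here is a minimal sketch of the bookkeeping involved. It illustrates the general technique only, not US Signal’s or any vendor’s actual implementation:

```python
# Minimal illustration of the full-image-plus-daily-changes bookkeeping behind
# image-based backups. This sketches the general technique, not US Signal's or any
# vendor's actual implementation.
import hashlib

def block_hashes(image: bytes, block_size: int = 4096) -> list:
    """Split a VM image into fixed-size blocks and hash each one."""
    return [hashlib.sha256(image[i:i + block_size]).hexdigest()
            for i in range(0, len(image), block_size)]

def changed_blocks(old_hashes: list, new_hashes: list) -> list:
    """Return the indexes of blocks that differ since the last backup."""
    return [i for i, (old, new) in enumerate(zip(old_hashes, new_hashes)) if old != new]

# Day 1: the full image is copied once and its block hashes are recorded.
full_image = b"A" * 4096 * 4
baseline = block_hashes(full_image)

# Day 2: only the blocks that changed need to be shipped offsite.
todays_image = b"A" * 4096 * 3 + b"B" * 4096
print(changed_blocks(baseline, block_hashes(todays_image)))   # -> [3]
```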
File-Based backups are controlled by an agent that resides on each VM in the primary location and forwards data changes to a copy at the secondary location. Only one part of the workload – usually the data layer – is copied. This allows specific files to be targeted for restoral instead of rebuilding the entire layer. However, because the agent must be installed on every VM, this method creates an additional level of complexity: the software needs to be maintained and administered.
Why is this information important?
This background will allow you to understand the advantages of using US Signal’s VM Based Backup service as part of a comprehensive disaster recovery and business continuity plan.
Briefly, the VM Based Backup service is an automated, image-based replication technology that allows the recovery of critical servers in a speedy and cost-effective fashion. This service is managed by US Signal personnel to ensure accurate replication and speedy recovery.
The VM Backup service will be explained in detail in the next blog.
 In the context of disaster recovery, the terms “replication” and “back-up” are used synonymously.
 Workload is also referred to as the “server stack.”
Simply stated, the Third Platform is the next phase of the IT revolution. The First Platform is the mainframe computer. The Second Platform is the Personal Computer (PC), which dominated the IT landscape from 1985 to 2005. The Third Platform is comprised of mobile computing, social networking, cloud services, and big data analytics.
57% of overall IT spending in 2013 will be on smartphones, tablets, and e-readers – that translates to $431 billion. Industry analysts predict that 2015 will be the first year that American consumers access the Internet with mobile devices rather than PCs. This explosion in mobile computing will rocket the growth of social media to unprecedented levels, especially internationally.
In the Middle East and Africa, social network access will increase by 23.3%, followed by the Asia-Pacific region (including China, India, and Indonesia) at 21.1%, then Latin America at 12.6%. In comparison, the forecast for social media growth in North America is an anemic 4.1%.
If current trends hold, niche players who offer deeper and more focused functionalities will grow exponentially worldwide. Instagram saw its share of social media traffic grow by a mind-blowing 17,319% in 2012. Sina Weibo, China’s answer to Twitter, recently surpassed 400 million users – doubling its base in one year.
Cloud computing will continue to charge ahead. Because of the vast number of technologies and functions becoming virtualized, spending on cloud is expected to surpass $207 billion worldwide.
Big Data is a general term used to describe the large amount of unstructured and semi-structured data which companies create in the course of their day-to-day business practices. The immense amount of information generated makes it almost impossible for an organization to comb through it, make sense of it, and turn it into actionable policy. In fact, 93% of North American executives believe their companies are losing revenue by not leveraging available data.
It takes large infrastructure capacity to examine this treasure trove and uncover hidden patterns, unknown correlations, and other useful information that can create a competitive edge. Cloud technologies offer the perfect platform to develop the applications that can turn seemingly chaotic virtual transactions and searches into profitable business intelligence.
Take the plunge and invest your time and effort to learn and benefit from this next phase of IT development. It will add to your knowledge base and your overall company value.
(The Energy Requirements of all those Ones and Zeros)
Today, data centers demand more power than ever. Servers are powered up 24/7/365; they hum, buzz, and heat up with electrical current. To calculate energy consumption, research on data center power usage has yielded the 8,760 rule – 8,760 being the number of hours in one year. Power costs approximately 11.5 cents per kilowatt-hour, which equates to about $1,000 per kilowatt-year for the IT load (i.e. the servers mounted in the racks).
Power Usage Effectiveness (PUE) is a metric used to determine the energy efficiency of a data center. PUE is determined by dividing the total amount of power entering a data center by the power used to run the computer infrastructure within it. PUE is expressed as a ratio, with overall efficiency improving as the quotient decreases toward 1. That is, the lower the number, the more energy efficient the data center.
Most data centers have an average PUE of 1.8, which works out to about $1,800 a year in energy costs per kilowatt of IT load. The average life of a server is three years. Therefore, the cost of power for a server over its lifetime is approximately $5,400 – multiply that by the number of servers in your data center and you get its lifecycle powering costs.
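The arithmetic behind those figures is straightforward; here is a quick worked version using the approximate rate and PUE quoted above:

```python
# Worked version of the power-cost arithmetic quoted above.

HOURS_PER_YEAR = 8760          # the "8,760 rule"
RATE_PER_KWH = 0.115           # approximately 11.5 cents per kilowatt-hour
PUE = 1.8                      # typical facility overhead multiplier
SERVER_LIFE_YEARS = 3

cost_per_kw_year   = RATE_PER_KWH * HOURS_PER_YEAR           # about $1,007 per kW of IT load
cost_with_overhead = cost_per_kw_year * PUE                   # about $1,813 per kW-year at PUE 1.8
lifetime_cost      = cost_with_overhead * SERVER_LIFE_YEARS   # about $5,440 over a server's life

print(f"${cost_per_kw_year:,.0f} per kW-year, ${cost_with_overhead:,.0f} with overhead, "
      f"${lifetime_cost:,.0f} over {SERVER_LIFE_YEARS} years")
```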
To grasp the enormity of this type of energy consumption, take a business with major league international data centers, like Google. Every time a person runs a search on Google, watches a YouTube video or sends a message through Gmail another jolt powers through their servers. It is estimated that Google’s data centers continually pull almost 260 million watts of electricity. This equates to 260,000 kilowatts or approximately a quarter of the output of a nuclear power plant. That is enough electricity to light up 200,000 homes every second of every day.
Google’s energy consumption is, of course, on the high end. To have an easy frame of reference, let us stick with the housing equivalent depiction of consumption. The average data center – which spans 50,000 square feet – consumes enough electricity to power 5,000 houses. Although that is 40 times less than what Google devours it is still fairly significant. And, remember these figures reflect just the powering costs; on top of these expenditures are cooling costs (which will be addressed in a future blog).
Data center engineers and other specialists have developed (and are still at it) a variety of methods to tame excessive power consumption.
Server Design: More and more data centers are populated with “energy-aware” servers – servers able to mitigate their electricity use when not processing. Older equipment, when idle, still consumes 60% to 70% of the power it would need to process an active workload. The newer server designs now on the market reduce idle power consumption to as low as 25% of the energy needed to run a workload.
Virtualization: A key benefit of virtualization is the ability to contain and consolidate the number of servers in a data center. Ten server workloads running on a single physical server is typical, and some data centers run as many as 30-40 workloads on one server. By consolidating servers into a virtualized system, the same systems can be operated in one-fifth of the space needed for individual servers, and energy costs can be reduced by 50% to 80%.
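Using the same per-kilowatt cost from the section above, a rough sketch of the consolidation savings might look like this; the server counts and per-server loads are assumed purely for illustration:

```python
# Rough sketch of the energy savings from consolidation, using illustrative assumptions:
# ten lightly used physical servers of 0.5 kW each collapsed onto one busier host.

COST_PER_KW_YEAR = 1800     # approximate annual cost per kW of IT load at PUE 1.8 (see above)
SERVER_LOAD_KW = 0.5        # assumed average draw per physical server

before = 10 * SERVER_LOAD_KW * COST_PER_KW_YEAR      # ten separate physical servers
after  = SERVER_LOAD_KW * COST_PER_KW_YEAR * 2.0     # one host, assumed to draw twice as much

savings = 1 - after / before
print(f"annual cost before: ${before:,.0f}, after: ${after:,.0f}, savings: {savings:.0%}")
# With these assumptions the savings come out at 80%, the top of the range cited above;
# the real figure depends heavily on the workloads being consolidated.
```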
Power Redistribution: Utility power enters a data center as Alternating Current (AC) at high voltages. Through a conversion process the voltage of the AC is stepped down before being distributed to the server racks. The conversion process causes energy loss. Some of the larger data centers are using Direct Current (DC) delivery of electricity which eliminates the conversion process. Today, this approach is considered experimental but it may become more commonplace over the next decade.
Cogeneration: Cogeneration is the concept of data centers creating their own electricity on-site to replace or supplement the power provided by the local utility. The electricity generation approaches include solar, wind, or hydrogen fuel cells. Already, some data centers are contracting with local wind or solar farms to secure an alternate energy source – this is the first step to actually building cogeneration facilities on-site.
At some point, preferred power reduction methods will be in play across many data centers. It is a worthwhile endeavor, as it will reduce costs, create jobs, and leave a greener footprint than what currently exists.
 Belden, Data Center Design Guidelines, 2007
 Green Grid, White Paper, 2008
 Uptime Institute, 2011
 New York Times, September 2012
 451 Group, Case Studies in Highly Energy-Efficient Data Centers, October 2011
 White Paper (How VMware Virtualization Right-sizes IT Infrastructure to Reduce Power Consumption, 2011)
 Modern Infrastructure, January 2013
It was predicted by many cyber-pundits, CIOs, and other business leaders that 2012 would be the year of the hybrid cloud. A hybrid cloud is defined by the US National Institute for Standards and Technology (NIST) as:
A composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability.
In practice, hybrid clouds have usually been just the combination of two clouds: public and private. It is true that, as cloud adoption matured, hybrids became more ubiquitous. As a result, the need for policy-driven, governed access became more important. Policy development and management became more complex because data and applications were housed in different clouds. So along with deciding which application went where, policies were needed to institute security and data integrity for each environment – then those rules needed to be implemented and enforced across disparate organizations within an enterprise. This is a daunting but doable task.
In 2013, as cloud providers become more specialized, some organizations have implemented “hyper-hybrid” clouds. The most straight-forward definition of the hyper-hybrid cloud is:
“multiple cloud offerings handling critical pieces of business operations and sourced from multiple public and private providers.”
Further complicating hyper-hybrids is that these offerings need to be connected back to the business premises. The organization is now managing its workflow across different cloud environments (public and private) with point-to-point links back to legacy systems. What develops is that the underlying, stable business processes an organization has developed “become so riddled with multiple cloud players that the organization itself must integrate and orchestrate.” This situation diminishes a central value proposition of cloud: ease of use.
To cope with the myriad of complications that hyper-hybrid clouds bring to the table, it is helpful to adopt a “services thinking” mentality. That is, “defining operating models, business processes and technology components as services within and beyond the enterprise. And it demands that companies develop a disciplined set of control points for an emerging services grid-control points with policies in place that flow up and down the stack.”
Essentially, there need to be separate, governed cloud management policies for each environment, and these policies then need to be integrated at the enterprise (i.e. on-premises) level.
This complex “plumbing,” which needs to be integrated and managed, has spawned a new business opportunity for the entrepreneurial: Cloud Services Brokers (CSBs). These CSBs deliver a bundled suite of business rules and practices for multiple cloud environments that shields the enterprise from the complicated “‘plumbing’ of integration, orchestration, and rules management.” Think of it as the back office for an organization’s policy management.
It is unrealistic to think legacy systems are going to disappear anytime soon. And, the migration to cloud is only going to become more prevalent and the offerings more diverse. CSBs look to be a growth industry, as businesses will be living with IT assets in both places for a long time.
 Wall Street Journal, July 2012