Today, “The Cloud” is a hot topic, competing with Server, Desktop and Storage Virtualization in the press, in advertising and on the Web. But the Cloud has a wide range of meanings, depending on how the term is used and who is doing the pitching. Most agree that the Cloud, or Cloud Computing, refers to using IT resources made available by third parties such as Cloud providers, managed services providers (MSPs), outsourcers, application software developers and even application service providers (remember the big ASP push of the late 1990s?). This includes web-delivered and hosted applications like Salesforce.com and Microsoft Office Web Applications, delivered over the web as an alternative to licensing, installing and maintaining the software in your own data center.

Most people think of outsourced servers and storage when they think “Cloud”, such as the services offered by Amazon Web Services and Microsoft Windows Azure. But the Cloud is much more than just IT resources. Cloud resources can provide remote facilities, staffing and infrastructure for disaster recovery planning if you don’t have your own. They can make an IT organization more agile and flexible, since it can add or remove resources and staff as needed and typically pay only for what it uses, helping maximize budgets. The Cloud offers businesses and non-profits a way to convert capital expenditures (CAPEX) to operational expenditures (OPEX), and it helps IT manage budgets better overall because expenses are known in advance.

But there are also “Private Clouds” that companies can set up themselves as an IT/data center resource, as well as clouds offered by MSPs and other service providers for outsourcing, IT resource expansion, and business continuity and disaster recovery purposes. For years, MSPs have offered a wide range of services, including systems hosting as well as management and monitoring of critical IT systems and networks, so expanding into Cloud services is a logical next step for them. Hosting and outsourcing MSPs have most likely already deployed a co-location data center and operations center with standard SLAs, and so, in essence, have been providing “Cloud Services” for years. In many cases, the term “Cloud” has simply replaced the term “outsourcing”. In other cases, MSPs may provide only the management and monitoring services and outsource the facilities and equipment to a data center/facilities services provider. It’s your responsibility to interview your Cloud provider and understand just what they own and manage, and what they outsource themselves.

In some cases, contracting with a Cloud provider also means you get a fixed service level agreement (SLA) that covers the protection, recovery and availability of the systems you are using. Different levels of service are offered, and typically the better the SLA, the higher the cost. In most cases, if you are running production applications and data storage at the Cloud facilities, backup and recovery will be charged for separately. You need to assess your application and data criticality and determine the right SLA for each: the more critical the system and data, the better the service level you should request. The most critical systems and data are usually protected by duplexed or clustered servers along with database mirroring, replication and high availability solutions. These environments demand minimal downtime and data loss, supporting the most demanding recovery time objectives (RTO) and recovery point objectives (RPO). Remember, too, that IT organizations can use the Cloud as a replication and failover/high availability target when deploying their own data center solutions. Today, most IT organizations use the Cloud for simple offsite copies or archiving of data for disaster recovery, so it is not really critical to the day-to-day business. But that data copy will instantly become critical when you experience a loss at your data center, especially due to an unexpected disaster such as fire, flood or even simple theft. How do you plan to restore your data if it’s stored at a remote location? Does the Cloud provider offer physical media transport, or will you have to restore the data over the wire? How long will that take, and can you wait?
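To put the “restore over the wire” question in perspective, here is a minimal back-of-envelope sketch in Python. The figures are hypothetical placeholders (10 TB of protected data, a 100 Mbps WAN link, a 24-hour RTO, a 48-hour media courier); you would substitute your own numbers and whatever transport options your provider actually offers.

```python
# Back-of-envelope restore-time estimate: can an over-the-wire restore meet
# your RTO, or is physical media transport from the Cloud provider faster?
# All figures below are hypothetical placeholders; substitute your own.

def restore_hours(data_tb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Hours to pull data_tb terabytes over a link_mbps WAN link.

    efficiency roughly accounts for protocol overhead, VPN encryption
    and contention on the link.
    """
    data_bits = data_tb * 1e12 * 8             # decimal terabytes -> bits
    usable_bps = link_mbps * 1e6 * efficiency  # megabits/s -> usable bits/s
    return data_bits / usable_bps / 3600

if __name__ == "__main__":
    data_tb = 10        # hypothetical size of the protected data set
    link_mbps = 100     # hypothetical WAN bandwidth available for the restore
    rto_hours = 24      # hypothetical recovery time objective from the SLA
    courier_hours = 48  # hypothetical time for the provider to ship and load media

    wire = restore_hours(data_tb, link_mbps)
    print(f"Restore over the wire: ~{wire:.0f} hours")
    print(f"Physical media transport: ~{courier_hours} hours")
    for label, hours in (("over-the-wire", wire), ("media", float(courier_hours))):
        verdict = "meets" if hours <= rto_hours else "misses"
        print(f"  {label} restore {verdict} the {rto_hours}-hour RTO")
```

Even rough numbers like these show why the question has to be asked up front: a multi-terabyte restore over a modest WAN link can run to days, which can make physical media transport, or a higher-bandwidth link, a necessary part of the recovery plan.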

There are a number of things to consider and questions to answer when adding Cloud services as an extension of the data center, including: How will using the Cloud affect application performance, especially for client/server-type applications? Is transferring large volumes of data and databases over the wire for offsite storage even feasible, and how long will it take? What about data security across the wire? Is a Virtual Private Network (VPN) required for encryption and to prevent hacking? What about the security of your data once it’s stored at the Cloud provider’s facility? What physical security and IT security technologies are deployed? Are your applications even designed to leverage the Cloud? Can you perform remote deployment, management, maintenance and reporting? What service level does the provider offer for accessibility and availability of its own data center resources? What is its DR strategy and plan? Should you have multiple offsite storage locations in case of regional disasters like snow storms, hurricanes and earthquakes? Most backup, archiving, replication and high availability solutions offer remote deployment, management and maintenance and can leverage both on-premises and off-premises resources, but having “Cloud connectors” to Public Cloud services will make integration faster and easier.
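On the question of securing data across the wire and at rest in the provider’s facility, one common approach is to encrypt the backup before it ever leaves your data center, so the offsite copy is unreadable without a key you keep in-house. The sketch below assumes Python and the third-party cryptography package; the file names are hypothetical placeholders, and it illustrates the general idea rather than any particular provider’s offering.

```python
# Minimal sketch: encrypt a backup archive locally before it crosses the wire,
# so the copy stored at the Cloud provider is opaque without your key.
# Assumes the third-party 'cryptography' package (pip install cryptography);
# file names and paths below are hypothetical placeholders.

from cryptography.fernet import Fernet

def encrypt_archive(src_path: str, dst_path: str, key: bytes) -> None:
    """Encrypt src_path with the given Fernet key and write it to dst_path."""
    fernet = Fernet(key)
    with open(src_path, "rb") as src:
        ciphertext = fernet.encrypt(src.read())  # reads the whole file into memory
    with open(dst_path, "wb") as dst:
        dst.write(ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()  # keep this key onsite, never with the Cloud copy
    encrypt_archive("nightly_backup.tar", "nightly_backup.tar.enc", key)
    # The .enc file can now be shipped offsite; recovering the data requires
    # both the offsite copy and the locally held key.
```

Client-side encryption like this complements, rather than replaces, a VPN for the transfer itself and the physical and IT security controls at the provider’s facility.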

As you can see, there are many questions to ask and a lot of testing ahead before the mass market adopts the Cloud as a primary IT resource. But it’s coming sooner rather than later.