The last several years have seen a move towards the buzzword-compliant world of cloud computing. Cloud computing is a somewhat nebulous term that has been used to describe all kinds of web-based services, from mail to storage to hosting.
At NewCity, one of the ways we're leveraging the cloud is by using the services of virtual server providers. This has allowed us to move from the monolithic model of running physical web servers in-house to a much nimbler model in which we can create servers, resize them, and test and deploy new configurations as needed, without sacrificing server power. It also makes a much larger resource pool available to bring to bear on the day-to-day issues and challenges that come up when working for clients on the web. By abstracting the servers from the hardware behind them, we are able to realize a number of benefits:
Redundancy

Virtual servers are spread across many, and in some cases thousands, of computers. Just as RAID employs many hard drives acting as one, allowing a single drive to fail without any data loss, virtualization piggybacks the virtual server on the resources of many real servers working in tandem. This means that if one or more of these servers experiences problems such as power fluctuations or hardware failures, the effect on the virtual server will be negligible. The malfunctioning machines can be pulled from the resource pool and replaced without interrupting the operation of the virtual server. This process is itself automated, resulting in a very good defense against unforeseen hardware issues.
Server images

Deployment of a virtual server starts with a server image. Providers typically have a wide variety of images available; Rackspace, for instance, currently offers eight different Linux images and has also started offering Windows Server images. If there is a standard configuration that a company requires its servers to run, it can be configured once and saved out as a master image. New server instances can then be created from this copy, making it easy to maintain consistency across servers and to ensure a baseline security or application configuration.
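The configure-once, clone-many workflow above can be sketched in a few lines of Python. This is only an illustration of the idea, not any provider's actual API; the `ServerImage` class and `create_instance` function are hypothetical names invented for this example.

```python
import copy

class ServerImage:
    """A saved master configuration that new server instances are cloned from.
    (Hypothetical class for illustration, not a real provider API.)"""
    def __init__(self, name, packages, config):
        self.name = name
        self.packages = packages   # e.g. installed software
        self.config = config       # e.g. baseline security settings

def create_instance(image, hostname):
    """Stamp out a new server from a master image, so every instance
    starts from the same vetted baseline."""
    return {
        "hostname": hostname,
        "packages": copy.deepcopy(image.packages),
        "config": copy.deepcopy(image.config),
    }

# Configure once and save as a master image...
base = ServerImage("lamp-baseline",
                   ["apache2", "mysql", "php5"],
                   {"firewall": "strict", "ssh": "keys-only"})

# ...then create consistent servers from that copy as needed.
web1 = create_instance(base, "web1")
web2 = create_instance(base, "web2")
```

Because each instance gets its own deep copy of the image's settings, later changes to one server can't silently drift the baseline for the others.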
Snapshots

No more tapes or slow-moving backup scripts! The ease of taking server snapshots (that is, full copies of the server and its state at a given moment) provides us with a handy way to recover from software crashes or other corruption. Since the servers run as virtual machines and not as physical servers, backing them up is as simple as taking a snapshot and saving the image securely (often to an equally redundant storage service). If a virtualized server malfunctions and cannot be recovered manually, it can easily be replaced with a working version of itself from an earlier time. While snapshots should not take the place of regular file and data backups from within an image, they provide an additional defense against data loss that wasn't easily achievable before.
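The snapshot-and-restore cycle is simple enough to model directly. The sketch below is a toy, with server state reduced to a dictionary of files; the `VirtualServer` class is hypothetical and stands in for whatever snapshot mechanism a provider exposes.

```python
import copy

class VirtualServer:
    """Toy model of a VM whose whole state can be copied at once.
    (Hypothetical, for illustration only.)"""
    def __init__(self, files):
        self.files = files  # the server's state, radically simplified

    def snapshot(self):
        """Full copy of the server's state at this moment."""
        return copy.deepcopy(self.files)

    def restore(self, snap):
        """Replace the server with a working version of itself."""
        self.files = copy.deepcopy(snap)

server = VirtualServer({"/etc/app.conf": "workers=4"})
good = server.snapshot()   # saved securely, off the server itself

# Something goes wrong...
server.files["/etc/app.conf"] = "corrupted!!"

# ...and recovery is a single restore rather than a rebuild.
server.restore(good)
```

Note that the snapshot is an independent copy: corrupting the live server after the snapshot is taken leaves the saved state untouched, which is exactly what makes the rollback trustworthy.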
Easy scaling

Virtualizing the server also affords the administrator a good buffer against running out of drive space or memory. If a limit is reached, the server can easily be scaled up by granting it additional resources from the cloud. Because a server is defined by essentially cordoning off resources from a pool of physical servers, changing its size becomes trivial. After the resource allocation is updated, the provider recalculates the hourly charge for the server, and within minutes it's as if the server is running on new, expanded hardware.
Speed and uptime
Servers used for hosting typically sit in a layer on top of enormous server clusters at large data centers. The companies that run these data centers are usually hosting providers that have their own well-vetted redundancy features in place. Additionally, they often have agreements in place with utilities to receive redundant bandwidth, power and cooling as needed. These data centers are usually located at points where telecom companies have established large presences, placing the servers very close to the main arteries of the internet. Finally, they are staffed 24/7 by teams that specialize in maintaining large-scale computing services, which places the issues of uptime, hardware maintenance and network troubleshooting into the hands of a dedicated staff.
Utility pricing

Before virtualization became widespread, companies would often buy their own servers, accruing many thousands of dollars in up-front costs. If the servers were not fully utilized over their lifetime, the company would have paid a large amount and invested many man-hours to support potentially inexpensive services. With virtualization, the pricing model essentially changes to that of a utility: you pay for what you use, and you use only what you need.
Under this new model, one need only pay for a server while it's up and running; when the need for the server has passed, it can be turned off and deleted. If the server is running but isn't being fully utilized, the resources available to it can easily be scaled down to reduce costs. These changes can be made at any time and very easily; some providers even offer apps that allow for server deployment or resizing from a mobile device!
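The arithmetic behind the utility model is worth making concrete. The hourly rate below is invented for illustration and is not any provider's actual pricing; the comparison is only meant to show the shape of the cost curve.

```python
HOURLY_RATE = 0.10  # assumed rate in dollars/hour, illustrative only

def utility_cost(hours_running, hourly_rate=HOURLY_RATE):
    """Under the utility model you pay only for the hours
    the server actually exists."""
    return hours_running * hourly_rate

# A server needed only for a two-week promotion, running around the clock:
hours = 14 * 24            # 336 hours
cost = utility_cost(hours)

# Compare that to thousands of dollars up front for a dedicated
# machine that would sit mostly idle for the rest of its life.
```

Running a short-lived server for its whole useful window costs tens of dollars at this assumed rate, while the dedicated-hardware alternative front-loads the full purchase price regardless of utilization.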
Flexibility

The ephemeral nature of these servers makes it very easy to try out new configurations or run short-lived server instances at low cost compared to a dedicated server. Need to try out a new Apache configuration without disrupting your production site? Duplicate your existing server and test away. Virtualization makes setting up and running development and staging environments easy. Need to host a high-volume site for just a few days to support a short-term promotion? Spin up a large server instance for that period and take it down when you're done.
Virtualization has certainly changed the way we do things on the server side here at NewCity. It has allowed us to test new ways of configuring sites, services, and network layouts, has given us a richer variety of resources to pick from in addressing our clients' needs, and has allowed us to offer a much wider array of hosting options to satisfy ever-changing project requirements.