Building a better Cloud

Thoughts on Building a better Cloud

Over the past few weeks I have been asked several times how I would set up a new infrastructure using the Cloud. While this is a bit like asking someone to design a rain cloud (you cannot design a rain cloud, but you can seed one), building cloud infrastructures is something I believe I can speak intelligently about. I also know that cloud design is something that can always be done better. In this writeup, we will develop the idea of a base infrastructure that will serve as the foundation for a better Cloud.

When you are building a Cloud infrastructure, there has to be planning. People believe that the "Cloud" is this mythical, indestructible force of technology, but the fact is the "Cloud" is nothing more than virtualization located in a datacenter far, far away. The real strength of ANY cloud infrastructure resides in its ability to scale horizontally and rapidly. The Cloud, when leveraged correctly, can yield the computing power of most major organizations with dedicated IT staff. Imagine having the computing power of the IBM Sequoia on demand.

When the Cloud is used right there are a lot of benefits. Unfortunately, these same benefits come with a level of complexity that is simply unavoidable. However, complexity does not have to be scary, and I hope that my visual representation of what I believe to be a well-thought-out cloud setup is easy to understand, taking the scary out of the complexity.


Building a better Cloud Infrastructure

This first graphic shows the optimal workflow for a user when the user makes a request for content.

Graphic Explanation

In this graphic you can see that when a user requests content, the packet uses standard channels to return content quickly and efficiently. It could be argued that having more steps in the process, in this case one more step, would slow down the delivery of the content; however, that argument does not hold up. Adding the Load Balancer doubles the overall throughput for the connection. This is because the throughput of a Cloud Server is based on the size of the server, at least when used in conjunction with Rackspace Cloud Servers: the network speeds are solely dependent on the amount of RAM allocated to your instance. For example, a 512 MB instance has a 20 Megabit per second public interface and a 40 Megabit per second private interface. Here is a complete breakdown of network interface speeds for Rackspace Cloud Servers.

  Network Interface Speeds

  Instance RAM     Public Net     Private Net
  ============     ==========     ===========
  512 MB           20 Mbps        40 Mbps
  1024 MB          30 Mbps        60 Mbps
  2048 MB          60 Mbps        120 Mbps
  4096 MB          100 Mbps       200 Mbps
  8192 MB          150 Mbps       300 Mbps
  15872 MB         200 Mbps       400 Mbps
  30720 MB         300 Mbps       600 Mbps

As illustrated by the previous chart, the private network has higher throughput available by default. Additionally, the pipe that you will have access to while using the Load Balancer is a 10 Gbps pipe, which will provide you the ability to scale a single load balancer almost infinitely.
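
To make the RAM-to-throughput relationship easy to reuse, here is a minimal Python sketch that simply encodes the chart above as a lookup table; the numbers come straight from the chart, while the function and its name are purely illustrative.

  # Rackspace Cloud Server interface speeds by instance RAM (MB), taken
  # from the chart above. Values are (public Mbps, private Mbps).
  INTERFACE_SPEEDS = {
      512:   (20, 40),
      1024:  (30, 60),
      2048:  (60, 120),
      4096:  (100, 200),
      8192:  (150, 300),
      15872: (200, 400),
      30720: (300, 600),
  }

  def interface_speeds(ram_mb):
      """Return (public_mbps, private_mbps) for a given instance size."""
      if ram_mb not in INTERFACE_SPEEDS:
          raise ValueError("No such instance size: %s MB" % ram_mb)
      return INTERFACE_SPEEDS[ram_mb]

  # The private interface is always double the public interface.
  public, private = interface_speeds(512)
  print("512 MB instance: %d Mbps public, %d Mbps private" % (public, private))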

When we introduce a Load Balancer into the overall network infrastructure we are creating a more efficient network. The user connects to the content through the public IP address of the Load Balancer, either via DNS or a direct IP connection, and the Load Balancer then distributes the connections to the nodes behind it. In this example, the nodes are all connected to the load balancer over the internal network interface.
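
As a rough sketch of how this could be wired up programmatically, here is a minimal example using the pyrax SDK (Rackspace's Python bindings). The credentials, names, and node addresses are placeholders, and this assumes the standard pyrax cloud_loadbalancers interface rather than anything specific to my setup.

  import pyrax

  # Authenticate against the Rackspace Cloud (credentials are placeholders).
  pyrax.set_setting("identity_type", "rackspace")
  pyrax.set_credentials("my_username", "my_api_key")

  clb = pyrax.cloud_loadbalancers

  # Back-end web nodes, addressed by their private (ServiceNet) IPs so that
  # traffic between the Load Balancer and the nodes stays on the internal
  # network interface. The addresses here are placeholders.
  nodes = [
      clb.Node(address="10.180.1.10", port=80, condition="ENABLED"),
      clb.Node(address="10.180.1.11", port=80, condition="ENABLED"),
  ]

  # A public virtual IP that users reach via DNS or a direct IP connection.
  vip = clb.VirtualIP(type="PUBLIC")

  # Create an HTTP Load Balancer in front of the two nodes.
  lb = clb.create("web-lb", port=80, protocol="HTTP",
                  nodes=nodes, virtual_ips=[vip])
  print(lb.virtual_ips)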

In this graphic you can see the benefits of using a load balancer in almost any web-facing setup.

Limitations
  • One limitation of the Load Balancers is that each is restricted to a single port or protocol. This means that if you have multiple ports or protocols that you want to balance, you will need multiple load balancers. However, in the Rackspace Cloud this limitation has been accounted for: to overcome the single-protocol issue, a Load Balancer has the ability to share its IP address with other Load Balancers, as sketched below.
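
Here is a rough sketch of that shared-IP approach, continuing the pyrax example above. It assumes that pyrax will accept an existing balancer's virtual IPs when creating a second balancer; the balancer names and ports are placeholders.

  import pyrax

  pyrax.set_setting("identity_type", "rackspace")
  pyrax.set_credentials("my_username", "my_api_key")
  clb = pyrax.cloud_loadbalancers

  # Find the existing HTTP balancer (the name is a placeholder).
  http_lb = [b for b in clb.list() if b.name == "web-lb"][0]

  # Reuse its back-end nodes, but point at the nodes' HTTPS port.
  nodes = [clb.Node(address=n.address, port=443, condition="ENABLED")
           for n in http_lb.nodes]

  # Create a second balancer for HTTPS that shares the first balancer's
  # public IP address instead of requesting a new one (assumption: pyrax
  # lets you pass an existing balancer's virtual_ips here).
  https_lb = clb.create("web-lb-ssl", port=443, protocol="HTTPS",
                        nodes=nodes, virtual_ips=http_lb.virtual_ips)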

Prepare for Disaster

In this setup, we are able to create an infrastructure that is fast, efficient and scalable. Beyond all of the throughput benefits found when using a Load Balancer, one of the best parts of this type of setup is the ability to fail over. A Load Balancer can provide a safety net if there are ever issues with the node or nodes behind it. All that needs to be done to recover from disaster is to provision a new node behind the Load Balancer.
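
To illustrate, here is a minimal pyrax-based sketch of bringing a replacement node into an existing Load Balancer. The server name, image and flavor IDs, and balancer name are all placeholders, and this assumes pyrax's cloudservers and cloud_loadbalancers interfaces.

  import pyrax

  pyrax.set_setting("identity_type", "rackspace")
  pyrax.set_credentials("my_username", "my_api_key")
  cs = pyrax.cloudservers
  clb = pyrax.cloud_loadbalancers

  # Build a replacement Cloud Server (image and flavor IDs are placeholders).
  server = cs.servers.create("web03", "placeholder-image-id",
                             "placeholder-flavor-id")
  server = pyrax.utils.wait_for_build(server)

  # Attach the new node to the existing balancer by its private address so
  # it immediately starts taking traffic alongside (or in place of) the
  # failed node.
  lb = [b for b in clb.list() if b.name == "web-lb"][0]
  new_node = clb.Node(address=server.networks["private"][0], port=80,
                      condition="ENABLED")
  lb.add_nodes([new_node])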

How I do it

You can also prepare for any type of disaster by provisioning a failover node, which could reside anywhere you want it to. In my case, I have a failover node in the DFW datacenter that would serve basic content should there ever be an issue with the load balancer setup provisioned in my primary datacenter in ORD.
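
Here is a minimal sketch of one way such a failover check could work; the endpoint URL is a placeholder, the failover action is left as a stub, and this is an illustration rather than the exact tooling I run.

  import time
  import requests  # assumed available; any HTTP client would do

  PRIMARY = "http://www.example.com/health"  # placeholder for the ORD balancer
  CHECK_INTERVAL = 30                        # seconds between checks

  def primary_is_up():
      """Return True when the primary load balancer answers its health check."""
      try:
          return requests.get(PRIMARY, timeout=5).status_code == 200
      except requests.RequestException:
          return False

  def fail_over():
      """Placeholder: repoint DNS (or send an alert) at the DFW failover node."""
      print("Primary in ORD is down -- directing traffic to the DFW failover node")

  while True:
      if not primary_is_up():
          fail_over()
      time.sleep(CHECK_INTERVAL)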


Final Thoughts

When you are building any infrastructure, planning should be done long before deployment, and when you are building a Cloud infrastructure, the planning should be done around load balancing across multiple nodes. Having a high-throughput network device in your infrastructure will allow you to scale to meet demand while also allowing for quick recovery from any type of disaster. The fact remains that the Cloud is not some mythical, indestructible force of technology, but rather a force to be reckoned with when used properly. Designing your infrastructure around Load Balancing should give you the ability to compete against almost anything else, scale to meet or exceed demand, and fail over if needed.

I would recommend that you review the offerings of Rackspace Cloud Servers as well as the technical specifications of the Cloud Load Balancers.

Please drop me a line if you have any questions.
