To carry on from my last post, let's see how a high-speed cloud can be viewed. The high-speed cloud can be broken down several ways, but we are going to focus on five pieces: Compute, RAM, Network, Local Disk and Remote Disk. So, let's break it down:
- Compute is easy: it’s your CPU cores. At one time we would just say CPUs, but nowadays we know the cores are everything.
- RAM is RAM. The faster you can get it, the better off you are.
- Networking is HUGE! If you are building your cloud in a corporate environment, you should be taking advantage of the fastest network speeds available. If you have 10 Gb Ethernet connections, then you should be using a few of them for every Compute node.
- Local Disk is important also. Not only do you need it for your OS, but also for direct storage. Local storage is handled differently, so you should be aware of that. If you have SSDs or 15K RPM disks spinning in the Compute nodes, that will give you a great start for simple storage, as long as you watch the sizes of the volumes you create.
- Remote disk can be a corporation’s saving grace. Local storage can fill up fast, and you are limited to the number of IOPS your disks can spit out. Going with a remote flash/SSD system can get you upwards of 500K IOPS. WAY faster than local disk.
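To put the local-versus-remote IOPS gap in perspective, here is a quick back-of-envelope calculation. The 500K figure comes from above; the per-disk number for a 15K RPM drive is a rough, commonly cited estimate, not a measurement of any specific hardware:

```python
# Back-of-envelope comparison: local spinning disks vs. a remote flash array.
IOPS_PER_15K_DISK = 180      # assumed: a 15K RPM SAS disk does roughly 175-210 IOPS
REMOTE_FLASH_IOPS = 500_000  # the remote flash/SSD figure quoted above

def local_array_iops(disk_count: int) -> int:
    """Aggregate IOPS of a simple local array (ignores RAID write penalties)."""
    return disk_count * IOPS_PER_15K_DISK

# Even a node packed with eight 15K disks is orders of magnitude behind:
local = local_array_iops(8)
ratio = REMOTE_FLASH_IOPS / local
print(f"local: {local} IOPS; remote flash is roughly {ratio:.0f}x faster")
```

Even being generous with the per-disk numbers, a full shelf of spinning disks doesn’t come close to one remote flash system.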
So you start assembling your cloud. You take several Compute nodes, and you load them up with CPU cores, RAM and high-speed local disks. Then you have your Compute nodes attach to the remote disks (or subsystems) to create and provide the big, fast chunks of storage.
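As a sketch of what "loading up" the nodes looks like on paper, here is a minimal way to describe the cluster and total its capacity. The node names and spec numbers are made-up examples, not a sizing recommendation:

```python
# Minimal sketch: describe each Compute node and total the cluster capacity.
# All specs below are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class ComputeNode:
    name: str
    cores: int
    ram_gb: int
    local_ssd_gb: int
    nics_10g: int  # number of 10 Gb Ethernet links on the node

nodes = [
    ComputeNode("node01", cores=32, ram_gb=256, local_ssd_gb=1600, nics_10g=2),
    ComputeNode("node02", cores=32, ram_gb=256, local_ssd_gb=1600, nics_10g=2),
    ComputeNode("node03", cores=32, ram_gb=256, local_ssd_gb=1600, nics_10g=2),
]

total_cores = sum(n.cores for n in nodes)
total_ram = sum(n.ram_gb for n in nodes)
print(f"cluster: {total_cores} cores, {total_ram} GB RAM across {len(nodes)} nodes")
```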
Now, when you send your high-speed applications to the cloud, you separate the application servers and database servers onto different Compute nodes, so each node can focus on running one task instead of hundreds.
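That separation rule can be sketched as a tiny placement check: each node gets dedicated to one role, and a request to put a second, different role on it is refused. Node and role names here are purely illustrative:

```python
# Sketch: keep app servers and database servers on separate Compute nodes.
placements: dict[str, str] = {}  # node name -> the one role it is dedicated to

def place(node: str, role: str) -> bool:
    """Dedicate a node to one role; reject a second, different role."""
    if placements.setdefault(node, role) != role:
        return False  # node already serves another role
    return True

print(place("node01", "app-server"))   # dedicates node01 to app serving
print(place("node02", "db-server"))    # dedicates node02 to the database
print(place("node02", "app-server"))   # refused: node02 is the DB node
```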
Can you add other server instances to the Compute nodes? Yes, but make sure you do some system tests first, so you are not using too much horsepower for one application and starving another.
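The "test before you co-locate" idea boils down to a simple headroom check: measure what each instance actually demands, then see whether the node can take another one without starving what is already there. The budget and load numbers below are invented for illustration:

```python
# Minimal headroom check before adding another instance to a Compute node.
CPU_BUDGET = 0.80  # assumed policy: leave 20% headroom so nothing gets starved

def fits(current_load: float, new_instance_load: float) -> bool:
    """True if the node stays under its CPU budget after adding the instance."""
    return current_load + new_instance_load <= CPU_BUDGET

# A node already at 50% CPU can take a 25% instance, but not a 40% one:
print(fits(0.50, 0.25))  # True
print(fits(0.50, 0.40))  # False
```

The same check applies to RAM, disk IOPS and network bandwidth; CPU is just the easiest one to reason about first.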