“CAP Theory” should have been “PAC Theory”

October 8, 2010

CAP obviously sounds a lot better, as it maps to a real word; that probably got it remembered.

However, I’m guessing the name has also helped keep the concept from being understood.  The problem is that the “P” comes last.

CAP: Consistency, Availability, Partitions.  Consistency == Good.  Availability == Good.  Partitions == Bad.

So we know we want C and A, but we don’t want P.  When we talk about CAP, we want to talk about how we want C and A, and let’s try to get around the P.

Except that the entire principle behind the “CAP Theory” is that Partitions are a real event that can’t be avoided.  Have 100 nodes?  Some will fail, and you will have partitions between them.  Have cables or other media between nodes?  Some of those will fail, and nodes will have partitions between them.

Partitions can’t be avoided.  Have a single node?  It will fail, and you will have a partition between you and the resources it provides.

Perhaps had CAP been called PAC, then Partitions would have been front and center:

Due to Partitions, you must choose to optimize for Consistency or Availability.

The critical thing to understand is that this is not an abstract theory; it is set theory applied to reality.  If you have nodes that can become parted (going down, losing connectivity), and this cannot be avoided in reality, then you have to choose whether the remaining nodes operate in a “Maximize for Consistency” or a “Maximize for Availability” mode.

If you choose to Maximize for Consistency, you may need to fail to respond, causing non-Availability in the service, because in a system with partitions you cannot guarantee that the data you respond with is still accurate.  Why can it not be guaranteed to be accurate?  Because there is a partition, and it cannot be known what is on the other side of it.  Not being able to guarantee the accuracy of the reported data means it will not be Consistent, so the appropriate response to queries is to fail, so that callers do not receive inconsistent data.  You have traded Availability, as you are now down, for Consistency.

If you choose to Maximize for Availability, you will assemble a quorum of data, or make a best guess as to the best data, and then return it.  Even with a network partition, requests can still be served with the best possible data.  But is it always the most accurate data?  No; this cannot be known, because there is a partition in the system and not all versions of the data are known.  Quorums of nodes exist to try to deal with this, but given the complex ways partitions can occur, they cannot be guaranteed to be accurate.  Perhaps they can be “accurate enough”, which again means that Consistency has been given up for Availability.
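To make the trade concrete, here is a minimal sketch (illustrative only, not any real database) of a replica that assumes it is partitioned when it has not heard from its peers recently, and then answers according to which property it is maximizing:

```python
import time

# Toy illustration of the trade-off above; the names here are hypothetical.
# A replica holds a local copy of one value plus the time it last heard
# from its peers.

class Replica:
    def __init__(self, mode, peer_timeout=5.0):
        self.mode = mode                  # "consistency" or "availability"
        self.value = None
        self.last_peer_contact = time.time()
        self.peer_timeout = peer_timeout

    def partitioned(self):
        # If we have not heard from peers recently, assume a partition.
        return time.time() - self.last_peer_contact > self.peer_timeout

    def read(self):
        if self.partitioned() and self.mode == "consistency":
            # CP choice: refuse to answer rather than risk stale data.
            raise RuntimeError("unavailable: cannot confirm latest value")
        # AP choice: answer with the best local copy, which may be stale.
        return self.value
```

In “consistency” mode the caller gets an error during a partition; in “availability” mode the caller always gets an answer, but it may be stale.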

Often, giving up Consistency for Availability is a good choice.  For things like message forums, community sites, games, or other systems that deal with non-scarce resources, relaxing the requirement for Consistency is a benefit, because it’s more important that people can use the service, and the data will “catch up” at some point and look pretty consistent.

If you are dealing with scarce resources like money, airplane seat reservations (!), or who will win an election, then Consistency is more important.  There are scarce resources being reserved by the request; being inconsistent in approving requests means the scarce resources will be over-committed, and there will be penalties external to the system to deal with.
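A hypothetical sketch of why that matters: with one seat left, and a partition splitting the reservation service into two sides that cannot see each other’s writes, an availability-first approach lets both sides approve the last seat.

```python
# Hypothetical illustration: one seat left, two partitioned sides.
seats_left_side_a = 1   # side A's local copy of the count
seats_left_side_b = 1   # side B's local copy of the same count

def reserve(local_count):
    # An availability-first node approves if its local copy says "yes".
    if local_count > 0:
        return local_count - 1, True
    return local_count, False

seats_left_side_a, ok_a = reserve(seats_left_side_a)
seats_left_side_b, ok_b = reserve(seats_left_side_b)

# Both sides said yes: when the partition heals, one seat has been sold
# twice, and the over-commit has to be resolved outside the system.
assert ok_a and ok_b
```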

The reality of working with systems has always had this give and take to it.  It is the nature of things not to be all things to all people; they are only what they are.  The CAP theory is just an explanation that you can’t have everything, and since you can’t, here is a clear definition of the choice you have:  Consistency or Availability.

You don’t get to choose not to have Partitions, and that is why the P:A/C theory matters.


remlite on the path to R.E.M.

February 15, 2010

Messing around with names for my projects a bit.  I originally created Red Eye Monitor to manage cloud systems, then I expanded the design to manage multiple cloud vendors and private data centers in an integrated fashion.  Then I made a newer, lighter version called remlite, so that I could have something that takes much less time to update but still understands different services and deployments (prod, staging, QA); it works in a larger environment than the original software, but doesn’t take as much forethought as Red Eye Monitor’s integrated multi-cloud and datacenter design did.

Now remlite has many improvements over REM: specifically, its very cool dynamic script running system, which integrates any scripts being run with monitoring, graphing and alerting, and its equally cool simplification of RRD graphing (which I have always found a pain to set up, but which is now fast and easy).

Because remlite is running so smoothly, and has so many good non-organizational features, I’ve decided to make it the core of the Red Eye Monitor project.  Since remlite runs on YAML files and REM runs on a fairly massive MySQL table structure, I am going to split them up, so you will always be able to run REM off of YAML files for a simple case.
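As a rough sketch of what “running off of YAML files” could look like (the field names here are hypothetical, not the actual remlite schema), a service definition might be loaded like this:

```python
import yaml   # PyYAML

# Hypothetical example of the kind of YAML a file-driven REM could use;
# these field names are illustrative only.
EXAMPLE = """
service: webapp
deployment: staging
instances: 4
monitor:
  - script: check_http.py
    interval: 60
"""

config = yaml.safe_load(EXAMPLE)
print(config["service"], config["instances"])
```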

The cloud infrastructure is going to be broken out so that wrapping Amazon EC2 API calls, creating your own in-house cloud, or connecting to other cloud vendors will sit in its own project.  Temporarily I’m thinking of this as Red Eye Monitor Cloud, which will abstract all things cloudy.  The requesting of instances, storage, load-balanced or external IP addresses, etc., will all be wrapped into the Red Eye Cloud package, which will be stand-alone and will contain a cache for read commands, so that a failure to connect to APIs will still result in expected usage.
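A minimal sketch of that caching idea, assuming a hypothetical provider driver object with a list_instances() call; the point is only the cached fallback for reads when the API cannot be reached:

```python
import json
import os
import time

class CloudWrapper:
    """Hypothetical sketch of a provider-neutral wrapper with a read cache,
    so read calls still return the last known answer if the API is down."""

    def __init__(self, backend, cache_path="/tmp/cloud_cache.json"):
        self.backend = backend            # e.g. an EC2 or in-house driver
        self.cache_path = cache_path

    def list_instances(self):
        try:
            # Live API call; assumed to return JSON-serializable data,
            # such as a list of instance IDs.
            result = self.backend.list_instances()
            with open(self.cache_path, "w") as f:
                json.dump({"time": time.time(), "instances": result}, f)
            return result
        except Exception:
            # API unreachable: fall back to the cached read result.
            if os.path.exists(self.cache_path):
                with open(self.cache_path) as f:
                    return json.load(f)["instances"]
            raise
```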

remlite, which is all the management of interfacing with the cloud, and of defining deployments and services to do so, will become Red Eye Monitor.  This is the core package, and it will depend on Red Eye Monitor Cloud for interfacing with EC2 or any other cloud vendor (including an internal Home Cloud).

The old REM specification, with all its massive data modeling of your physical and virtual environment, will be known as Red Eye Control.  This will be usable as a stand-alone system that can act as the organizational brain for any operations center, without interfacing with Red Eye Monitor.

Finally, all the HTTP and XMLRPC functionality I built into REM is being turned into a standalone or embeddable web server called dropSTAR (Scripts Templates and RPC).  This is a Python-based HTTP/S and XMLRPC server that can run multiple listening thread pools on different ports and maps page paths to a series of scripts, much like the format of remlite scripts, to render data to a web page or an RPC response.
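As a rough illustration of the path-to-scripts idea (this is a toy sketch, not dropSTAR itself; the PAGE_SCRIPTS mapping and the “output” convention are hypothetical):

```python
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import runpy

# Map URL paths to page scripts; each script is expected (by convention
# in this sketch) to set a string named "output".
PAGE_SCRIPTS = {"/status": "pages/status.py"}   # hypothetical mapping

class ScriptHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        script = PAGE_SCRIPTS.get(self.path)
        if script is None:
            self.send_error(404)
            return
        # Execute the page script in its own namespace and collect output.
        result = runpy.run_path(script).get("output", "")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(result.encode("utf-8"))

if __name__ == "__main__":
    # One listening pool on one port; dropSTAR runs several of these.
    ThreadingHTTPServer(("", 8080), ScriptHandler).serve_forever()
```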

So the final package list will look like this:

  • Red Eye Monitor (REM).  Currently “remlite”, this will be a YAML configured system for managing services and deployment in a cloud.
  • Red Eye Cloud.  Required by REM, wraps commands for any number of cloud environments, including your Home Cloud hardware wrappers.  Amazon EC2 is included.  This can be run as a library or stand-alone as well.
  • Red Eye Control.  This is a massive brain for your operational environments.  It will track every piece of hardware, down to its components, and all media connections between components, to give you a complete understanding of your current infrastructure, and it provides a comprehensive dependency graph for alert suppression (see the sketch after this list).  In a standard REM configuration, Red Eye Control will be the source used to create all the YAML files that run Red Eye Monitor.  This separates the brain data from the control scripts, but still allows one to drive the other.
  • dropSTAR.  HTTP and XMLRPC server.  Integrates into any Python program to provide a threaded web server with easy Web 2.0 functionality.  Much work has gone into a system for easily creating dynamic pages and interacting with and updating them from any long-running Python program.  It also works as a drop-in web server, which makes it much easier to throw onto a non-web system to provide insight into what is going on.
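Here is the sketch referenced in the Red Eye Control item above: a hypothetical child-to-parent dependency map, where an alert is suppressed if anything upstream of it is already down.

```python
# Hypothetical dependency-based alert suppression: if an upstream device
# is already down, alerts for anything behind it are noise.
DEPENDS_ON = {                      # child -> parent, illustrative only
    "web01": "switch-a",
    "web02": "switch-a",
    "switch-a": "router-core",
}

def suppressed(host, down_hosts):
    """Return True if some upstream dependency of host is already down."""
    parent = DEPENDS_ON.get(host)
    while parent is not None:
        if parent in down_hosts:
            return True
        parent = DEPENDS_ON.get(parent)
    return False

down = {"switch-a", "web01", "web02"}
alerts = [h for h in down if not suppressed(h, down)]   # -> ["switch-a"]
```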

I may also break out the RRD automatic creation and updating, as this is such a hard thing to get right, and being able to dynamically throw any data into RRDs with minimal specification is very useful.  I have to figure out how to do this still, and I’ll probably start looking at it after these other projects are completed and launched.
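For a sense of what “minimal specification” could mean, here is a hedged sketch that shells out to the rrdtool command line; the step, heartbeat, and archive settings are illustrative defaults, not what REM will necessarily use:

```python
import subprocess

def rrd_update(name, value):
    """Create an RRD on first use and push a value into it."""
    rrd = "%s.rrd" % name
    # If the RRD does not exist yet, create it: one GAUGE data source,
    # a 1-minute step, and a day of 1-minute averages.
    if subprocess.call(["rrdtool", "info", rrd],
                       stdout=subprocess.DEVNULL,
                       stderr=subprocess.DEVNULL) != 0:
        subprocess.check_call([
            "rrdtool", "create", rrd, "--step", "60",
            "DS:value:GAUGE:120:U:U",
            "RRA:AVERAGE:0.5:1:1440",
        ])
    # "N" means "use the current time" for the sample's timestamp.
    subprocess.check_call(["rrdtool", "update", rrd, "N:%s" % value])

rrd_update("load_average", 0.42)
```

The point of the wrapper is that the caller only supplies a metric name and a value; everything else is a default that can be overridden later.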


Utility Computing vs. Cloud Computing

February 7, 2010

I have spent some time thinking about the functional differences between the terms Utility Computing and Cloud Computing, both as I think they are used today and as they could be used to differentiate a different class of service.

I see Utility Computing as a service provider that sells computing instances, computing time slices, networked and “local” storage, computing services (Map Reduce, Key Stores, Message Queue), the network bandwidth needed for this, and ways to reliably target traffic to your site to a single or multiple machines (floating IP address or load balancer).

The way Utility Computing service providers deliver these things gives you details about the instances, the volumes, and the descriptor names for their network services, but the important point is that you are given a label for a real VM instance on real hardware.  You are tracking something that is essentially a fixed service; an EC2 instance gives us its instance ID (i-12345678), and with that we can reference only this one particular assignment of the physical hardware and Xen VM instance.

To contrast this, a Cloud Computing provider would give you an idealized system, and the actual VM instance or real hardware behind it would be forever abstracted.  You would know you simply have a MySQL database with two 200GB network-attached volumes in a RAID 1 configuration, with 32GB of RAM and 20 CPU units, and the Cloud Computing provider gives you a label to the stored concept of this goal, which could presently have an actual instance behind it, or not.  Whether it has a running instance behind the goal depends on its current configuration state, which could change at any time.  The Cloud Computing provider would ensure that a new instance, of the correct specification, is brought back under the goal, so that in a pool of 20 machines, each can have several volumes assigned to their respective device paths, and when a replacement instance is launched, all volumes will be re-attached in their proper places.  The machine’s configuration process is initiated with the knowledge that this machine is part of a pool, and may load only a certain data set (sharding).
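A small sketch of the distinction, with hypothetical field names rather than any provider’s API: the goal’s label is stable, and whatever instance currently backs it is just a replaceable detail that a reconcile step fills back in.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Goal:
    """The stored concept of what should exist; field names are illustrative."""
    label: str                          # stable handle, e.g. "mysql-primary"
    ram_gb: int
    cpu_units: int
    volumes: List[str]                  # volume labels to re-attach
    instance_id: Optional[str] = None   # whatever currently backs the goal

def reconcile(goal, running_instance_ids, launch, attach):
    """If the backing instance is gone, launch a replacement and re-attach
    the goal's volumes; the label itself never changes."""
    if goal.instance_id not in running_instance_ids:
        goal.instance_id = launch(goal.ram_gb, goal.cpu_units)
        for vol in goal.volumes:
            attach(goal.instance_id, vol)
    return goal
```

In the Utility Computing model you keep the instance ID; in the Cloud Computing model you keep the goal, and something like reconcile() keeps reality matched to it.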

I see the difference between having a label for a machine instance and having a label for the goal of what you want any instances behind that label to perform, and I believe this is the difference between a useful labeling of Utility Computing and Cloud Computing.  I think this underlines my feeling that, at present, Amazon’s EC2 service is a Utility Computing service, and it is only starting to become a Cloud Computing service with the new RDS (Relational Database Service), which allows you to specify a goal for a database system, with its own backup and restore automation.  I haven’t launched one yet to see whether this offering still leans more towards Utility or delivers the Cloud abstraction and management.

Presently, if you want Cloud Computing, you have to implement it yourself, or pay someone to help implement it for you, so that your computing goals remain functioning even as the underlying hardware fails, is replaced (perhaps with hardware in a different data center or region), and is re-configured so that the goal can be picked up by a new set of hardware and still serve the same function.  I believe that is why Cloud Computing creates so much interest, and it appears poised to become a foundational pillar of the next wave of computing.


remlite: The Light and Fluffy Red Eye Monitor

January 26, 2010

The full install of Red Eye Monitor (REM) is a major project and will take a few more weeks to complete.  To get a jump on doing cloud management centrally, I am pulling a lot of pieces out of REM and creating remlite.  This will be a much smaller implementation of a cloud manager, not backed by a relational database and without the Total Systems Automation pieces.

Here’s the design overview for remlite:

http://redeyemon.sourceforge.net/remlite/