One of the topics discussed during the TripleO mid-cycle meetup in RDU was the status of deploying OpenStack in a highly available manner. This has been worked on for some time and has recently reached a usable state.
Most of the complications stem from two factors: 1) we need to guarantee the availability of external services too, such as the database and the message broker, which aren't exactly designed for a scale-out scenario; 2) although the OpenStack services are designed around a scale-out concept, while attempting to achieve that in TripleO we spotted a number of weak spots, some of which could be worked around, while others still need changes in the core services. You're encouraged to try what we have available today and help with the rest.
export OVERCLOUD_CONTROLSCALE=3
source scripts/devtest_variables.sh
...
Don't forget this is only tested on a few distros for now; I'd suggest Fedora 20.
On the controller nodes, MariaDB with Galera (on Fedora) provides a reliable SQL database. There is still some work in progress to make sure the Galera cluster can be restarted correctly should all the controllers go down at the same time but, for single-node failures, this should be safe to use.
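For a rough idea of what the Galera side involves, the MariaDB configuration boils down to a handful of wsrep settings. This is only a minimal sketch with placeholder cluster and node names, not the configuration TripleO actually generates:

```ini
# Hypothetical minimal Galera section; cluster name and node
# addresses are placeholders
[galera]
wsrep_on = ON
wsrep_provider = /usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name = "tripleo"
# All three controllers join the same cluster
wsrep_cluster_address = "gcomm://controller0,controller1,controller2"
wsrep_sst_method = rsync
# Required by Galera for consistent replication
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
```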
RabbitMQ nodes are clustered and load-balanced (via HAProxy), with queues replicated across the nodes.
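The HAProxy side can be pictured with a stanza along these lines; again a sketch only, with placeholder addresses rather than the actual generated configuration:

```
listen rabbitmq
  # Placeholder VIP; clients connect here instead of to a single broker
  bind 192.0.2.10:5672
  mode tcp
  balance roundrobin
  option tcpka
  server controller0 192.0.2.11:5672 check inter 2000 rise 2 fall 5
  server controller1 192.0.2.12:5672 check inter 2000 rise 2 fall 5
  server controller2 192.0.2.13:5672 check inter 2000 rise 2 fall 5
```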
As for the OpenStack services, these are configured in a load-balanced manner (again, using HAProxy) except for those cases where this wouldn't work, notably the Neutron L3 agent and the Ceilometer Central agent. These are instead managed by Pacemaker, and a single instance is expected to be running at all times. Cinder remains uncovered, as volumes would require shared storage for proper HA; a spec has been proposed for this though.
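The Pacemaker part of that can be sketched in crmsh-like syntax; the resource name here is illustrative, not the exact resource TripleO creates:

```
# Hypothetical primitive: Pacemaker keeps exactly one instance running,
# restarting it on a surviving controller if its host goes down
primitive neutron-l3-agent systemd:neutron-l3-agent \
    op monitor interval=30s timeout=60s
# No clone directive: a lone primitive is active on a single node at a time
```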
Also, behind the scenes, the Heat template language addon shipped as merge.py and included in tripleo-heat-templates, which allows, for example, scaling of resource definitions, is going to be removed and replaced with code living entirely in Heat.
And there is more, so once you've tried it, join us on #tripleo @ freenode for the real fun!