Tag: openstack

Ceph for Cinder in TripleO

A wrap-up on the status of TripleO's Cinder HA spec. First, a link to the cinder-ha blueprint, where you can find further links to the actual spec (under review) and to the code changes (also still under review). The intent of the blueprint is for TripleO deployments to keep Cinder volumes available and Cinder operational in case of failure of any node.

This said, should $subject sound interesting to you, beware that the code still …


TripleO vs OpenStack HA

One of the topics discussed during the TripleO mid-cycle meetup in RDU was our status in relation to deploying OpenStack in a highly available manner. This has been worked on for some time and has recently reached a usable state.

The majority of the complications seems to come from two factors: 1) we need to guarantee availability of the external services too, like the database and the message broker, which aren't exactly designed for a scale-out scenario; 2) despite …


OpenStack Glance - Use Swift as backend

On OpenStack again. Glance is the component in charge of hosting the images (and image snapshots) to be cloned for the ephemeral instances. Images are usually just some large files, so it makes perfect sense to store such objects in Swift (an object store)!

As usual, some assumptions before we start:

  • you're familiar with the general OpenStack architecture
  • you already have some Glance image node configured and working as expected

This said …
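
As a taste of what pointing Glance at Swift looks like, here is a minimal glance-api.conf sketch; it assumes Grizzly-era option names, and the Keystone endpoint, tenant, user and password are placeholders:

    # glance-api.conf -- minimal Swift backend sketch (placeholder values)
    default_store = swift
    # Keystone endpoint used to authenticate against Swift
    swift_store_auth_address = http://keystone.example.com:5000/v2.0/
    # tenant:user and password owning the images container
    swift_store_user = services:glance
    swift_store_key = GLANCE_SWIFT_PASSWORD
    # container holding the image objects; create it on first upload
    swift_store_container = glance
    swift_store_create_container_on_put = True

After restarting glance-api, newly uploaded images should land in the configured Swift container rather than on the local filesystem.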


OpenStack Cinder - Configure multiple backends

Following my first post of the series, which discussed how to scale OpenStack Cinder to multiple nodes, with this one I want to cover the configuration and usage of the multi-backend feature which landed in Cinder with the Grizzly release.

This feature allows you to configure a single volume node for use with more than one backend driver. You can also find the few configuration bits needed in the OpenStack Block Storage documentation. That makes …
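
To give an idea of those configuration bits, a minimal cinder.conf sketch with two LVM backends on the same volume node might look like the following; it assumes Grizzly-era option names, and the backend names and volume groups are made up:

    # cinder.conf -- two LVM backends served by a single volume node
    [DEFAULT]
    enabled_backends = lvm-1,lvm-2

    [lvm-1]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_group = cinder-volumes-1
    volume_backend_name = LVM_iSCSI

    [lvm-2]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_group = cinder-volumes-2
    volume_backend_name = LVM_iSCSI_b

Volumes are then dispatched to a specific backend via volume types, by giving a type a volume_backend_name extra-spec matching one of the names above.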


OpenStack Cinder - Add more volume nodes

With this being the first of a short series, I'd like to publish some articles intended to cover the steps required to configure Cinder (the OpenStack block storage service) in a mid/large deployment scenario. The idea is to discuss at least three topics: how to scale the service by adding more volume nodes; how to ensure high availability of the API and Scheduler sub-services; and how to leverage the multi-backend feature which landed in Grizzly.

I'm starting with this post …
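
As a rough idea of what adding a volume node amounts to: each additional node mostly just points at the same database and message broker as the existing one and exposes its own storage. A minimal cinder.conf sketch for an extra node, assuming Grizzly-era option names and placeholder hosts and passwords:

    # cinder.conf on an additional volume node (placeholder hosts/credentials)
    [DEFAULT]
    # shared Cinder database and message broker, same as on the first node
    sql_connection = mysql://cinder:CINDER_DB_PASSWORD@db.example.com/cinder
    rabbit_host = rabbit.example.com
    rabbit_password = RABBIT_PASSWORD
    # local storage exported by this node
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_group = cinder-volumes
    # IP of this node, used to build the iSCSI target addresses handed to Nova
    iscsi_ip_address = 192.0.2.11

Once cinder-volume is started on the new node, it reports its capabilities to the scheduler and should start receiving volume creation requests.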
