This is a part of a series of posts on the details of how Nova supports live upgrades. It focuses on a very important layer that plays several roles in the system, providing a versioned, RPC- and database-independent facade for our data. Originally incubated in Nova, the versioned object code has now been spun out into an Oslo library for general consumption, called oslo.versionedobjects.
As discussed in the post on RPC versioning, sending complex structures over RPC is hard to get right, as the structures are created and maintained elsewhere and simply sent over the wire between services. When running different levels of code on services in a deployment, changes to these structures must be handled and communicated carefully — something that the general oslo.messaging versioning doesn’t handle well.
The versioned objects that Nova uses to represent internal data help us when communicating over RPC, but they also help us tolerate a shifting persistence layer. They’re a critical facade within which we hide things like online data migrations and general tolerance of multiple versions of data in our database.
What follows is not an exhaustive explanation of versioned objects, but provides just enough for you to see how it applies to Nova’s live upgrade capabilities.
Versioned Objects as Schema
The easiest place to start digging into the object layer in Nova is to look at how we pass a relatively simple structure over RPC as an object, instead of just an unstructured dict. To get an appreciation of why this is important, refer back to the rescue_instance() method in the previous post. After our change, it looked like this:
def rescue_instance(self, context, instance, rescue_password,
                    rescue_image_ref=None):
    ....
Again, the first two parameters (self and context) are implied, and not of concern here. The rescue_password is just a string, as is the rescue_image_ref. However, the instance parameter is far more than a simple string — at version 3.0 of our RPC API, it was a giant dictionary that represented most of what Nova knows about its primary data structure. For reference, this is mostly what it looked like in Juno, taken from a fixture we use for testing when we need an instance. In reality, that doesn’t even include some of the complex nested structures contained within. You can imagine that we could easily add, remove, or change attributes of that structure elsewhere in the code or database without accounting for the change in the RPC interface in any way. If you end up with a newer node making the above call to an older node, the instance structure could be changed in subtle ways that the receiving end doesn’t understand. Since there is no version provided, the receiver can’t even know that it should fail fast, and in reality it will likely fail deep in the middle of an operation. Proof of this comes from the test structure itself, which is actually not even in sync with the current state of our database schema, using strings in places where integers are actually specified!
In Nova we addressed this by growing a versioned structure that defines the schema we want, independent of what is actually stored in the database at any given point. Just like for the RPC API, we attach a version number to the structure, and we increment that version every time we make a change. When we send the object over RPC to another node, the version can be used to determine if the receiver can understand what is inside, and take action if not. Since our versioned objects are self-serializing, they show up on the other side as rich objects and not just dicts.
An important element of making this work is getting a handle on the types and arrangement of data inside the structure. As I mentioned above, our “test instance” structure had strings where integers were actually expected, and vice versa. To see how this works, let’s examine a simple structure in Nova:
@base.NovaObjectRegistry.register
class Flavor(base.NovaObject):
    # Version 1.0: Initial version
    VERSION = '1.0'

    fields = {
        'id': fields.IntegerField(),
        'name': fields.StringField(nullable=True),
        'memory_mb': fields.IntegerField(),
        'vcpus': fields.IntegerField(),
        'root_gb': fields.IntegerField(),
        'ephemeral_gb': fields.IntegerField(),
        'flavorid': fields.StringField(),
        'swap': fields.IntegerField(),
        'rxtx_factor': fields.FloatField(nullable=True, default=1.0),
        'vcpu_weight': fields.IntegerField(nullable=True),
        'disabled': fields.BooleanField(),
        'is_public': fields.BooleanField(),
        'extra_specs': fields.DictOfStringsField(),
        'projects': fields.ListOfStringsField(),
    }
Here, we define what the object looks like. It consists of several fields of data: integers, floats, booleans, strings, and even some more complicated structures like a dict of strings. The object can have other types of attributes, but if they’re not in the fields list they are not part of the schema, and thus they don’t go over RPC. In case it’s not clear: if I try to set one of the integer properties, such as “swap”, to a string, I’ll get a ValueError, since a string is not a valid value for that field.
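To make that concrete, here is a minimal sketch of the field checking in action, assuming the Flavor object defined above and the usual coercion behavior of IntegerField:

flavor = Flavor()
flavor.swap = 0         # fine: matches the IntegerField declaration
flavor.swap = '1024'    # also fine: a string of digits is coerced to an int

try:
    flavor.swap = 'lots'   # not an integer at all
except ValueError:
    print('swap takes an integer, not an arbitrary string')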
As long as I’ve told oslo.messaging to use the VersionedObjectSerializer from oslo.versionedobjects, I can provide a Flavor object as an argument to an RPC method and it is magically serialized and deserialized for me, showing up on the other end exactly as I sent it, with the version and the type checking intact.
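For illustration, a rough sketch of that wiring with plain oslo.messaging might look like the following; the topic and version here are made up, and Nova’s real wiring differs in its details:

import oslo_messaging as messaging
from oslo_config import cfg
from oslo_versionedobjects import base as ovo_base

# Hand the versioned-object serializer to oslo.messaging so that object
# arguments are serialized on the way out and rebuilt on the way in.
transport = messaging.get_transport(cfg.CONF)
target = messaging.Target(topic='compute', version='4.0')
serializer = ovo_base.VersionedObjectSerializer()
client = messaging.RPCClient(transport, target, serializer=serializer)

# A registered object like Flavor can now be passed as a regular argument:
# client.cast(context, 'rescue_instance', instance=instance, ...)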
If I want to make a change to the Flavor object, I can do so, but I need to make two important changes. First, I need to bump the version, and second I need to account for the change in the class’ obj_make_compatible() method. This method is the routine that I can use to take a Flavor 1.1 object and turn it into a Flavor 1.0, if I need to for an older node.
Let’s say I wanted to add a new property of “foobars” to the Flavor object, which is merely a count of the number of foobars an instance is allowed. I would denote the change in the comment above the version, bump the version, and make a change to the compatibility method to allow backports:
@base.NovaObjectRegistry.register
class Flavor(base.NovaObject):
    # Version 1.0: Initial version
    # Version 1.1: Add foobars
    VERSION = '1.1'

    fields = {
        . . .
        'foobars': fields.IntegerField(),
    }

    def obj_make_compatible(self, primitive, target_version):
        super(Flavor, self).obj_make_compatible(
            primitive, target_version)
        target_version = utils.convert_version_to_tuple(
            target_version)
        if target_version < (1, 1):
            del primitive['foobars']
The code in obj_make_compatible() boils down to removing the foobars field if we’re being asked to downgrade the object to version 1.0. There have been many times in Nova where we have moved data from one attribute to another, or disaggregated some composite attribute into separate ones. In those cases, the task of obj_make_compatible() is to reform the data into something that looks like the version being asked for. Within a single major version of an object, that should always be possible; if it’s not, the change requires a major version bump.
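As a quick sketch of what that backport looks like in practice, obj_to_primitive() accepts a target version and runs obj_make_compatible() for you:

# Using the Flavor 1.1 example from above; other fields left unset
# purely to keep the sketch short.
flavor = Flavor(foobars=2)

# Serialized at the current version, the new field is present...
current = flavor.obj_to_primitive()

# ...but asking for 1.0 runs obj_make_compatible() and drops 'foobars',
# producing something an older node can understand.
older = flavor.obj_to_primitive(target_version='1.0')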
Knowing when a version bump is required can be a bit of a challenge. Bumping too often can create unnecessary backport work, but not bumping when it’s necessary can lead to failure. The object schema forms a contract between any two nodes that use them to communicate, so if something you’re doing changes that contract, you need a version bump. The oslo.versionedobjects library provides some test fixtures to help automate detection, but sharing some of the Nova team’s experiences in this area is good subject matter for a follow-on post.
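As one example of what those fixtures offer, oslo.versionedobjects can fingerprint every registered object so a unit test fails when an object changes without its version changing; roughly along these lines:

from oslo_versionedobjects import fixture as ovo_fixture

# Expected 'version-fingerprint' pairs, updated deliberately whenever an
# object is changed *and* its version is bumped. The hash value here is
# just a placeholder.
expected = {
    'Flavor': '1.1-<fingerprint>',
}

checker = ovo_fixture.ObjectVersionChecker()
actual = checker.get_hashes()

# If someone adds a field without bumping VERSION, the fingerprint moves
# but the version does not, and this comparison points at the culprit.
assert actual['Flavor'] == expected['Flavor']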
Once you have your data encapsulated like this, one approach to providing compatibility is to have version pins as described for RPC. Thus, during an upgrade, you can allow the operator to pin a given object (or all objects) to the version(s) that are supported by the oldest code in the deployment. Once everything is upgraded, the pins can be lifted.
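A hypothetical sketch of that idea, with the pin table and helper invented for illustration rather than taken from Nova:

# Hypothetical: versions the operator has pinned for the duration of an
# upgrade, e.g. loaded from a config option.
OBJECT_VERSION_PINS = {
    'Flavor': '1.0',
}

def serialize_with_pins(obj):
    """Serialize obj, honoring any operator-set pin for its class."""
    pin = OBJECT_VERSION_PINS.get(obj.obj_name())
    if pin:
        # obj_to_primitive() backports via obj_make_compatible()
        return obj.obj_to_primitive(target_version=pin)
    return obj.obj_to_primitive()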
The next thing to consider is how we get data in and out of this object form when we’re using a database for persistence. In Nova, we do this using a series of methods on the object class for querying and saving data. Consider these Flavor methods for loading from and saving to the database:
class Flavor(base.NovaObject):
    . . .

    @classmethod
    def get_by_id(cls, context, id):
        flavor = cls(context=context)
        db_flavor = db.get_flavor(context, id)
        # NOTE(danms): This only works if the flavor
        # object looks like the database object!
        for field in flavor.fields:
            setattr(flavor, field, db_flavor[field])
        flavor.obj_reset_changes()
        return flavor

    def save(self):
        # Here, updates is a dict of field=value items,
        # and only what has changed
        updates = self.obj_get_updates()
        db.set_flavor(self._context, self.id, updates)
        self.obj_reset_changes()
With this, we can pull Flavor objects out of the database, modify them, and save them back like this:
flavor = Flavor.get_by_id(context, 123)
flavor.memory_mb = 512
flavor.save()
Now, if you’re familiar with any sort of ORM, this doesn’t look new to you at all. Where it comes into play for Nova’s upgrades is in how these objects provide RPC- and database-independent facades.
Nova Conductor
Before we jump into objects as facades for the RPC and database layers, I need to explain a bit about the conductor service in Nova.
Skipping over lots of details, the nova-conductor service is a stateless component of Nova that you can scale horizontally according to load. It provides an RPC-based interface to do various things on behalf of other nodes. Unlike the nova-compute service, it is allowed to talk to the database directly. Also unlike nova-compute, nova-conductor is required to always be the newest service in your system during an upgrade. So, when you set out to upgrade from Kilo to Liberty, you start with your conductor service.
In addition to some generic object routines that conductor handles, it also serves as a backport service for the compute nodes. Using the Flavor example above, if an older compute node receives a Flavor object at version 1.1 that it does not understand, it can bounce that object over RPC to the conductor service, requesting that it be backported to version 1.0, which that node understands. Since nova-conductor is required to be the newest service in the deployment, it can do that. In fact, it’s quite easy: it just calls obj_make_compatible() on the object at the target version requested by the compute node and returns the result. Thus, if one of the API nodes (which are also new) looks up a Flavor object from the database at version 1.1 and passes it to an older compute node, that compute node automatically asks conductor to backport the object on its behalf so that it can satisfy the request.
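Conceptually, the conductor side of that backport boils down to something like this (a sketch, not the exact conductor manager code):

# Take the object the older compute node could not understand and
# re-serialize it at the version that node asked for.
def object_backport(context, objinst, target_version):
    # obj_to_primitive() calls obj_make_compatible() under the covers.
    return objinst.obj_to_primitive(target_version=target_version)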
Versioned Objects as RPC Facade
So, nova-conductor serves an important role for older compute nodes, providing object backports for compatibility. However, except for the most boring of calls, the older compute node is almost definitely going to have to take some action, which will involve reading and writing data, thus interacting with the database.
As I hinted above, nova-compute is not actually allowed to talk directly to the database, and hasn’t been for some time, even predating versioned objects. Thus, when nova-compute wants to read or write data, it must ask the conductor to do so on its behalf. This turns out to help us a lot for upgrades, because it insulates the compute nodes from the database — more on that in the next section.
However, supporting everything nova-compute might want to do in the database means a lot of RPC calls, all of which need to be versioned and tolerant of shifting structures such as Instance or Flavor objects. Luckily, the versioned object infrastructure helps us here by providing some decorators that turn object methods into RPC calls back to conductor. They look like this:
class Flavor(base.NovaObject):
    . . .

    @base.remotable_classmethod
    def get_by_id(cls, context, id):
        . . .

    @base.remotable
    def save(self):
        . . .
With these decorators in place, a call to something like Flavor.get_by_id() on nova-compute turns into an RPC call to conductor, where the actual method is run. The call reports the version of the object that nova-compute knows about, which lets conductor ensure that it returns a compatible version from the method. In the case of save(), the object instance is wrapped up, sent over the wire, the method is run, and any changes to the object are reflected back on the calling side. This means that code doesn’t need to know whether it’s running on compute (and thus needs to make an RPC call) or on another service (and thus needs to make a database call). The object effectively handles the versioned RPC bit for you, based on the version of the object.
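What decides between the two is the object’s indirection API: when it points at conductor, remotable methods go over RPC, and when it is unset they run locally against the database. The wiring is approximately this (a sketch, not the exact Nova bootstrap code):

from nova.conductor import rpcapi as conductor_rpcapi
from nova.objects import base as objects_base

# On services that must not touch the database (nova-compute), route all
# remotable object methods through conductor over RPC. Services that may
# touch the database leave indirection_api unset and run them locally.
objects_base.NovaObject.indirection_api = conductor_rpcapi.ConductorAPI()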
Versioned Objects as Database Facade
Based on everything above, you can see that in Nova, we delegate most of the database manipulation responsibility to conductor over RPC. We do that with versioned objects, which ensure that on either side of a conversation between two nodes, we always know what version we’re talking about, and we tightly control the structure and format of the data we’re working on. It pays off immediately from the RPC perspective alone, where writing new RPC calls is much simpler and the versioning is handled for you.
Where this really becomes a multiplier for improving upgrades is where the facade meets the database. Before Nova insulated the compute nodes from the database, all the nodes in a deployment had to be upgraded at the same time as a schema change was applied to the database. There was no isolation, and thus everything was tied together. Even when we required compute nodes to make their database calls over RPC to conductor, they still had too much direct knowledge of the schema in the database and thus couldn’t really operate with a newer schema once it was applied.
The object layer in Nova sadly doesn’t automatically make this better for you without extra effort. However, it does provide a clean place to hide transitions between the current state of the database schema and the desired schema (i.e. the objects). I’ll discuss strategies for that next.
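To give a taste before that discussion, here is a hypothetical variant of the earlier get_by_id() that hides a schema transition: assume the database may or may not have grown a foobars column yet, and the object papers over the difference so callers never notice.

@classmethod
def get_by_id(cls, context, id):
    flavor = cls(context=context)
    db_flavor = db.get_flavor(context, id)
    for field in flavor.fields:
        if field == 'foobars' and 'foobars' not in db_flavor:
            # Hypothetical: the schema migration adding this column may
            # not have run yet, so synthesize a sane default instead of
            # leaking the old schema to callers.
            flavor.foobars = 0
            continue
        setattr(flavor, field, db_flavor[field])
    flavor.obj_reset_changes()
    return flavor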
The final major tenet in Nova’s upgrade strategy is decoupling the actual database schema changes from the process of upgrading the nodes that access that schema directly (i.e. conductor, API, etc.). That is a critical part of achieving the goal.