Rewrite of MUC Clustering
A number of MUC clustering problems have been resolved in Openfire versions 4.2 through 4.7, and possibly earlier. It is becoming clear that, rather than patching each issue in turn, a revised approach to clustering for MUCs is required in order to achieve stability, involving rework of caches, tasks and state.
The old architecture is based around the concept of each cluster node maintaining its own copy of the full state of the system (primarily: which users are occupants of which rooms), depending on cluster tasks to synchronize this state. This has proven to be error-prone, as state on the cluster nodes drifts apart over time. In the new approach, the task-based synchronization is replaced with one based on a shared data structure (a clustered cache) that is available to all cluster members.
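The shift from per-node replicas to one shared structure can be sketched as follows. This is a hypothetical illustration using a plain `ConcurrentMap` as a stand-in for a Hazelcast-backed clustered cache; the class and method names are not Openfire API.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CopyOnWriteArraySet;

// Hypothetical sketch: a single shared occupant registry, keyed by room name.
// Every node reads and writes this one structure, instead of maintaining its
// own replica that cluster tasks try to keep in sync.
class OccupantRegistry {
    private final ConcurrentMap<String, Set<String>> occupantsByRoom = new ConcurrentHashMap<>();

    void join(String room, String occupantJid) {
        occupantsByRoom.computeIfAbsent(room, r -> new CopyOnWriteArraySet<>()).add(occupantJid);
    }

    void leave(String room, String occupantJid) {
        occupantsByRoom.computeIfPresent(room, (r, occupants) -> {
            occupants.remove(occupantJid);
            return occupants.isEmpty() ? null : occupants; // drop empty entries
        });
    }

    Set<String> occupantsOf(String room) {
        return occupantsByRoom.getOrDefault(room, Set.of());
    }
}
```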
An additional difficulty in the old architecture is introduced by the concept of "local" and "remote" MUC users and roles. In this context, "local" refers to an entity that is connected to the local cluster node, while "remote" refers to an entity connected to another node in the cluster. This distinction cannot be made for entities that join a MUC room from a remote domain (through server-to-server functionality). A server-to-server connection can be established to any single node of the cluster, but also to several nodes at once. Worse, a server-to-server connection can be torn down and re-established to a different cluster node. Because of this, it is not possible to uniquely qualify a MUC user from another XMPP domain as a "remote" or "local" MUC user: it can be either, or both, and this can change over time. A potential resolution is to remove the distinction between "local" and "remote" MUC users and roles altogether.
When a node leaves a cluster, it may still be operational and still providing services to its connected users. On both the leaving node and any node that observes the node leaving (which amounts to the same situation in a two-node cluster), every node-local user must receive a leave presence for every occupant on the now-unreachable nodes, for each MUC room they share. This will likely require that each node maintains a local copy of state recording which user is connected to which cluster node.
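A minimal sketch of that locally-held user-to-node mapping, assuming hypothetical names (this is not Openfire API): when a node becomes unreachable, the tracker yields the occupants for whom synthetic leave presences must be generated.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: each node keeps a local record of which cluster node
// every occupant is connected to, so it can generate leave presences when a
// node drops out of the cluster.
class NodePresenceTracker {
    private final Map<String, String> nodeByOccupant = new HashMap<>();

    void occupantJoined(String occupantJid, String nodeId) {
        nodeByOccupant.put(occupantJid, nodeId);
    }

    /** Returns the occupants that were on the lost node, removing them from the tracker. */
    Set<String> occupantsNeedingLeave(String lostNodeId) {
        Set<String> affected = new HashSet<>();
        nodeByOccupant.entrySet().removeIf(e -> {
            if (e.getValue().equals(lostNodeId)) {
                affected.add(e.getKey());
                return true;
            }
            return false;
        });
        return affected;
    }
}
```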
Given that some caches are cleared upon joining or leaving a cluster (since the implementation swaps between a local and a Hazelcast-backed version, and the state needs to be guaranteed), each node should hold enough information locally to repopulate a local cache upon leaving a cluster and losing access to the clustered cache.
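One way to retain that information is a node-local "shadow" of the entries this node contributed, alongside the clustered cache. The sketch below is a hypothetical illustration (not Openfire's `CacheFactory` machinery), using plain maps to stand in for the local and clustered variants.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: writes go to both the (possibly clustered) cache and a
// node-local shadow. When the node leaves the cluster and loses the clustered
// cache, the shadow seeds a local replacement.
class ShadowedCache<K, V> {
    private Map<K, V> cache = new ConcurrentHashMap<>(); // clustered while joined
    private final Map<K, V> localShadow = new HashMap<>(); // this node's own entries

    void put(K key, V value) {
        localShadow.put(key, value);
        cache.put(key, value);
    }

    V get(K key) {
        return cache.get(key);
    }

    /** Called when leaving the cluster: swap in a local cache seeded from the shadow. */
    void leftCluster() {
        cache = new ConcurrentHashMap<>(localShadow);
    }
}
```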
The state of the Hazelcast-backed cluster cache is critical to the smooth running of the cluster. When an item is taken from the cache and modified, it must be explicitly re-added to the cache to ensure other nodes have access to those changes. This piece of work must make every effort to guarantee that this happens.
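The get-modify-put discipline can be made hard to forget by funnelling every mutation through a single helper that unconditionally writes the value back. This is a hypothetical sketch with a plain map standing in for the Hazelcast-backed cache; real code would additionally need a cluster-wide lock or entry processor to avoid lost updates between the get and the put.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Hypothetical sketch: a value taken from a clustered cache and modified in
// place is invisible to other nodes until it is explicitly put back.
class ClusteredRoomCache {
    static class RoomState {
        String subject = "";
    }

    // Stand-in for a Hazelcast-backed cache of room state.
    private final Map<String, RoomState> cache = new ConcurrentHashMap<>();

    /** Applies a mutation and unconditionally writes the value back. */
    void updateRoom(String roomName, Consumer<RoomState> mutation) {
        RoomState state = cache.getOrDefault(roomName, new RoomState());
        mutation.accept(state);
        cache.put(roomName, state); // the explicit re-put that other nodes depend on
    }

    RoomState get(String roomName) {
        return cache.get(roomName);
    }
}
```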
When a node joins a cluster, it may be newly joining or it may be rejoining after a short interruption. These should be treated the same way since leaving should have put this node (and all other nodes) into the same consistent unjoined state.
When a node joins the cluster:
- it will (should?) already share the same list of persistent MUC rooms. The list of non-persistent rooms needs to be synchronised between the nodes (which may entail collision detection).
- if room information (e.g. subject, moderator lists) for persistent rooms might have diverged in memory by the point of cluster join, then this needs to be resynchronised too.
- the MUC service configuration contains largely "room defaults" information. This should be database-backed, but all nodes should be in sync (perhaps by reloading the in-memory configuration from the database).
- membership of rooms needs to be synchronised: the joining node needs "join" presences for all users on all other nodes, and users on those nodes need a join presence for each user on the joining node.
- synchronising membership of rooms may lead to nickname collisions, which will need to be handled.
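One conceivable collision-handling policy is sketched below: first claimant keeps the nickname, later claimants are granted a numbered variant. This is purely illustrative (the names are hypothetical, and Openfire may well prefer a different policy, such as rejecting or disconnecting one of the occupants); it only shows the shape of the problem.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of resolving nickname collisions when two cluster
// nodes merge room membership: keep the first claimant, suffix the rest.
class NicknameResolver {
    private final Map<String, String> jidByNickname = new HashMap<>();

    /** Returns the nickname actually granted, which may differ on collision. */
    String claim(String nickname, String occupantJid) {
        String candidate = nickname;
        int suffix = 2;
        while (jidByNickname.containsKey(candidate)
                && !jidByNickname.get(candidate).equals(occupantJid)) {
            candidate = nickname + "-" + suffix++;
        }
        jidByNickname.put(candidate, occupantJid);
        return candidate;
    }
}
```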