COarse-grained LOck-stepping Virtual Machines for Non-stop Service
----------------------------------------
Copyright (c) 2016 Intel Corporation
Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
Copyright (c) 2016 Fujitsu, Corp.

This work is licensed under the terms of the GNU GPL, version 2 or later.
See the COPYING file in the top-level directory.

This document gives an overview of COLO's design and how to use it.
== Background ==
Virtual machine (VM) replication is a well-known technique for providing
application-agnostic, software-implemented hardware fault tolerance,
also known as "non-stop service".

COLO (COarse-grained LOck-stepping) is a high availability solution.
The primary VM (PVM) and the secondary VM (SVM) run in parallel: they receive
the same requests from the client and generate responses in parallel too.
If the response packets from the PVM and SVM are identical, they are released
immediately. Otherwise, a VM checkpoint is conducted on demand.

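This compare-and-checkpoint decision can be sketched as follows (a minimal
Python sketch for illustration only, not QEMU code; the packet queues and the
callbacks are stand-ins):

```python
def colo_compare(pvm_packets, svm_packets, release, checkpoint):
    """Release responses while PVM and SVM outputs agree; checkpoint on divergence.

    pvm_packets/svm_packets: queued outbound payloads from each VM.
    release/checkpoint: callbacks standing in for the real proxy actions.
    """
    while pvm_packets and svm_packets:
        p, s = pvm_packets.pop(0), svm_packets.pop(0)
        if p == s:
            release(p)      # identical output: safe to send to the client
        else:
            checkpoint()    # divergence: sync PVM state to the SVM first
            release(p)      # the PVM's output is authoritative after the sync
```

The key property is that output only reaches the client either when both VMs
agree or after the SVM has been resynchronized, so a failover at any point
stays consistent with what the client has observed.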
== Architecture ==

The architecture of COLO is shown in the diagram below.
It consists of a pair of networked physical nodes:
the primary node running the PVM, and the secondary node running the SVM
to maintain a valid replica of the PVM.
The PVM and SVM execute in parallel and generate output response packets for
client requests according to the application semantics.

The incoming packets from the client or external network are received by the
primary node, and then forwarded to the secondary node, so that both the PVM
and the SVM are stimulated with the same requests.

COLO receives the outbound packets from both the PVM and SVM and compares them
before allowing the output to be sent to clients.

The SVM is qualified as a valid replica of the PVM as long as it generates
identical responses to all client requests. Once differences in the outputs
are detected between the PVM and SVM, COLO withholds transmission of the
outbound packets until it has successfully synchronized the PVM state to the SVM.

         Primary Node                                        Secondary Node
+------------+  +-----------------------+       +------------------------+  +------------+
|            |  |       HeartBeat       +<----->+       HeartBeat        |  |            |
| Primary VM |  +-----------+-----------+       +-----------+------------+  |Secondary VM|
|            |              |                               |               |            |
|            |  +-----------|-----------+       +-----------|------------+  |            |
|            |  |QEMU   +---v----+      |       |QEMU  +----v---+        |  |            |
|            |  |       |Failover|      |       |      |Failover|        |  |            |
|            |  |       +--------+      |       |      +--------+        |  |            |
|            |  |   +---------------+   |       |   +---------------+    |  |            |
|            |  |   | VM Checkpoint +-------------->+ VM Checkpoint |    |  |            |
|            |  |   +---------------+   |       |   +---------------+    |  |            |
|Requests<--------------------------\ /------------------\ /-------------------->Requests|
|            |  |                   ^ ^ |       |        | |             |  |            |
|Responses+---------------------\ /-|-|--------------\ /-----------------------+Responses|
|            |  |               | | | | |       |    | | | |             |  |            |
|            |  | +-----------+ | | | | |       |    | | | | +----------+|  |            |
|            |  | | COLO disk | | | | | |       |    | | | | | COLO disk||  |            |
|            |  | |   Manager +---------------------------->| Manager  ||  |            |
|            |  | ++----------+ v v | | |       |    | | v v +---------++|  |            |
|            |  |   +-----------+-+-+-++|       |  +-+-+-+-+--------+    |  |            |
|            |  |   |    COLO Proxy    ||       |  |   COLO Proxy   |    |  |            |
|            |  |   | (compare packet  ||       |  |(adjust sequence|    |  |            |
|            |  |   |and mirror packet)||       |  |    and ACK)    |    |  |            |
|            |  |   +--------+---+-----+|       |  +-----+---+------+    |  |            |
+------------+  +-----------------------+       +------------------------+  +------------+
+------------+                 |   |                   |   |                +------------+
| VM Monitor |                 |   |                   |   |                | VM Monitor |
+------------+                 |   |                   |   |                +------------+
+---------------------------------------+       +----------------------------------------+
|   Kernel                   |   |      |       |        |   |            Kernel         |
+---------------------------------------+       +----------------------------------------+
         |                   |   |                       |   |                 |
+--------v------+  +---------v---v--+               +----v---v---------+  +----v---------+
|   Storage     |  |External Network|               | External Network |  |   Storage    |
+---------------+  +----------------+               +------------------+  +--------------+


== Components introduction ==

The architecture diagram above shows several components of COLO.
Their functions are described below.

HeartBeat:
Runs on both the primary and secondary nodes, periodically checking platform
availability. When the primary node suffers a hardware fail-stop failure, its
heartbeat stops responding; the secondary node triggers a failover as soon as
it detects the absence.
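
Since HeartBeat is not implemented yet (see the note further below), its
intended behavior can be sketched as a simple timeout detector (a hypothetical
Python sketch with an injected clock, not QEMU code):

```python
class HeartbeatMonitor:
    """Declare the peer node failed when no heartbeat arrives within `timeout`."""

    def __init__(self, timeout, now):
        self.timeout = timeout
        self.now = now            # injected clock, e.g. time.monotonic
        self.last_seen = now()

    def beat(self):
        """Record a heartbeat received from the peer node."""
        self.last_seen = self.now()

    def peer_failed(self):
        """True once the peer has been silent for longer than the timeout."""
        return self.now() - self.last_seen > self.timeout
```

On detecting a failure, the surviving side would then run the failover steps,
i.e. issue the 'x-colo-lost-heartbeat' QMP command described later.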

COLO disk Manager:
When the primary VM writes data into its image, the COLO disk manager captures
the data and sends it to the secondary VM, which makes sure that the content of
the secondary VM's image stays consistent with the content of the primary VM's
image. For more details, please refer to docs/block-replication.txt.
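
The idea can be illustrated with a toy model (plain Python; in-memory byte
buffers stand in for the disk images, and the function is not the real disk
manager):

```python
def replicate_writes(primary_image, secondary_image, writes):
    """Apply each (offset, data) write to both images, keeping them consistent.

    A toy stand-in for the COLO disk manager: the primary applies a write
    locally and forwards the same write to the secondary.
    """
    for offset, data in writes:
        for image in (primary_image, secondary_image):
            image[offset:offset + len(data)] = data
```

In the real implementation the forwarded writes go to the secondary's active
disk on top of the hidden disk, so they can be discarded or merged at the next
checkpoint or on failover.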

Checkpoint/Failover Controller:
Modifies the save/restore flow to realize continuous migration, making sure
that the state of the VM on the secondary side is always consistent with the
VM on the primary side.

COLO Proxy:
Delivers packets to the primary and secondary VMs, compares the responses from
both sides, and then decides whether to start a checkpoint according to its
comparison rules. Please refer to docs/colo-proxy.txt for more information.
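
As the diagram's "adjust sequence and ACK" box suggests, the secondary-side
proxy has to rewrite TCP sequence numbers, because the SVM's TCP stack picks
its own initial sequence number (ISN) for each connection. A rough
illustration of that rewriting (hypothetical Python, not the actual colo-proxy
code; the real rules are described in docs/colo-proxy.txt):

```python
def rewrite_outbound_seq(seq, pvm_isn, svm_isn):
    """Map an SVM outbound sequence number onto the PVM's sequence space.

    pvm_isn/svm_isn: the initial sequence numbers each VM chose for the same
    connection, learned during connection setup. TCP sequence numbers live in
    a 32-bit modular space, hence the mask.
    """
    return (seq + pvm_isn - svm_isn) & 0xFFFFFFFF
```

Without this adjustment the SVM's packets could never compare equal to the
PVM's, and clients would see bogus sequence numbers after a failover.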

Note:
HeartBeat has not been implemented yet, so you need to trigger the failover
process manually with the 'x-colo-lost-heartbeat' command.
112
Zhang Chen8e640892018-09-03 12:39:00 +0800113== COLO operation status ==
114
115+-----------------+
116| |
117| Start COLO |
118| |
119+--------+--------+
120 |
121 | Main qmp command:
122 | migrate-set-capabilities with x-colo
123 | migrate
124 |
125 v
126+--------+--------+
127| |
128| COLO running |
129| |
130+--------+--------+
131 |
132 | Main qmp command:
133 | x-colo-lost-heartbeat
134 | or
135 | some error happened
136 v
137+--------+--------+
138| | send qmp event:
139| COLO failover | COLO_EXIT
140| |
141+-----------------+
142
143COLO use the qmp command to switch and report operation status.
144The diagram just shows the main qmp command, you can get the detail
145in test procedure.
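
The QMP messages driving these transitions are plain JSON objects sent over
the monitor. A small helper for building them might look like this
(illustrative Python; the command names are the real QMP commands used in the
test procedure below):

```python
import json

def qmp_cmd(name, **arguments):
    """Build the JSON text of a QMP command, e.g. for writing to -qmp stdio."""
    msg = {"execute": name}
    if arguments:
        msg["arguments"] = arguments
    return json.dumps(msg)

# The two commands that move COLO from "Start" to "running":
enable = qmp_cmd("migrate-set-capabilities",
                 capabilities=[{"capability": "x-colo", "state": True}])
start = qmp_cmd("migrate", uri="tcp:127.0.0.2:9998")
```

Remember that a QMP session must first be negotiated with
{"execute": "qmp_capabilities"} before any other command is accepted.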
146
zhanghailiange59887d2016-10-27 14:43:07 +0800147== Test procedure ==
Lukas Straub90dfe592019-10-24 16:25:57 +0200148Note: Here we are running both instances on the same host for testing,
zhaolichang76ca4b52020-09-17 15:50:22 +0800149change the IP Addresses if you want to run it on two hosts. Initially
Lukas Straub90dfe592019-10-24 16:25:57 +0200150127.0.0.1 is the Primary Host and 127.0.0.2 is the Secondary Host.

== Startup qemu ==
1. Primary:
Note: Initially, $imagefolder/primary.qcow2 needs to be copied to all hosts.
You don't need to change any IPs here, because 0.0.0.0 listens on any
interface. The chardevs with 127.0.0.1 IPs loop back to the local qemu
instance.

# imagefolder="/mnt/vms/colo-test-primary"

# qemu-system-x86_64 -enable-kvm -cpu qemu64,kvmclock=on -m 512 -smp 1 -qmp stdio \
   -device piix3-usb-uhci -device usb-tablet -name primary \
   -netdev tap,id=hn0,vhost=off,helper=/usr/lib/qemu/qemu-bridge-helper \
   -device rtl8139,id=e0,netdev=hn0 \
   -chardev socket,id=mirror0,host=0.0.0.0,port=9003,server=on,wait=off \
   -chardev socket,id=compare1,host=0.0.0.0,port=9004,server=on,wait=on \
   -chardev socket,id=compare0,host=127.0.0.1,port=9001,server=on,wait=off \
   -chardev socket,id=compare0-0,host=127.0.0.1,port=9001 \
   -chardev socket,id=compare_out,host=127.0.0.1,port=9005,server=on,wait=off \
   -chardev socket,id=compare_out0,host=127.0.0.1,port=9005 \
   -object filter-mirror,id=m0,netdev=hn0,queue=tx,outdev=mirror0 \
   -object filter-redirector,netdev=hn0,id=redire0,queue=rx,indev=compare_out \
   -object filter-redirector,netdev=hn0,id=redire1,queue=rx,outdev=compare0 \
   -object iothread,id=iothread1 \
   -object colo-compare,id=comp0,primary_in=compare0-0,secondary_in=compare1,\
outdev=compare_out0,iothread=iothread1 \
   -drive if=ide,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,\
children.0.file.filename=$imagefolder/primary.qcow2,children.0.driver=qcow2 -S

2. Secondary:
Note: Active and hidden images need to be created only once, and their
size should be the same as primary.qcow2. Again, you don't need to change
any IPs here, except for the $primary_ip variable.

# imagefolder="/mnt/vms/colo-test-secondary"
# primary_ip=127.0.0.1

# qemu-img create -f qcow2 $imagefolder/secondary-active.qcow2 10G

# qemu-img create -f qcow2 $imagefolder/secondary-hidden.qcow2 10G

# qemu-system-x86_64 -enable-kvm -cpu qemu64,kvmclock=on -m 512 -smp 1 -qmp stdio \
   -device piix3-usb-uhci -device usb-tablet -name secondary \
   -netdev tap,id=hn0,vhost=off,helper=/usr/lib/qemu/qemu-bridge-helper \
   -device rtl8139,id=e0,netdev=hn0 \
   -chardev socket,id=red0,host=$primary_ip,port=9003,reconnect-ms=1000 \
   -chardev socket,id=red1,host=$primary_ip,port=9004,reconnect-ms=1000 \
   -object filter-redirector,id=f1,netdev=hn0,queue=tx,indev=red0 \
   -object filter-redirector,id=f2,netdev=hn0,queue=rx,outdev=red1 \
   -object filter-rewriter,id=rew0,netdev=hn0,queue=all \
   -drive if=none,id=parent0,file.filename=$imagefolder/primary.qcow2,driver=qcow2 \
   -drive if=none,id=childs0,driver=replication,mode=secondary,file.driver=qcow2,\
top-id=colo-disk0,file.file.filename=$imagefolder/secondary-active.qcow2,\
file.backing.driver=qcow2,file.backing.file.filename=$imagefolder/secondary-hidden.qcow2,\
file.backing.backing=parent0 \
   -drive if=ide,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,\
children.0=childs0 \
   -incoming tcp:0.0.0.0:9998


3. On the Secondary VM's QEMU monitor, issue the following commands:
{"execute":"qmp_capabilities"}
{"execute": "migrate-set-capabilities", "arguments": {"capabilities": [ {"capability": "x-colo", "state": true } ] } }
{"execute": "nbd-server-start", "arguments": {"addr": {"type": "inet", "data": {"host": "0.0.0.0", "port": "9999"} } } }
{"execute": "nbd-server-add", "arguments": {"device": "parent0", "writable": true } }

Note:
  a. The qmp commands nbd-server-start and nbd-server-add must be run
     before running the qmp command migrate on the primary QEMU.
  b. Active disk, hidden disk and NBD target's length should be the
     same.
  c. It is better to put the active disk and hidden disk on a ramdisk. They
     will be merged into the parent disk on failover.

4. On the Primary VM's QEMU monitor, issue the following commands:
{"execute":"qmp_capabilities"}
{"execute": "human-monitor-command", "arguments": {"command-line": "drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.2,file.port=9999,file.export=parent0,node-name=replication0"}}
{"execute": "x-blockdev-change", "arguments":{"parent": "colo-disk0", "node": "replication0" } }
{"execute": "migrate-set-capabilities", "arguments": {"capabilities": [ {"capability": "x-colo", "state": true } ] } }
{"execute": "migrate", "arguments": {"uri": "tcp:127.0.0.2:9998" } }

Note:
  a. There should be only one NBD client for each primary disk.
  b. These qmp commands must be run after running the qmp commands on the
     secondary QEMU.

5. After the above steps, whenever you make changes to the PVM, the SVM will be synced.
You can issue the command '{ "execute": "migrate-set-parameters" , "arguments":{ "x-checkpoint-delay": 2000 } }'
to change the idle checkpoint period.

6. Failover test
You can kill one of the VMs and fail over to the surviving VM:

If you killed the Secondary, then follow "Primary Failover". After that,
if you want to resume the replication, follow "Primary resume replication".

If you killed the Primary, then follow "Secondary Failover". After that,
if you want to resume the replication, follow "Secondary resume replication".

== Primary Failover ==
The Secondary died; resume on the Primary.

{"execute": "x-blockdev-change", "arguments":{ "parent": "colo-disk0", "child": "children.1"} }
{"execute": "human-monitor-command", "arguments":{ "command-line": "drive_del replication0" } }
{"execute": "object-del", "arguments":{ "id": "comp0" } }
{"execute": "object-del", "arguments":{ "id": "iothread1" } }
{"execute": "object-del", "arguments":{ "id": "m0" } }
{"execute": "object-del", "arguments":{ "id": "redire0" } }
{"execute": "object-del", "arguments":{ "id": "redire1" } }
{"execute": "x-colo-lost-heartbeat" }

== Secondary Failover ==
The Primary died; resume on the Secondary and prepare to become the new Primary.

{"execute": "nbd-server-stop"}
{"execute": "x-colo-lost-heartbeat"}

{"execute": "object-del", "arguments":{ "id": "f2" } }
{"execute": "object-del", "arguments":{ "id": "f1" } }
{"execute": "chardev-remove", "arguments":{ "id": "red1" } }
{"execute": "chardev-remove", "arguments":{ "id": "red0" } }

{"execute": "chardev-add", "arguments":{ "id": "mirror0", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "0.0.0.0", "port": "9003" } }, "server": true } } } }
{"execute": "chardev-add", "arguments":{ "id": "compare1", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "0.0.0.0", "port": "9004" } }, "server": true } } } }
{"execute": "chardev-add", "arguments":{ "id": "compare0", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "127.0.0.1", "port": "9001" } }, "server": true } } } }
{"execute": "chardev-add", "arguments":{ "id": "compare0-0", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "127.0.0.1", "port": "9001" } }, "server": false } } } }
{"execute": "chardev-add", "arguments":{ "id": "compare_out", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "127.0.0.1", "port": "9005" } }, "server": true } } } }
{"execute": "chardev-add", "arguments":{ "id": "compare_out0", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "127.0.0.1", "port": "9005" } }, "server": false } } } }

== Primary resume replication ==
Resume replication after the new Secondary is up.

Start the new Secondary (Steps 2 and 3 above), then on the Primary:
{"execute": "drive-mirror", "arguments":{ "device": "colo-disk0", "job-id": "resync", "target": "nbd://127.0.0.2:9999/parent0", "mode": "existing", "format": "raw", "sync": "full"} }

Wait until the disk is synced, then:
{"execute": "stop"}
{"execute": "block-job-cancel", "arguments":{ "device": "resync"} }

{"execute": "human-monitor-command", "arguments":{ "command-line": "drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.2,file.port=9999,file.export=parent0,node-name=replication0"}}
{"execute": "x-blockdev-change", "arguments":{ "parent": "colo-disk0", "node": "replication0" } }

{"execute": "object-add", "arguments":{ "qom-type": "filter-mirror", "id": "m0", "netdev": "hn0", "queue": "tx", "outdev": "mirror0" } }
{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector", "id": "redire0", "netdev": "hn0", "queue": "rx", "indev": "compare_out" } }
{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector", "id": "redire1", "netdev": "hn0", "queue": "rx", "outdev": "compare0" } }
{"execute": "object-add", "arguments":{ "qom-type": "iothread", "id": "iothread1" } }
{"execute": "object-add", "arguments":{ "qom-type": "colo-compare", "id": "comp0", "primary_in": "compare0-0", "secondary_in": "compare1", "outdev": "compare_out0", "iothread": "iothread1" } }

{"execute": "migrate-set-capabilities", "arguments":{ "capabilities": [ {"capability": "x-colo", "state": true } ] } }
{"execute": "migrate", "arguments":{ "uri": "tcp:127.0.0.2:9998" } }

Note:
If this Primary was previously a Secondary, then the filters need to be
inserted before the filter-rewriter by using the
'"insert": "before", "position": "id=rew0"' options. See below.

== Secondary resume replication ==
Become Primary and resume replication after the new Secondary is up. Note
that now 127.0.0.1 is the Secondary and 127.0.0.2 is the Primary.

Start the new Secondary (Steps 2 and 3 above, but with primary_ip=127.0.0.2),
then on the old Secondary:
{"execute": "drive-mirror", "arguments":{ "device": "colo-disk0", "job-id": "resync", "target": "nbd://127.0.0.1:9999/parent0", "mode": "existing", "format": "raw", "sync": "full"} }

Wait until the disk is synced, then:
{"execute": "stop"}
{"execute": "block-job-cancel", "arguments":{ "device": "resync" } }

{"execute": "human-monitor-command", "arguments":{ "command-line": "drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.1,file.port=9999,file.export=parent0,node-name=replication0"}}
{"execute": "x-blockdev-change", "arguments":{ "parent": "colo-disk0", "node": "replication0" } }

{"execute": "object-add", "arguments":{ "qom-type": "filter-mirror", "id": "m0", "insert": "before", "position": "id=rew0", "netdev": "hn0", "queue": "tx", "outdev": "mirror0" } }
{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector", "id": "redire0", "insert": "before", "position": "id=rew0", "netdev": "hn0", "queue": "rx", "indev": "compare_out" } }
{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector", "id": "redire1", "insert": "before", "position": "id=rew0", "netdev": "hn0", "queue": "rx", "outdev": "compare0" } }
{"execute": "object-add", "arguments":{ "qom-type": "iothread", "id": "iothread1" } }
{"execute": "object-add", "arguments":{ "qom-type": "colo-compare", "id": "comp0", "primary_in": "compare0-0", "secondary_in": "compare1", "outdev": "compare_out0", "iothread": "iothread1" } }

{"execute": "migrate-set-capabilities", "arguments":{ "capabilities": [ {"capability": "x-colo", "state": true } ] } }
{"execute": "migrate", "arguments":{ "uri": "tcp:127.0.0.1:9998" } }

== TODO ==
1. Support shared storage.
2. Develop the heartbeat part.
3. Reduce the VM's downtime during checkpoints.