A set of benchmarks to evaluate the performance of mass OBJECT transfers in block batches. Two
computers are used with dual 10Gbit point-to-point connections and ESXi 7.0. One computer is
running CSRBrouter under a FreeBSD 12.1 VM, and the other computer is running two instances of
CSRBnode.
NODE SPECIFICATIONS
LEFT NODE
Hypervisor: ESXi 7.0
OS: Ubuntu 20.04 5.4.0-37-generic
CPU: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
Network: Dual port Emulex Corporation OneConnect OCe10100/OCe10102 Series 10 GbE (rev 02)
RIGHT NODE
Hypervisor: ESXi 7.0
OS: FreeBSD 12.1-RELEASE-p6
CPU: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
Network: Dual port Emulex Corporation OneConnect OCe10100/OCe10102 Series 10 GbE (rev 02)
CSRB VFS Benchmark #1:
A Python3 benchmark program runs on the Left Node, interfaced with the CSRBnode-A
FUSE VFS. It repeatedly generates a large random data block, writes it via the
CSRBnode-A FUSE VFS to OBJECTs of CSRBnode-B, and then reads back and verifies the data.
CSRB CONFIGURATION
CSRBnode-A (Left Node) connected via 10G Port 1 to 10G Port 1 of CSRBrouter (Right Node)
CSRBnode-B (Left Node) connected via 10G Port 2 to 10G Port 2 of CSRBrouter (Right Node)
CSRBnode-A runs on the Left Node as a CSRBnode with FUSE VFS enabled
CSRBnode-B runs on the Left Node as a CSRBnode with CSRBdb (LevelDB) in RAM tmpfs
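For reference, a minimal sketch of such a write/read/verify loop is shown below. The mount
point, OBJECT path, block size and iteration count are assumptions for illustration; they are
not the parameters used in the actual runs.

    import os

    # Hypothetical values: the FUSE mount point, the target OBJECT path, the
    # block size and the iteration count are illustrative only.
    MOUNT = "/mnt/csrbvfs-a"                                 # CSRBnode-A FUSE VFS mount (assumed)
    TARGET = os.path.join(MOUNT, "csrbnode-b/objectblock0")  # hypothetical OBJECT path on CSRBnode-B
    BLOCK_SIZE = 64 * 1024 * 1024                            # "large random data block" (assumed size)
    ITERATIONS = 100

    for i in range(ITERATIONS):
        data = os.urandom(BLOCK_SIZE)          # generate a large random block

        # Write the block through the FUSE VFS to CSRBnode-B's OBJECT storage.
        with open(TARGET, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())               # push it out of the page cache

        # Read it back through the same VFS path and verify bit-for-bit.
        with open(TARGET, "rb") as f:
            readback = f.read(BLOCK_SIZE)
        if readback != data:
            raise RuntimeError(f"verification failed on iteration {i}")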
CSRB VFS Benchmark #2:
A pair of Python3 benchmark programs run on the Left Node, one interfaced with the
CSRBnode-A FUSE VFS and one with the CSRBnode-B FUSE VFS. Both repeatedly generate
a large random data block, write it via their respective FUSE VFS to OBJECTs of
CSRBnode-A / CSRBnode-B, and then read back and verify the data.
CSRB CONFIGURATION
CSRBnode-A (Left Node) connected via 10G Port 1 to 10G Port 1 of CSRBrouter (Right Node)
CSRBnode-B (Left Node) connected via 10G Port 2 to 10G Port 2 of CSRBrouter (Right Node)
CSRBnode-A runs on the Left Node as a CSRBnode with FUSE VFS and CSRBdb (LevelDB) in RAM tmpfs
CSRBnode-B runs on the Left Node as a CSRBnode with FUSE VFS and CSRBdb (LevelDB) in RAM tmpfs
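A minimal sketch of driving the two FUSE VFS mounts concurrently, one benchmark process per
mount, is shown below. The paths and sizes are again assumptions for illustration.

    import os
    from multiprocessing import Process

    # Hypothetical OBJECT paths, one reached through each CSRBnode's FUSE VFS
    # mount; the real VFS namespace is not documented here.
    TARGETS = ["/mnt/csrbvfs-a/objectblock0", "/mnt/csrbvfs-b/objectblock0"]
    BLOCK_SIZE = 64 * 1024 * 1024      # assumed block size
    ITERATIONS = 100

    def run_benchmark(target: str) -> None:
        """Write/read/verify loop as in the previous sketch, against one mount."""
        for i in range(ITERATIONS):
            data = os.urandom(BLOCK_SIZE)
            with open(target, "wb") as f:
                f.write(data)
                os.fsync(f.fileno())
            with open(target, "rb") as f:
                if f.read(BLOCK_SIZE) != data:
                    raise RuntimeError(f"{target}: verification failed on iteration {i}")

    if __name__ == "__main__":
        # One benchmark process per FUSE VFS mount, running concurrently.
        workers = [Process(target=run_benchmark, args=(t,)) for t in TARGETS]
        for w in workers:
            w.start()
        for w in workers:
            w.join()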
CSRB VFS Stress-Test #1:
Same configuration as CSRB VFS Benchmark #2, but with two instances of the Python3
benchmark program running against each CSRBnode, competing with each other and
overwriting the same block space.
This tests the ability of the system to handle concurrent overlapping accesses to
the same OBJECT blocks.
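As an illustration of what "concurrent overlapping accesses" means here, the sketch below has
several workers overwrite the same block space through the same mount. This is not the actual
benchmark program; the paths, sizes and the loose verification rule are assumptions.

    import os
    from multiprocessing import Process

    # All workers deliberately target the same (hypothetical, pre-existing)
    # OBJECT block path, so their writes overlap and overwrite each other.
    SHARED_TARGET = "/mnt/csrbvfs-a/objectblock0"
    BLOCK_SIZE = 16 * 1024 * 1024
    ITERATIONS = 200
    TAGS = [1, 2, 3, 4]                 # one distinct fill byte per competing worker

    def contending_worker(tag: int) -> None:
        pattern = bytes([tag]) * BLOCK_SIZE
        for _ in range(ITERATIONS):
            with open(SHARED_TARGET, "r+b") as f:   # overwrite in place, no truncation
                f.seek(0)
                f.write(pattern)
                os.fsync(f.fileno())
            with open(SHARED_TARGET, "rb") as f:
                readback = f.read(BLOCK_SIZE)
            # Under contention the read-back may contain any mix of the workers'
            # patterns; the loose check only asserts that nothing appears that
            # no worker ever wrote.
            if not set(readback) <= set(TAGS):
                raise RuntimeError("read back data that no worker wrote")

    if __name__ == "__main__":
        workers = [Process(target=contending_worker, args=(t,)) for t in TAGS]
        for w in workers:
            w.start()
        for w in workers:
            w.join()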
CSRB VFS Stress-Test #2:
3 Hour run of CSRB VFS Stress-Test #1
CSRB VFS Stress-Test #1 was left running for 3 hours to evaluate the stability
and reliability of operation.
ZFS Benchmarks
CSRBnode OBJECT blocks are used as file-based VDEVs to create ZPOOLs and evaluate the CSRB
Network's performance and stability. Multiple VDEVs (8 to 9) are assigned to each CSRBnode used
for storage, to demonstrate and evaluate the parallel operation of the CSRB Network.
The CSRB Network setup for these benchmarks was selected to provide a more realistic,
operationally noisy real-world configuration. It consists of two computers acting as CSRBnodes
for OBJECT storage (Node-A/B), and a third computer (Node-C) as a CSRBnode with FUSE VFS acting
as the ZPOOL master. Node-B is a low-spec computer with multiple medium-load Docker-based
services running in the background. Node-A is a mid-spec computer with additional low-load
services running in the background. Node-A also runs a CSRBrouter instance to which all nodes
are connected. Node-C is a medium-spec computer without any substantial background services.
All nodes are connected via 1Gbit Ethernet, with Node-C using a USB3 GigE adaptor. Both Node-A
and Node-B use spinning hard drives for the LevelDB backend storage, and their performance
impact is clearly visible in the results. The low-spec Node-B in particular has an extensive
impact on ZPOOL performance due to LevelDB's background compaction running over old hard drives
in RAID1.
Two ZPOOL configurations are used: one RAID-Z2 pool with 9 OBJECTBLOCK VDEVs per node (Nodes
A & B, 18 in total), and one RAID0 (stripe) pool with 8 OBJECTBLOCK VDEVs per node (16 in total).
Both configurations use a 512K recordsize and ashift=15 (32 KB sectors, matching the 32 KB
OBJECT size), so that each 512 KB record maps onto 16 OBJECT-sized sectors and aligns with the
effective 16-column OBJECT RAID rows.
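For reference, a pool of the RAID-Z2 shape could be created roughly as sketched below. Only the
geometry (18 file VDEVs, ashift=15, 512K recordsize) comes from the configuration above; the
file paths and pool name are hypothetical.

    import subprocess

    # Hypothetical file paths: 9 OBJECTBLOCK files per storage node, exposed as
    # regular files by the CSRB FUSE VFS; the real namespace and names differ.
    vdevs = [f"/mnt/csrbvfs/node-{n}/objectblock{i}" for n in ("a", "b") for i in range(9)]

    subprocess.run(
        ["zpool", "create",
         "-o", "ashift=15",         # 2^15 = 32 KB sectors, matching the 32 KB OBJECT size
         "-O", "recordsize=512K",   # 512 KB / 32 KB = 16 sectors, i.e. one full RAID row
         "csrbpool",                # hypothetical pool name
         "raidz2", *vdevs],
        check=True,
    )

The RAID0 (stripe) variant would list 8 files per node and omit the raidz2 keyword, so the 16
files become independent striped top-level VDEVs.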
SETUP
NODE SPECIFICATIONS
Node-A: i7-4790K, ZFS RAIDz2 with 4xWD5000HHTZ
Node-B: i3-550, ZFS RAID1 with 2xST3500320NS
Node-C: i7-5500U, USB3 GigE
CSRB CONFIGURATION
Node-A (Top Left): Running CSRBrouter and CSRBnode
Node-B (Top Right): Running CSRBnode
Node-C (Bottom): Running CSRBvfsFUSE and ZFS
All: Linux 5.6.14, LevelDB backend storage
TEST RUNS
CSRB VFS ZFS #1 [Z2,18x4GB,2xNodes]:
Recovering from sudden VDEV disconnection
During normal operation of the ZPOOL, the CSRBnode FUSE VFS was SIGKILLed to cause a
sudden disconnection of all VDEVs, leading to a large number of detected errors and the
suspension of the ZPOOL. The demonstration shows the CSRBnode being restarted and the
ZPOOL being re-imported, triggering a SCRUB.
ZPOOL CONFIGURATION
RAID-Z2
18x4GB CSRB VFS OBJECTBLOCKs, 32KB OBJECT, 9 per Node-A/B
ashift=15
recordsize=512K
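A plausible recovery sequence for this scenario is sketched below with generic ZFS commands. The
pool name is hypothetical, and whether the SCRUB starts automatically on import or is issued
manually is not specified in the demonstration, so it is issued explicitly here.

    import subprocess

    POOL = "csrbpool"   # hypothetical pool name, matching the earlier sketch

    # With the CSRBnode FUSE VFS restarted, the VDEV files are reachable again
    # and the suspended pool can be brought back and checked.
    subprocess.run(["zpool", "import", POOL], check=True)
    subprocess.run(["zpool", "clear", POOL], check=True)     # clear the accumulated error counters
    subprocess.run(["zpool", "scrub", POOL], check=True)     # walk and verify all pool data
    subprocess.run(["zpool", "status", "-v", POOL], check=True)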
CSRB VFS ZFS #2 [Z2,18x4GB,2xNodes]:
Write
Generic large-file write benchmark.
ZPOOL CONFIGURATION
RAID-Z2
18x4GB CSRB VFS OBJECTBLOCKs, 32KB OBJECT, 9 per Node-A/B
ashift=15
recordsize=512K
CSRB VFS ZFS #3 [Z2,18x4GB,2xNodes]:
Scrub
ZPOOL Scrub benchmark.
ZPOOL CONFIGURATION
RAID-Z2
18x4GB CSRB VFS OBJECTBLOCKs, 32KB OBJECT, 9 per Node-A/B
ashift=15
recordsize=512K
CSRB VFS ZFS #4 [Z2,18x4GB,2xNodes]:
Read
Generic large-file read benchmark.
ZPOOL CONFIGURATION
RAID-Z2
18x4GB CSRB VFS OBJECTBLOCKs, 32KB OBJECT, 9 per Node-A/B
ashift=15
recordsize=512K
CSRB VFS ZFS #5 [R0,16x4GB,2xNodes]:
Write
Generic large-file write benchmark.
ZPOOL CONFIGURATION
RAID0
16x4GB CSRB VFS OBJECTBLOCKs, 32KB OBJECT, 8 per Node-A/B
ashift=15
recordsize=512K
CSRB VFS ZFS #6 [R0,16x4GB,2xNodes]:
Scrub
ZPOOL Scrub benchmark.
ZPOOL CONFIGURATION
RAID0
16x4GB CSRB VFS OBJECTBLOCKs, 32KB OBJECT, 8 per Node-A/B
ashift=15
recordsize=512K
CSRB VFS ZFS #7 [R0,16x4GB,2xNodes]:
Read
Generic large-file read benchmark.
ZPOOL CONFIGURATION
RAID0
16x4GB CSRB VFS OBJECTBLOCKs, 32KB OBJECT, 8 per Node-A/B