My journey with Btrfs - Part 2

This is part two of my journey with Btrfs, where I'm on a quest for reliable and robust storage. In the last post I had a RAID 5 Btrfs array with two 2TB hard drives. I was a bit surprised because Btrfs behaved like a RAID 1 but reported the storage of a RAID 0. I've just realized that I never explained what RAID is. RAID stands for Redundant Array of Independent Disks. The different RAID levels describe how the data is stored and whether or not the array can survive a drive failure. I'll just explain the most commonly used levels.
  • RAID 0: uses all your disks to act as one. There is no redundancy to speak of. If one of your drives fails, your entire array fails.
  • RAID 1: uses half of the devices you give it as a mirror for your data.
  • RAID 5: spreads data among the hard drives; if one drive fails, it can still rebuild the data using the parity stored on the other drives.
Those explanations are simplifications; each RAID level has its advantages and disadvantages. RAID 5 is, like RAID 1, able to survive the loss of one hard drive. I said in the previous post that I would do some experiments with Btrfs, but I first wanted a third hard drive so that my array would actually behave like a RAID 5. During my conversations on IRC I learned three things:
  • Two drives in Btrfs RAID 5 behave like a Btrfs RAID 1.
  • The free space reported should be 1.8 TiB; in my case it reports 3.6 TiB, as if it were a RAID 0. This appears to be a bug.
  • The Btrfs wiki says "In case of a 2 device RAID5 filesystem, one device has data and the other has parity data." This is apparently a simplification: it would be more accurate to say that for the data stored on one drive, the parity is stored on the other, and vice versa.
So before I start messing around with my drives I wanted to have three devices in my array. First step to add a new drive to the array: identify its name in Linux. Just like before, we do that with fdisk.
root@vaur:/home/vaur# fdisk -l
Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0009a54b

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdc1          63 1953520064 1953520002 931.5G 83 Linux

Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 330FCD63-59EA-4EAF-AD76-BA224132321D

Device     Start        End    Sectors  Size Type
/dev/sdf1   2048 7814037134 7814035087  3.7T Linux filesystem

Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sde: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x6687616f

Device     Boot Start        End    Sectors   Size Id Type
/dev/sde1        2048 1953523711 1953521664 931.5G 83 Linux

Disk /dev/sda: 298.1 GiB, 320072933376 bytes, 625142448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0000ef64

Device     Boot    Start       End   Sectors   Size Id Type
/dev/sda1  *        2048  19531775  19529728   9.3G 83 Linux
/dev/sda2       19533822 625141759 605607938 288.8G  5 Extended
/dev/sda5       19533824  36102143  16568320   7.9G 82 Linux swap / Solaris
/dev/sda6       36104192 625141759 589037568 280.9G 83 Linux

Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
root@vaur:/home/vaur#
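As a side note (this was not part of my original session), lsblk gives a more compact view of the same devices and can make it quicker to spot the drive you are after:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT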
The hard drive that I want to add is a 1TB one. Adding a 1TB drive to my array means that I won't be able to fully utilize my 2TB drives, as shown by this tool. But it won't impact the usable space in a negative way. The drive that I want to use in my case is /dev/sde, and the next step will be adding it to my array. If you are unsure which drive is the one you want, just mount it first and check what data it contains.
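For illustration (this is not from my original session), that quick check could look something like this, assuming the partition is /dev/sde1 and using a throwaway mount point:
mkdir /mnt/check    # temporary mount point, the name is arbitrary
mount /dev/sde1 /mnt/check
ls /mnt/check       # look at the files to confirm it's the drive you think it is
umount /mnt/check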
root@vaur:/media# btrfs device add /dev/sde /media/vaur-disk
/dev/sde appears to contain a partition table (dos). Use the -f option to force overwrite.
root@vaur:/media# btrfs device add /dev/sde /media/vaur-disk -f
ERROR: unable to open /dev/sde: Device or resource busy
Just like before, Btrfs warns us that the drive contains data. And I forgot to unmount my drive first... oops.
root@vaur:/media# umount /dev/sde1
root@vaur:/media# btrfs device add /dev/sde /media/vaur-disk -f
root@vaur:/media# btrfs filesystem show /dev/sdb
Label: none  uuid: 2b820b02-e391-4f7a-9d9c-335f5abae4f5
        Total devices 3 FS bytes used 234.06GiB
        devid    1 size 1.82TiB used 253.01GiB path /dev/sdb
        devid    2 size 1.82TiB used 253.01GiB path /dev/sdd
        devid    3 size 931.51GiB used 0.00B path /dev/sde
/dev/sde is now part of my Btrfs array, but there is nothing stored on it yet. To spread data onto the new device I need to balance my array, like this. The wiki advises balancing any array that isn't RAID 0.
btrfs filesystem balance /media/vaur-disk
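While a balance is running, you can check its progress from another terminal with the status subcommand (I didn't capture this in my original session, the path is just the same mount point):
btrfs balance status /media/vaur-disk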
It is worth noting that I didn't have to unmount my array and could still access it during the operation, though I didn't try to benchmark transfers while it was running, since I wanted to know how long balancing would take under normal conditions. After one hour, the command finally finished.
Done, had to relocate 254 out of 254 chunks
My array now looks like this.
root@vaur:/media/vaur-disk# btrfs fi show
Label: none  uuid: 2b820b02-e391-4f7a-9d9c-335f5abae4f5
        Total devices 3 FS bytes used 234.00GiB
        devid    1 size 1.82TiB used 121.03GiB path /dev/sdb
        devid    2 size 1.82TiB used 121.03GiB path /dev/sdd
        devid    3 size 931.51GiB used 120.00GiB path /dev/sde
As you can see, Btrfs filled /dev/sde with data and removed what was in excess from the two other drives. In my previous post I was a bit concerned about the reported free space. This command explains why.
root@vaur:/media/vaur-disk# btrfs fi usage /media/vaur-disk
WARNING: RAID56 detected, not implemented
Overall:
    Device size:                   4.55TiB
    Device allocated:              2.06GiB
    Device unallocated:            4.55TiB
    Device missing:                  0.00B
    Used:                        481.38MiB
    Free (estimated):                0.00B      (min: 8.00EiB)
    Data ratio:                       0.00
    Metadata ratio:                   2.00
    Global reserve:               96.00MiB      (used: 0.00B)

Data,RAID5: Size:240.00GiB, Used:233.76GiB
   /dev/sdb      120.00GiB
   /dev/sdd      120.00GiB
   /dev/sde      120.00GiB

Metadata,RAID1: Size:1.00GiB, Used:240.64MiB
   /dev/sdb        1.00GiB
   /dev/sdd        1.00GiB

System,RAID1: Size:32.00MiB, Used:48.00KiB
   /dev/sdb       32.00MiB
   /dev/sdd       32.00MiB

Unallocated:
   /dev/sdb        1.70TiB
   /dev/sdd        1.70TiB
   /dev/sde      811.51GiB
As you can see, estimating the available space isn't implemented yet in Btrfs for RAID 5. I'll have to monitor my drives to see if I need more space.
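In the meantime, a rough way to keep an eye on it (again, not from my original session) is btrfs filesystem df, which reports the allocated and used space per profile, so I can do the RAID 5 maths myself:
btrfs fi df /media/vaur-disk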
