iodyne Pro Data is the first all-NVMe all-SSD Thunderbolt RAID device that combines multiple SSDs and multiple Thunderbolt port pairs in a single elegant form factor. Pro Data also keeps all your creative assets protected by RAID-6 and Encryption. In this article, we'll learn how Pro Data's Thunderbolt Multipathing, Thunderbolt Daisy-Chaining, the Thunderbolt performance of new Apple Silicon Macs, and features of macOS can work together to build scalable Pro Data setups.
New demanding projects and new customers can show up at any time, and using these tips, your setup can be ready to expand to handle any new project. We'll see that you can scale up both storage capacity and storage performance by expanding your Pro Data setup on-the-fly. We'll also learn how to create single Apple File System (APFS) containers that span more than one Pro Data, for project scenarios that require single-filesystem capacity or performance beyond that of one Pro Data or even the Mac's built-in SSD.
Tools for Scaling
First, let's review a few important tools in our toolbox to tackle a scalable Pro Data setup:
- Pro Data provides four Thunderbolt port pairs, enabling Thunderbolt NVMe multipathing and multi-computer access. For maximum performance, connect two Thunderbolt ports from your Mac to Pro Data.
- Each Pro Data port pair has an upstream port to your computer, and a downstream port you can use to create a Thunderbolt daisy-chain. You can connect a downstream port to other Pro Datas, as well as other Thunderbolt or USB accessories, Docks, or Displays. Each Thunderbolt port on your Mac can support a daisy-chain of up to six devices.
- With the ability to Multipath and Daisy-Chain, you can expand your Pro Data storage configuration on-the-fly, adding capacity as needed, without sacrificing performance. On Apple Silicon Macs like the MacBook Pro and Mac Studio, each Thunderbolt port can provide storage performance up to 3GB/s. And a Mac Studio with M1 Ultra packs an incredible six ports. So you can have up to three multipathed daisy-chains at the same time! Daisy-chains are a great way to expand your configuration when needed, since they preserve all of your investment in your existing Mac and existing Pro Data devices.
- When you add Pro Data devices to your setup, you don't add complexity: all your devices are visible from the one user interface provided by the iodyne Utility. So you can manage and create Containers from many Pro Data devices just as easily as from one device.
- Pro Data comes with an interlocking stand to place it in a vertical orientation. The stand makes it easy to organize multiple Pro Datas on a desktop or in a rack, with the stand feet interlocked to increase stability, simplify cabling, and save desk or rack space.
- macOS provides a set of features called AppleRAID that allows us to stripe data across storage from multiple devices. We'll see below how we can combine Pro Data's hardware-accelerated RAID-6 with AppleRAID to create huge RAID-6+0 or RAID-0+0 containers. These containers provide single-filesystem capacity that can expand beyond a single 24TB Pro Data device, or scale performance for a single project to unprecedented levels like playback of 18× 8K30p streams.
Pro Data and AppleRAID
Since Pro Data has hardware-accelerated RAID-6 and RAID-0 built-in, all of your containers stripe data across all of the SSDs inside Pro Data, for maximum efficiency and reliability. And if each of your projects fits inside the 12TB or 24TB capacity of a single Pro Data device, or its performance envelope of up to 5GB/s, you can simply daisy-chain multiple Pro Data devices and use separate containers to manage all of your projects and workflows. Containers provide an easy way to dynamically expand your configuration as you add new Pro Data devices.
But there are two specific scenarios where you might want a single project's data to span multiple Pro Data devices:
- You need a single filesystem volume that is larger than one Pro Data. You might encounter this scenario when trying to build something like a giant cache of your team's shared Dropbox that contains large media files, or a large expanding B-Roll library.
- You need a single filesystem volume that can scale performance beyond the limit of one Pro Data. You might encounter this scenario when building a single video project that needs to perform picture-in-picture playback of a very large number of 4K or 8K streams simultaneously.
macOS AppleRAID software layered on top of Pro Data will allow us to create a single container that scales capacity, performance, or both. Let's review how AppleRAID works, and then we'll provide a detailed example of how to configure AppleRAID with Pro Data.
AppleRAID Modes
AppleRAID (AR) can be configured within Disk Utility, or using Terminal and the macOS command-line. The Terminal commands will provide access to more detailed settings and status than are currently available in macOS Disk Utility, so we’ll use those. When you configure AR, you are setting up a "virtual disk" on your Mac that combines one or more other "participant disks" (in our case, Pro Data containers) in a particular layout: Mirror, Stripe, or Concat. In our case, we want both capacity and performance, so we'll use Stripe.
Writing to a Striped AR virtual disk alternates writes across the participants at a pre-defined "chunk size". If we have two participants, that means we'll get the combined capacity and the combined performance of both participants. Typical large i/o, like reading frames of a 4K or 8K video, will parallelize across all participating Pro Datas.
We'll refer to an AR "virtual disk" as a Set. Each Set will be built from Pro Data containers. You can have as many Sets as you like, depending on your project requirements. In summary, you'll want to construct Sets using the Stripe layout when you know capacity needs to exceed the space available in one Pro Data, or when your read or write performance requirements exceed the performance of one Pro Data.
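If you'd like to get oriented before diving in, the diskutil manual page documents all of the AppleRAID verbs we'll use below, and running the "ar" verb group with no arguments should print a short usage summary:
$ man diskutil
$ diskutil ar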
Now that we understand AppleRAID, let's work through an example of how to build a single filesystem with multiple Pro Data devices. In the example, you'll first create your participant containers using the iodyne Utility, assigning each container a name, size, RAID protection level, and encryption password. Make sure to use the same RAID protection level and encryption setting for each Container within a Set. Make note of the disk number (diskX for some number X) that is assigned to each Container. Then you'll open the Terminal utility to construct the AppleRAID set, following the recipes shown below, and inserting the disk numbers for each participating container.
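If you ever want to double-check a container's disk number from the Terminal instead of the Usage tab, diskutil can list the disks attached to your Mac; Pro Data containers should show up as external physical disks under the names you assigned them:
$ diskutil list external physical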
Example: Stripe
To create a Stripe of containers, use the iodyne App to create a set of containers for the stripe, one per Pro Data device. Since our Set is going to be called "example1", we'll name our containers "example1A" and "example1B", and refer to their disk numbers as <diskA> and <diskB>. When creating a participant container for a Stripe, be sure that:
- Each container has the same RAID protection level assigned
- Each container has the same size
- Each container has its Type set to Raw
Use the < > buttons in the Usage tab of the iodyne Utility to navigate to the Pro Data device providing storage for each container, then use the + button to create a new container.
Repeat this process for each participating Pro Data, until you have one container per Pro Data, and you have recorded the disk numbers shown in the Usage tab in your notes. In our example, the list of disk numbers will be marked as <diskA> and <diskB>. You'll substitute this list of disk numbers into the command below. Now type these commands in your macOS Terminal:
$ diskutil ar create stripe example1 free <diskA> <diskB>
Started RAID operation
Unmounting proposed new member <diskA>
Unmounting proposed new member <diskB>
Repartitioning <diskA> so it can be in a RAID set
Unmounting disk
Creating the partition map
Using <diskA>s2 as a data slice
Repartitioning <diskB> so it can be in a RAID set
Unmounting disk
Creating the partition map
Using <diskB>s2 as a data slice
Creating a RAID set
Bringing the RAID partitions online
Waiting for the new RAID to spin up "<set-id>"
Mounting disk
Could not mount <diskS> after erase
Finished RAID operation
You can replace the string "example1" with a name you prefer for your Set. At the end of the "ar create" operation's output is an identifier string <set-id> you can use with other "diskutil ar" commands, and a new disk number that we've marked as <diskS>. This disk number is the new Set's virtual disk. You'll now plug this disk number into the following commands.
$ diskutil ar update ChunkSize 1048576 <diskS>
The RAID has been successfully updated
The value 1048576 (1MiB) used for the ChunkSize parameter is the optimal size for distributing i/o across Pro Data containers within the stripe.
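Before formatting the Set, you can optionally sanity-check its virtual disk with diskutil, which reports details like its size and device node:
$ diskutil info <diskS>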
$ diskutil apfs create <diskS> example1
Started APFS operation on <diskS>
Creating a new empty APFS Container
Unmounting Volumes
Switching <diskS> to APFS
Creating APFS Container
Created new APFS Container <diskF>
Disk from APFS operation: <diskF>
Finished APFS operation on <diskS>
Started APFS operation on <diskF>
Preparing to add APFS Volume to APFS Container <diskF>
Creating APFS Volume
Created new APFS Volume <diskF>s1
Mounting APFS Volume
Setting volume permissions
Disk from APFS operation: <diskF>s1
Finished APFS operation on <diskF>
Again, you can replace the string "example1" with a name you prefer for the new APFS volume.
At the end of the "apfs create" operation's output is another new disk number <diskF>: this is another virtual disk created by APFS, which you can use in other APFS operations or locate in the Disk Utility app. You should now see a new filesystem "example1" mounted on your Desktop or visible in the Finder sidebar.
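Because all volumes in an APFS container share its free space, you can also carve additional volumes out of the same Set later on. As a minimal sketch (the volume name here is just an example):
$ diskutil apfs addVolume <diskF> APFS example1-scratch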
You can also use this Terminal command to list your Sets:
$ diskutil ar list
AppleRAID sets (1 found)
===============================================================================
Name: example1
Unique ID: <set-id>
Type: Stripe
Status: Online
Size: 20.0 TB (19991022796800 Bytes)
Rebuild: manual
Device Node: <diskS>
-------------------------------------------------------------------------------
# DevNode UUID Status Size
-------------------------------------------------------------------------------
0 <diskA>s2 9A7D56A8-34E3-4CAB-85D4-39CC4BFEF946 Online 9995511398400
1 <diskB>s2 87CB9D7C-3375-420C-A3F1-88634F6A8A29 Online 9995511398400
===============================================================================
Your Stripe over two separate Pro Data containers is now ready for use. It has the total capacity of all the participant containers, and approximately twice the bandwidth when apps use parallel sequential i/o. You can use this recipe with any number of Pro Data containers across two, three, or more Pro Data devices.
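When a project wraps up and you want to return the participant containers to other uses, you can tear the Set down from the Terminal as well. A minimal sketch, assuming you have already copied off any data you want to keep (substitute your own disk numbers and set identifier):
$ diskutil apfs deleteContainer <diskF>
$ diskutil ar delete <set-id>
Afterwards, the participant containers are ordinary Pro Data containers again, and you can manage or delete them in the iodyne Utility.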