External storage access

Access and configure the 17.4 TB external NVMe storage available on Voltage Park deployments.

Overview

Voltage Park nodes include substantial external storage in addition to the boot drive. Beyond the ~447 GB boot drive, each node provides 6 additional NVMe drives totaling ~17.4 TB of external storage capacity.

Storage layout

  • nvme0n1 (~447 GB): primary OS boot drive, mounted at /
  • nvme1n1 through nvme6n1: 6 external data drives at 2.9 TB each (~17.4 TB total)

These external drives are available but not automatically mounted. Configure them based on your requirements.

Setup options

Choose the configuration that best matches your needs:

Option 1: Individual drives (recommended)

Pros:
  • If one drive fails, only data on that drive is lost
  • Easy to recover: mount the remaining drives
  • Simple to manage and troubleshoot
  • Can move drives to another system
Cons:
  • Need to manage 6 separate mount points

Use this option when data recovery and safety are priorities.

Option 2: RAID 6 (balance of capacity and safety)

Pros:
  • Survives up to 2 simultaneous drive failures
  • Data remains accessible with failed drives
  • Single large volume (~11.6 TB usable)
Cons:
  • Requires RAID knowledge to recover
  • Loses ~5.8 TB to redundancy

Use this option when you need both capacity and redundancy.

Option 3: RAID 0 (maximum capacity)

Pros:
  • Full ~17.4 TB usable capacity
  • Single mount point
Cons:
  • No redundancy: a single drive failure loses all data in the array

Use this option when maximum capacity matters most and the data can be regenerated or restored from backups.
Option 4: LVM (flexible management)

Pros:
  • Flexible volume management
  • Can add/remove drives dynamically
  • Single large volume
Cons:
  • Without RAID, no redundancy (similar to RAID 0)
  • More complex to manage

Use this option when you need flexibility in storage management.

Recommended setup: individual drives

This approach provides the best balance of simplicity and data safety.

Create mount points

sudo mkdir -p /mnt/nvme{1..6}

Format the drives

Format each drive with the ext4 filesystem. Warning: formatting destroys any existing data on a drive, so confirm the device names with lsblk first:

sudo mkfs.ext4 /dev/nvme1n1
sudo mkfs.ext4 /dev/nvme2n1
sudo mkfs.ext4 /dev/nvme3n1
sudo mkfs.ext4 /dev/nvme4n1
sudo mkfs.ext4 /dev/nvme5n1
sudo mkfs.ext4 /dev/nvme6n1

Mount the drives

for i in {1..6}; do
  sudo mount /dev/nvme${i}n1 /mnt/nvme${i}
done

Make mounts persistent

Enable auto-mount on boot by adding entries to /etc/fstab:

for i in {1..6}; do
  echo "/dev/nvme${i}n1 /mnt/nvme${i} ext4 defaults 0 2" | sudo tee -a /etc/fstab
done
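Raw device paths like /dev/nvme1n1 are not guaranteed to be stable, since NVMe enumeration order can change across reboots. A more robust variant (a sketch, assuming the drives were formatted as above) references each filesystem by its UUID and adds nofail so a missing drive does not block boot:

```shell
# Sketch: fstab entries keyed by filesystem UUID instead of device path.
# blkid reports the UUID written by mkfs.ext4; nofail lets the system
# finish booting even if one drive is absent.
for i in {1..6}; do
  uuid=$(sudo blkid -s UUID -o value /dev/nvme${i}n1)
  echo "UUID=${uuid} /mnt/nvme${i} ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab
done
```

If you use this variant, skip the /dev-path loop above so each drive gets only one fstab entry.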

Set ownership

Make the drives writable by your user (replace ubuntu:ubuntu below if your username differs):

for i in {1..6}; do
  sudo chown ubuntu:ubuntu /mnt/nvme${i}
done

Verify setup

Check that all drives are mounted:

df -h | grep nvme

Expected output:

/dev/nvme0n1p2  439G   28G  389G   7% /
/dev/nvme0n1p1  511M  6.1M  505M   2% /boot/efi
/dev/nvme1n1    2.9T   28K  2.8T   1% /mnt/nvme1
/dev/nvme2n1    2.9T   28K  2.8T   1% /mnt/nvme2
/dev/nvme3n1    2.9T   28K  2.8T   1% /mnt/nvme3
/dev/nvme4n1    2.9T   28K  2.8T   1% /mnt/nvme4
/dev/nvme5n1    2.9T   28K  2.8T   1% /mnt/nvme5
/dev/nvme6n1    2.9T   28K  2.8T   1% /mnt/nvme6

Usage

Access the drives

Each drive is accessible at its mount point:

# Access drive 1
cd /mnt/nvme1
 
# Create files
echo "test data" > /mnt/nvme1/myfile.txt
 
# List contents
ls -lh /mnt/nvme1/

Check space usage

# Check space on all drives
df -h | grep nvme
 
# Check space on a specific drive
df -h /mnt/nvme1

Distribute data

Distribute your data across all drives:

# Store different datasets on different drives
cp -r /path/to/dataset1 /mnt/nvme1/
cp -r /path/to/dataset2 /mnt/nvme2/
# ... and so on
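With six independent filesystems, it can help to script placement rather than track free space by hand. The helper below is a hypothetical sketch (the function name and dataset path are illustrative) that picks whichever drive currently has the most free space:

```shell
# Hypothetical helper: print the mount point with the most available space.
pick_drive() {
  df --output=target,avail "$@" | tail -n +2 | sort -k2 -n | tail -1 | awk '{print $1}'
}

# Usage (once the drives are mounted):
#   dest=$(pick_drive /mnt/nvme{1..6})
#   cp -r /path/to/dataset "${dest}/"
```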

Alternative setup: RAID 6

If you prefer redundancy over maximum capacity:

Install RAID tools

sudo apt update
sudo apt install mdadm

Create RAID 6 array

# Create RAID 6 (survives 2 drive failures)
sudo mdadm --create /dev/md0 --level=6 --raid-devices=6 \
  /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 \
  /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1
 
# Format the array
sudo mkfs.ext4 /dev/md0
 
# Create mount point and mount
sudo mkdir -p /mnt/raid
sudo mount /dev/md0 /mnt/raid
 
# Make persistent
echo '/dev/md0 /mnt/raid ext4 defaults 0 2' | sudo tee -a /etc/fstab
 
# Save RAID configuration
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

This provides ~11.6 TB usable space with 2-drive fault tolerance.
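After creating the array, verify that the initial sync completes and the array is healthy. A quick check (assuming the /dev/md0 array created above) looks like:

```shell
# Kernel's view of all md arrays, including resync/rebuild progress
cat /proc/mdstat

# Detailed per-array status: state, failed devices, sync percentage
sudo mdadm --detail /dev/md0
```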

Alternative setup: LVM

For maximum flexibility in storage management:

Install LVM tools

sudo apt update
sudo apt install lvm2

Create LVM setup

# Create physical volumes
sudo pvcreate /dev/nvme{1..6}n1
 
# Create volume group
sudo vgcreate data_vg /dev/nvme{1..6}n1
 
# Create logical volume with all space
sudo lvcreate -l 100%FREE -n data_lv data_vg
 
# Format and mount
sudo mkfs.ext4 /dev/data_vg/data_lv
sudo mkdir -p /mnt/data
sudo mount /dev/data_vg/data_lv /mnt/data
 
# Make persistent
echo '/dev/data_vg/data_lv /mnt/data ext4 defaults 0 2' | sudo tee -a /etc/fstab
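The flexibility LVM buys shows up when capacity changes. As a sketch, if another drive is added later (the device name below is a placeholder, following the /dev/nvmeXn1 convention used elsewhere on this page), the volume can be grown online:

```shell
# Sketch: grow the volume after adding a new drive (placeholder device name)
sudo vgextend data_vg /dev/nvmeXn1               # add the drive to the volume group
sudo lvextend -l +100%FREE /dev/data_vg/data_lv  # grow the logical volume
sudo resize2fs /dev/data_vg/data_lv              # grow ext4 online to match
```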

Troubleshooting

Check drive status

# List all block devices
lsblk
 
# Check drive health
sudo smartctl -a /dev/nvme1n1  # Requires smartmontools package

Drives don't mount on boot

# Check fstab syntax
cat /etc/fstab
 
# Try manual mount to test
sudo mount -a
 
# Check system logs
sudo journalctl -xe | grep mount

Unmount drives

# Unmount a specific drive
sudo umount /mnt/nvme1
 
# Unmount all data drives
for i in {1..6}; do
  sudo umount /mnt/nvme${i}
done

Remove drives from fstab

To undo the auto-mount configuration:

# Edit fstab and remove the nvme entries
sudo nano /etc/fstab
 
# Or use sed to remove them
sudo sed -i '/\/mnt\/nvme[1-6]/d' /etc/fstab

Recovery scenarios

If one drive fails

With individual drives setup:

  1. Identify the failed drive using dmesg or lsblk
  2. The other 5 drives remain fully accessible
  3. Only data on the failed drive is lost
  4. Replace the failed drive and format it
  5. Restore data from backups for that drive only

Move drives to another system

  1. Unmount the drives: sudo umount /mnt/nvme{1..6}
  2. Physically move the drives
  3. On the new system, mount them: sudo mount /dev/nvmeXn1 /mnt/target
  4. All data remains intact

Best practices

Organization:
  • Keep track of what data is stored on which drive
  • Use descriptive names via symlinks (creating them under /mnt typically requires sudo):
    sudo ln -s /mnt/nvme1 /mnt/datasets
    sudo ln -s /mnt/nvme2 /mnt/models
Monitoring:
  • Install and use smartmontools to monitor drive health
  • Check space regularly to avoid running out: df -h
Data safety:
  • Maintain regular backups of critical data
  • Even with redundancy, backups are essential
  • Test your backup restoration process
Planning:
  • Plan your data distribution strategy before filling drives
  • Document which data goes where

What's next