Pluralis Node0-7.5B
Deploy a Pluralis Node0-7.5B node on a Spheron GPU instance. Pluralis Protocol Learning allows multiple participants to collaboratively train large-scale foundation models without central ownership. Node0-7.5B enables permissionless participation in distributed AI model pretraining on GPUs with 16GB+ VRAM.
Overview
Models remain unextractable and become collectively owned protocol assets under the Pluralis Protocol Learning framework.
Node0-7.5B: Permissionless, model-parallel pretraining framework for GPUs with 16GB+ VRAM.
Requirements
Hardware:
- GPU: 16GB+ VRAM (e.g., RTX 4090, A100, H100)
- RAM: 16GB+ recommended
- Storage: 50GB free
- Network: Stable connection
Software:
- Ubuntu 22.04 or 24.04
- Python 3.11
- Miniconda
- Git
Prerequisites
- Spheron account (sign up)
- Payment method configured
- SSH key (see SSH connection guide)
- HuggingFace account and token (get token)
Step 1: Deploy GPU on Spheron
- Sign up at app.spheron.ai.
- Add credits: Click Credits → Add funds (card/crypto).
- Deploy:
- Click Deploy in the sidebar.
- Select GPU: RTX 4090, A100, or H100 (16GB+ VRAM).
- Region: Closest to you.
- OS: Ubuntu 22.04 or 24.04 LTS.
- Select your SSH key.
- Click Deploy Instance.
Step 2: Connect to instance
ssh root@<your-instance-ip>
Step 3: Install dependencies
# Install PyTorch (CPU version for setup)
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
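To confirm the CPU build installed cleanly before continuing, you can run a quick import check (an optional sanity step, not part of the upstream instructions):

```shell
# Optional sanity check: does torch import, and is this the CPU-only build?
python3 - <<'EOF'
try:
    import torch
    print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
except ImportError:
    print("torch is not installed")
EOF
```

A CPU-only build reports `CUDA available: False`, which is expected at this stage.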
# Install Git
sudo apt update && sudo apt install -y git
Step 4: Clone repository
git clone https://github.com/PluralisResearch/node0
cd node0
Step 5: Install Miniconda
# Download installer
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
# Install
bash ~/miniconda.sh -b -p ~/miniconda3
# Initialize
~/miniconda3/bin/conda init bash
# Clean up
rm ~/miniconda.sh
# Verify
source ~/miniconda3/etc/profile.d/conda.sh && conda --version
Step 6: Create Conda environment
# Create environment
conda create -n node0 python=3.11 -y
# Activate
conda activate node0
# Install Node0
pip install .
Step 7: Configure Node0
# Generate configuration
python3 generate_script.py --host_port 49200 --announce_port 22
When prompted, enter your HuggingFace token:
- Visit huggingface.co/settings/tokens.
- Create a new token with "Read" permissions.
- Copy and paste when prompted.
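If you want to confirm the token is valid before pasting it, you can query the public `whoami-v2` endpoint directly. The `check_hf_token` helper below is illustrative, not part of Node0:

```shell
# Illustrative helper: verify a HuggingFace token against the whoami-v2 API.
check_hf_token() {
  if [ -z "$1" ]; then
    echo "no token supplied"
    return 1
  fi
  # A valid token returns your account JSON; an invalid one returns an error body.
  curl -s -H "Authorization: Bearer $1" https://huggingface.co/api/whoami-v2
}
# Usage: check_hf_token "hf_xxx..."
```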
Step 8: Start Node0 server
./start_server.sh
The server starts and begins listening on the configured ports.
Verification
Check server status:
# Monitor logs
tail -f logs/node0.log
# Verify process running
ps aux | grep node0
Confirm participation:
- Check the Pluralis dashboard for your node.
- Verify network connectivity.
- Monitor contribution metrics.
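The log and process checks above can be complemented by confirming the host port is actually in the listening state. The `check_port` helper is a sketch; the port number follows the configuration from Step 7:

```shell
# Sketch: report whether a TCP port is in the listening state.
check_port() {
  if ss -tln 2>/dev/null | grep -q ":$1 "; then
    echo "port $1: listening"
  else
    echo "port $1: not listening"
  fi
}
check_port 49200
```

If the port is not listening, recheck the server logs before investigating network-level issues.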
Troubleshooting
Issue: Installation fails
Symptoms: pip or conda errors during setup.
Resolution:
# Verify Python version
python --version
# Check conda environment
conda env list
Issue: HuggingFace token error
Symptoms: Authentication failure when generating configuration.
Resolution:
- Verify the token has "Read" permissions.
- Regenerate the token if expired.
- Check that the token was copied without extra spaces.
Issue: Server won't start
Symptoms: start_server.sh exits with an error.
Resolution:
# Check ports available
lsof -i :49200
lsof -i :22
# View error logs
cat logs/node0.log
Issue: Connection issues
Symptoms: Node cannot reach the Pluralis network.
Resolution:
- Verify the firewall allows ports 49200 and 22.
- Check GPU is accessible:
nvidia-smi
- Ensure sufficient VRAM is available.
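If the node still cannot reach the network, two common culprits are a host firewall and a missing or broken GPU driver. The commands below are a sketch that assumes ufw is the firewall in use; substitute your provider's tooling if it differs:

```shell
# Open the Node0 ports (only if ufw is your active firewall; requires root).
# sudo ufw allow 49200/tcp
# sudo ufw allow 22/tcp
# sudo ufw status

# Check total and free VRAM; degrades gracefully if the NVIDIA driver is missing.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv
else
  echo "nvidia-smi not found: NVIDIA driver not installed or not on PATH"
fi
```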
What's next
- Pluralis Research GitHub
- Getting Started: Spheron deployment basics
- SSH Connection: SSH setup guide
- General Info: Support channels