Gonka AI Node
Deploy a Gonka AI node on a Spheron GPU instance. Gonka is a decentralized AI compute network that uses Proof of Work 2.0, directing GPU compute toward real AI training and inference workloads. Operators earn rewards for providing verifiable compute.
Overview
Gonka transforms GPU compute into useful AI work through Proof of Work 2.0, where computational power advances real AI models instead of solving arbitrary puzzles. Operators earn rewards for delivering verifiable compute.
Key features:
- Real AI workloads (not wasteful mining)
- Honest-majority validation
- Reputation-based trust system
- Open, censorship-free LLM inference and training
Hardware requirements
Minimum per MLNode:
- VRAM: 40GB+ usable
- RAM: 1.5x GPU VRAM
- NVIDIA Container Toolkit with CUDA 12.6-12.9

Per Network Node (host):
- CPU: 16-core
- RAM: 64GB+
- Storage: 1TB NVMe SSD
- Network: Stable high-speed connection
- MLNodes: 2-5 per Network Node recommended

Large models (DeepSeek R1, Qwen3-235B):
- 2+ MLNodes, each with 8x H200 GPUs
- 640GB+ VRAM per MLNode

Medium models (Qwen3-32B, Gemma-3-27B):
- 2+ MLNodes, each with 4x A100 or 2x H100
- 80GB+ VRAM per MLNode
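As a quick sanity check of the RAM rule above (system RAM at least 1.5x total GPU VRAM), here is a small shell sketch. The VRAM figure is a placeholder; on a live host you would read it from nvidia-smi instead:

```bash
# Placeholder: total GPU VRAM in GB (e.g. one A100 80GB card).
# On a live host: nvidia-smi --query-gpu=memory.total --format=csv,noheader
VRAM_GB=80
# The requirements above call for system RAM of at least 1.5x GPU VRAM.
REQUIRED_RAM_GB=$(( VRAM_GB * 3 / 2 ))
echo "Minimum recommended RAM: ${REQUIRED_RAM_GB}GB"
```

Compare the printed figure against `free -g` on your instance before committing to a deployment.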
Key management overview
Gonka uses a three-key system:
- Account Key (Cold): Created locally, high-privilege, store offline
- Consensus Key (TMKMS): Managed by secure service for block validation
- ML Operational Key (Warm): Created on server for automated transactions
Read the Gonka Key Management Guide before production deployment.
Prerequisites
- Spheron AI account (sign up)
- Payment method configured
- SSH key (see SSH connection guide)
- Local secure machine for Account Key generation
- HuggingFace account and token
Part A: Local machine setup
Step 1: Install CLI tool
Download the inferenced binary from Gonka releases:
```bash
chmod +x inferenced
./inferenced --help
```
On macOS, allow execution in System Settings → Privacy & Security if prompted.
Step 2: Create Account Key
```bash
./inferenced keys add gonka-account-key --keyring-backend file
```
Save the mnemonic phrase securely offline. This is your only recovery method.
Part B: Deploy GPU on Spheron
Step 3: Sign up and add credits
- Go to app.spheron.ai and sign up.
- Click Credits → Add funds (card or crypto).
Step 4: Deploy instance
- Click Deploy in the sidebar.
- Select GPU: A100 (80GB) or H100; at least 40GB of usable VRAM is required.
- Region: Closest to you.
- OS: Ubuntu 22.04 LTS + CUDA 12.8.
- Select your SSH key.
- Click Deploy Instance.
Part C: Server setup
Step 5: Connect to instance
```bash
ssh root@<your-instance-ip>
```
Step 6: Install dependencies
```bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y git docker.io docker-compose
```
Step 7: Install NVIDIA container toolkit
```bash
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
Verify GPU access:
```bash
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```
Step 8: Clone Gonka repository
```bash
git clone https://github.com/gonka-ai/gonka.git -b main
cp /root/gonka/deploy/join/config.env.template /root/gonka/deploy/join/config.env
cd /root/gonka/deploy/join
```
Step 9: Configure environment
```bash
# Create HuggingFace cache directory
mkdir -p /mnt/shared
```
Edit config.env:
```bash
nano config.env
```
Required fields:
- Key name
- Public URL of your node
- Account public key
- SSH ports
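A minimal config.env might look like the sketch below. The variable names are the ones referenced later in this guide; the values are placeholders, and your config.env.template may define additional fields:

```bash
# config.env (placeholder values — adapt to your deployment)
export KEY_NAME="gonka-ml-key"                  # name for the ML operational key (Step 12)
export KEYRING_PASSWORD="<strong-password>"     # protects the file keyring
export ACCOUNT_PUBKEY="<account-public-key>"    # from the Account Key created in Step 2
export DAPI_API__PUBLIC_URL="http://<PUBLIC_IP>:8000"              # public URL of your node
export DAPI_CHAIN_NODE__SEED_API_URL="http://node2.gonka.ai:8000"  # seed node API
export HF_HOME="/mnt/shared"                    # HuggingFace cache directory (Step 10)
```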
Load the configuration:
```bash
source config.env
```
Configure node-config.json:
- Define MLNodes and inference ports
- Specify models to load
- Set concurrent request limits
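For illustration only, a node-config.json covering those three items might be shaped like this. The field names here are assumptions, not the actual schema — use the template shipped in the repository as your source of truth:

```json
{
  "mlnodes": [
    {
      "inference_port": 8080,
      "models": ["Qwen/Qwen2.5-7B-Instruct"],
      "max_concurrent_requests": 16
    }
  ]
}
```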
Step 10: Download model weights
```bash
# Set up the HuggingFace cache
mkdir -p $HF_HOME
sudo apt update && sudo apt install -y python3-pip pipx
pipx install "huggingface_hub[cli]"
pipx ensurepath
export PATH="$HOME/.local/bin:$PATH"
# Download the model
hf download Qwen/Qwen2.5-7B-Instruct
```
Step 11: Pull containers
```bash
# Pull all images
docker compose -f docker-compose.yml -f docker-compose.mlnode.yml pull
# Start chain components
source config.env && docker compose up tmkms node -d --no-deps
# Check logs
docker compose logs tmkms node -f
```
Step 12: Create ML operational key
Enter the API container:
```bash
docker compose run --rm --no-deps -it api /bin/sh
```
Create the warm key:
```bash
printf '%s\n%s\n' "$KEYRING_PASSWORD" "$KEYRING_PASSWORD" | inferenced keys add "$KEY_NAME" --keyring-backend file
```
Save the mnemonic, then exit the container:
```bash
exit
```
Step 13: Register host
Re-enter the API container:
```bash
docker compose run --rm --no-deps -it api /bin/sh
```
Register the participant:
```bash
inferenced register-new-participant \
  $DAPI_API__PUBLIC_URL \
  $ACCOUNT_PUBKEY \
  --node-address $DAPI_CHAIN_NODE__SEED_API_URL
```
Exit:
```bash
exit
```
Step 14: Grant permissions (switch to local machine)
```bash
./inferenced tx inference grant-ml-ops-permissions \
  gonka-account-key \
  <ml-operational-key-address-from-step-12> \
  --from gonka-account-key \
  --keyring-backend file \
  --gas 2000000 \
  --node <seed_api_url>/chain-rpc/
```
This grants the ML Operational Key permission to submit inference proofs.
Step 15: Launch node (switch back to server)
```bash
source config.env && \
docker compose -f docker-compose.yml -f docker-compose.mlnode.yml up -d
```
All services start: chain node, API node, MLNodes.
Verification
Check participant registration
```bash
curl http://node2.gonka.ai:8000/v1/participants/<your-account-address>
```
The response displays your public key in JSON.
Check current epoch
After Proof of Compute completes (every 24 hours):
```bash
curl http://node2.gonka.ai:8000/v1/epochs/current/participants
```
Monitor dashboard
Open the dashboard in a browser:
```
http://node2.gonka.ai:8000/dashboard/gonka/validator
```
Track the next Proof of Compute session timing.
Check node status
Using public IP:
```bash
curl http://<PUBLIC_IP>:<PUBLIC_RPC_PORT>/status
```
Using the private interface (on the server):
```bash
curl http://0.0.0.0:26657/status
```
Using the genesis node:
```bash
curl http://node2.gonka.ai:26657/status
```
Proof of Compute
Simulation: Test PoC on MLNode before the actual PoC phase begins.
Timing:
- Runs every 24 hours
- Check the dashboard for the next session
- Stop the server between sessions and restart before PoC
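If you stop the stack between sessions, the restart can be scheduled rather than done by hand. A hypothetical crontab sketch — the time and path are placeholders; read your actual session time from the dashboard:

```bash
# Hypothetical crontab entry (placeholder time/path; check the dashboard for your PoC schedule).
# Bring the full stack up 30 minutes before the daily PoC session:
30 13 * * * cd /root/gonka/deploy/join && . ./config.env && docker compose -f docker-compose.yml -f docker-compose.mlnode.yml up -d
```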
Troubleshooting
Issue: Container won't start
Symptoms: Container exits immediately or fails to start.
Diagnosis:
```bash
docker ps -a
docker compose logs
```
Resolution: Verify configuration and reload:
```bash
source config.env
env | grep DAPI
```
Issue: GPU not accessible
Symptoms: NVIDIA toolkit not found or GPU not visible in container.
Resolution:
```bash
nvidia-ctk --version
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```
Issue: Permission grant failed
Symptoms: Transaction rejected or timeout.
Resolution:
- Verify the Account Key is correct.
- Check network connectivity to the seed node.
- Ensure sufficient gas.
- Verify the ML Operational Key address.
Issue: PoC failures
Symptoms: Proof of Compute does not complete.
Resolution:
- Verify all MLNodes have sufficient VRAM.
- Confirm model weights downloaded correctly.
- Review MLNode logs:
```bash
docker compose logs mlnode
```
Managing your node
Update profile: Update host name, website, and avatar on the dashboard to help the network identify your node.
Monitor performance:
- Check PoC completion status.
- View earned rewards.
- Monitor GPU usage:
```bash
nvidia-smi -l 1
```
Restart node:
```bash
docker compose down
source config.env && \
docker compose -f docker-compose.yml -f docker-compose.mlnode.yml up -d
```
What's next
- Gonka GitHub
- Gonka Dashboard
- Getting Started: Spheron deployment basics
- SSH Connection: SSH setup guide