Gonka AI Node

Deploy a Gonka AI node on a Spheron GPU instance. Gonka is a decentralized AI compute network that uses Proof of Work 2.0, directing GPU compute toward real AI training and inference workloads. Operators earn rewards for providing verifiable compute.

Overview

Gonka transforms GPU compute into useful AI work through Proof of Work 2.0, where computational power advances real AI models instead of solving arbitrary puzzles.

Key features:
  • Real AI workloads (not wasteful mining)
  • Honest-majority validation
  • Reputation-based trust system
  • Open, censorship-free LLM inference and training

Hardware requirements

Minimum per MLNode:
  • VRAM: 40GB+ usable
  • MLNodes: 2-5 per Network Node recommended

Large models (DeepSeek R1, Qwen3-235B):

  • 2+ MLNodes, each with 8x H200 GPUs
  • 640GB+ VRAM per MLNode

Medium models (Qwen3-32B, Gemma-3-27B):

  • 2+ MLNodes, each with 4x A100 or 2x H100
  • 80GB+ VRAM per MLNode

Network Node server:
  • CPU: 16-core
  • RAM: 64GB+
  • Storage: 1TB NVMe SSD
  • Network: Stable high-speed connection

MLNode server:
  • RAM: 1.5x GPU VRAM
  • CPU: 16-core
  • NVIDIA Container Toolkit with CUDA 12.6-12.9
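The 1.5x RAM rule above is easy to sanity-check with a quick calculation; the VRAM figure below is only an example, substitute your own total:

```shell
# Rule of thumb from the list above: system RAM should be ~1.5x total GPU VRAM.
# vram_gb is an example figure (e.g. 8x 80GB GPUs); use your node's real total.
vram_gb=640
ram_gb=$(( vram_gb * 3 / 2 ))
echo "With ${vram_gb}GB of VRAM, provision at least ${ram_gb}GB of RAM"
```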

Key management overview

Gonka uses a three-key system:

  • Account Key (Cold): Created locally, high-privilege, store offline
  • Consensus Key (TMKMS): Managed by secure service for block validation
  • ML Operational Key (Warm): Created on server for automated transactions

Read the Gonka Key Management Guide before production deployment.

Prerequisites

  • Spheron AI account (sign up)
  • Payment method configured
  • SSH key (see SSH connection guide)
  • Local secure machine for Account Key generation
  • HuggingFace account and token

Part A: Local machine setup

Step 1: Install CLI tool

Download the inferenced binary from Gonka releases:

chmod +x inferenced
./inferenced --help

On macOS, allow execution in System Settings → Privacy & Security if prompted.

Step 2: Create Account Key

./inferenced keys add gonka-account-key --keyring-backend file

Save the mnemonic phrase securely offline. This is your only recovery method.

Part B: Deploy GPU on Spheron

Step 3: Sign up and add credits

  1. Go to app.spheron.ai and sign up.
  2. Click Credits → Add funds (card or crypto).

Step 4: Deploy instance

  1. Click Deploy in the sidebar.
  2. Select GPU: A100 (80GB) or H100 (80GB); at least 40GB of usable VRAM is required.
  3. Region: Closest to you.
  4. OS: Ubuntu 22.04 LTS + CUDA 12.8.
  5. Select your SSH key.
  6. Click Deploy Instance.

Part C: Server setup

Step 5: Connect to instance

ssh root@<your-instance-ip>

Step 6: Install dependencies

sudo apt update && sudo apt upgrade -y
sudo apt install -y git docker.io docker-compose

Step 7: Install NVIDIA container toolkit

sudo apt install nvidia-container-toolkit -y
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

Verify GPU access:

docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

Step 8: Clone Gonka repository

git clone https://github.com/gonka-ai/gonka.git -b main
cd /root/gonka/deploy/join
cp config.env.template config.env

Step 9: Configure environment

# Create HuggingFace cache directory
mkdir -p /mnt/shared

Edit config.env:

nano config.env

Required fields:

  • Key name
  • Public URL of your node
  • Account public key
  • SSH ports
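A minimal config.env sketch with placeholder values follows. The variable names below are the ones referenced later in this guide; the authoritative list is in config.env.template, so treat this as illustrative only:

```
# Illustrative values only - copy config.env.template and fill in your own.
export KEY_NAME="gonka-ml-key"                        # warm key name (Step 12)
export KEYRING_PASSWORD="<strong-password>"           # protects the file keyring
export ACCOUNT_PUBKEY="<account-public-key>"          # from Step 2, local machine
export DAPI_API__PUBLIC_URL="http://<your-ip>:8000"   # public URL of your node
export DAPI_CHAIN_NODE__SEED_API_URL="http://node2.gonka.ai:8000"
export HF_HOME="/mnt/shared"                          # HuggingFace cache (Step 9)
```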

Load the configuration:

source config.env

Configure node-config.json:

  • Define MLNodes and inference ports
  • Specify models to load
  • Set concurrent request limits
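As a rough sketch, a node-config.json covering those three items might look like the following. The exact schema and field names come from the Gonka repository; everything here is an assumed shape, not the real format:

```json
{
  "mlnodes": [
    {
      "id": "mlnode-1",
      "inference_port": 8080,
      "models": ["Qwen/Qwen2.5-7B-Instruct"],
      "max_concurrent_requests": 4
    }
  ]
}
```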

Step 10: Download model weights

# Setup HuggingFace cache
mkdir -p $HF_HOME
sudo apt update && sudo apt install -y python3-pip pipx
pipx install "huggingface_hub[cli]"
pipx ensurepath
export PATH="$HOME/.local/bin:$PATH"
 
# Download model
hf download Qwen/Qwen2.5-7B-Instruct

Step 11: Pull containers

# Pull all images
docker compose -f docker-compose.yml -f docker-compose.mlnode.yml pull
 
# Start chain components
source config.env && docker compose up -d --no-deps tmkms node
 
# Check logs
docker compose logs tmkms node -f

Step 12: Create ML operational key

Enter the API container:

docker compose run --rm --no-deps -it api /bin/sh

Create the warm key:

printf '%s\n%s\n' "$KEYRING_PASSWORD" "$KEYRING_PASSWORD" | inferenced keys add "$KEY_NAME" --keyring-backend file

Save the mnemonic, then exit the container:

exit

Step 13: Register host

Re-enter the API container:

docker compose run --rm --no-deps -it api /bin/sh

Register the participant:

inferenced register-new-participant \
    $DAPI_API__PUBLIC_URL \
    $ACCOUNT_PUBKEY \
    --node-address $DAPI_CHAIN_NODE__SEED_API_URL

Exit:

exit

Step 14: Grant permissions (switch to local machine)

./inferenced tx inference grant-ml-ops-permissions \
    gonka-account-key \
    <ml-operational-key-address-from-step-12> \
    --from gonka-account-key \
    --keyring-backend file \
    --gas 2000000 \
    --node <seed_api_url>/chain-rpc/

This grants the ML Operational Key permission to submit inference proofs.

Step 15: Launch node (switch back to server)

source config.env && \
docker compose -f docker-compose.yml -f docker-compose.mlnode.yml up -d

All services start: chain node, API node, MLNodes.

Verification

Check participant registration

http://node2.gonka.ai:8000/v1/participants/<your-account-address>

The response displays your public key in JSON.
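If you want to pull the public key out of that JSON from the command line, something like the following works. The response shape and the "pubkey" field name here are stand-ins; inspect your actual response first:

```shell
# sample_response stands in for the real participant JSON; the "pubkey" field
# name is an assumption - check the actual response from your node.
sample_response='{"index":"gonka1xyz","pubkey":"AkQz0example"}'
pubkey=$(printf '%s' "$sample_response" | sed -n 's/.*"pubkey":"\([^"]*\)".*/\1/p')
echo "pubkey: $pubkey"
```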

Check current epoch

After Proof of Compute completes (every 24 hours):

http://node2.gonka.ai:8000/v1/epochs/current/participants

Monitor dashboard

http://node2.gonka.ai:8000/dashboard/gonka/validator

Track the next Proof of Compute session timing.

Check node status

Using public IP:

curl http://<PUBLIC_IP>:<PUBLIC_RPC_PORT>/status

Using the private endpoint (on the server):

curl http://localhost:26657/status

Using genesis node:

curl http://node2.gonka.ai:26657/status

Proof of Compute

Simulation: Test PoC on an MLNode before the actual PoC phase begins.

Timing:
  • Runs every 24 hours
  • Check the dashboard for the next session
  • Stop the server between sessions and restart before PoC
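Since the stack can be stopped between sessions, a crontab entry can bring it back up ahead of the window. The schedule below is purely illustrative (an assumed 04:00 UTC session with a 30-minute margin); check the dashboard for your actual PoC time:

```
# Illustrative crontab entry: restart the full stack daily at 03:30 UTC,
# ahead of an assumed 04:00 PoC session. Adjust to your real window.
30 3 * * * cd /root/gonka/deploy/join && . ./config.env && docker compose -f docker-compose.yml -f docker-compose.mlnode.yml up -d
```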

Troubleshooting

Issue: Container won't start

Symptoms: Container exits immediately or fails to start.

Diagnosis:

docker ps -a
docker compose logs

Resolution: Verify configuration and reload:

source config.env
env | grep DAPI

Issue: GPU not accessible

Symptoms: NVIDIA toolkit not found or GPU not visible in container.

Resolution:

nvidia-ctk --version
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

Issue: Permission grant failed

Symptoms: Transaction rejected or timeout.

Resolution:

  • Verify the Account Key is correct.
  • Check network connectivity to the seed node.
  • Ensure sufficient gas.
  • Verify the ML Operational Key address.

Issue: PoC failures

Symptoms: Proof of Compute does not complete.

Resolution:

  • Verify all MLNodes have sufficient VRAM.
  • Confirm model weights downloaded correctly.
  • Review MLNode logs: docker compose logs mlnode

Managing your node

Update profile: Update host name, website, and avatar on the dashboard to help the network identify your node.

Monitor performance:
  • Check PoC completion status.
  • View earned rewards.
  • Monitor GPU usage: nvidia-smi -l 1

Stop node:

docker compose down

Restart node:

source config.env && \
docker compose -f docker-compose.yml -f docker-compose.mlnode.yml up -d

What's next