
Documentation

Technical reference for developers and contributors

Overview

Ailo Network is a decentralized AI training platform that enables distributed training of Large Language Models (LLMs) directly in web browsers. Instead of relying on centralized datacenters, Ailo harnesses the collective computing power of thousands of users to train and improve an AI model collaboratively.

Key Features

  • Browser-Based: No installation required. Everything runs in JavaScript/WebGL.
  • Federated Learning: Training happens locally; only gradients are shared.
  • Proof-of-Contribution: Earn AiloCoin by contributing to AI training.
  • Privacy-Preserving: Raw data never leaves your device.

Quick Start

To start contributing to Ailo Network:

  1. Open Dashboard in your browser
  2. Create an account or log in
  3. Wait for the model to download (cached locally)
  4. Click "Start Mining" to begin contributing

Integrated Experience

Mining is now integrated directly into the Dashboard. You can monitor your earnings, track network stats, and control mining all from one place.

System Requirements

Component   Minimum                   Recommended
Browser     Chrome 90+, Firefox 88+   Latest Chrome/Edge
RAM         8 GB                      16 GB+
Storage     4 GB free                 10 GB free
GPU         WebGL 2.0 support         Dedicated GPU

Neural Engine

The Neural Engine is the core component that handles model inference and training. It's a pure JavaScript implementation of a decoder-only Transformer architecture.

Model Architecture (AILO-1B)

Parameters         ~840M (target 1.2B)
Layers             24
Embedding Dim      1600
Hidden Dim (FFN)   6400
Attention Heads    25
Vocabulary         50,257 tokens (GPT-2)
Context Length     2048 tokens
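
The same architecture expressed as a configuration object. This is a sketch only; the exact field names accepted by NeuralEngine are assumptions for illustration:

// Illustrative AILO-1B configuration (field names assumed, not the exact NeuralEngine schema)
const ailoConfig = {
  nLayers: 24,         // transformer decoder blocks
  dModel: 1600,        // embedding dimension
  dFFN: 6400,          // feed-forward hidden dimension (4 × dModel)
  nHeads: 25,          // attention heads (head dim = 1600 / 25 = 64)
  vocabSize: 50257,    // GPT-2 BPE vocabulary
  contextLength: 2048  // maximum sequence length in tokens
};

const engine = new NeuralEngine(ailoConfig);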

Core Methods

// Initialize the neural engine
const engine = new NeuralEngine(config);

// Forward pass (inference)
const logits = await engine.forward(tokenIds);

// Training step
const loss = await engine.train(textSamples, epochs);

// Generate text
const response = await engine.generate(prompt, maxLength);

Federated Learning

Ailo uses Federated Averaging (FedAvg) to train the model across distributed clients without sharing raw data.

Training Flow

  1. Download: Client downloads latest global model
  2. Local Training: Client trains on local data (Wikipedia, StackOverflow)
  3. Gradient Computation: Calculate weight updates via backpropagation
  4. Submit: Send gradients to coordinator server
  5. Aggregation: Server averages gradients from multiple clients
  6. Update: Global model is updated; new version published
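
A minimal sketch of the aggregation step (step 5), assuming each client submits its gradient as a flat Float32Array of equal length; the coordinator's actual implementation may differ:

// Federated Averaging (FedAvg): element-wise mean of client gradients
function aggregateGradients(clientGradients) {
  const length = clientGradients[0].length;
  const averaged = new Float32Array(length);
  for (const grad of clientGradients) {
    for (let i = 0; i < length; i++) {
      averaged[i] += grad[i] / clientGradients.length;
    }
  }
  return averaged; // applied to the global model, then a new version is published
}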

Privacy

Only model gradients are transmitted, never raw training data. The gradients contain no personally identifiable information.

Weight Sharding

To handle the 3.2 GB model, weights are split into 35 binary chunks (~90 MB each). This enables:

  • Parallel downloading
  • Incremental caching in IndexedDB
  • Memory-efficient loading via VirtualTensors
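
A sketch of how chunks might be fetched in parallel and cached, using the chunk endpoint from the Model API below; the database and store names here are illustrative, not Ailo's actual ones:

// Open (or create) an IndexedDB store for weight chunks (names illustrative)
function openChunkStore() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('ailo-weights', 1);
    req.onupgradeneeded = () => req.result.createObjectStore('chunks');
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Fetch all chunks in parallel, serving from the local cache when possible
async function downloadChunks(chunkNames) {
  const db = await openChunkStore();
  return Promise.all(chunkNames.map(async (name) => {
    const cached = await new Promise((resolve) => {
      const get = db.transaction('chunks').objectStore('chunks').get(name);
      get.onsuccess = () => resolve(get.result);
      get.onerror = () => resolve(undefined);
    });
    if (cached) return cached;                      // incremental caching: reuse local copy
    const res = await fetch(`/api/model/chunk/${name}`);
    const buffer = await res.arrayBuffer();         // ~90 MB binary chunk
    db.transaction('chunks', 'readwrite').objectStore('chunks').put(buffer, name);
    return buffer;
  }));
}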

VirtualTensor System

Instead of loading all weights into memory at once, Ailo uses VirtualTensors that reference chunks on-demand:

// V2 VirtualTensor structure
{
  isVirtual: true,
  data: Float32Array,   // Direct reference to weight data
  rows: 1600,
  cols: 6400,
  length: 10240000
}

Status API

GET /api/status

Returns current network status and statistics.

Response

{ "activeNodes": 12, "globalHashrate": "3.8 Tok/s", "globalParams": 840652800, "targetParams": 1200000000, "globalModelVersion": 2000, "isNetworkActive": true }

Model API

GET /api/model

Returns the latest model manifest with chunk list.

GET /api/model/chunk/:name

Downloads a specific weight chunk (binary).

Manifest Structure

{ "type": "sharded", "version": 2000, "manifest": { "config": { ... }, "chunks": ["chunk_0.bin", "chunk_1.bin", ...], "tensorMap": { ... } } }

Submit API

POST /api/submit_gradient

Submit training gradient for aggregation.

Request Body

Field       Type      Description
username    string    User identifier for reward
loss        number    Training loss value
gradient    object    Gradient tensor data
batchSize   number    Number of samples trained
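
A sketch of the corresponding request from a client (values illustrative):

// Submit a training gradient for aggregation
async function submitGradientRequest(username, loss, gradient, batchSize) {
  const res = await fetch('/api/submit_gradient', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ username, loss, gradient, batchSize })
  });
  return res.json();
}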

Mining Process

Mining in Ailo Network means contributing compute power to train the AI model.

Mining Loop

  1. Fetch random text batch from dataset (Wikipedia/StackOverflow)
  2. Tokenize text using BPE tokenizer
  3. Run forward pass through transformer layers
  4. Calculate cross-entropy loss
  5. Backpropagate to compute gradients
  6. Submit gradient to server
  7. Receive AiloCoin reward
  8. Repeat

// Simplified mining loop
while (isMining) {
  const batch = await datasetManager.getRandomBatch();
  const { loss } = await neuralEngine.train([batch], 1);
  await submitGradient(loss);
  updateUI(loss);
}

CUDA GPU Client

For high-performance mining, use the native Python client for NVIDIA GPUs. This client offers direct hardware access, significantly faster training speeds (10x-50x), and stable background operation.

AILO-1B Model

The CUDA client trains the AILO-1B model with the following architecture:

  • 841 Million Parameters
  • 24 Transformer Layers × 1600d Embedding
  • 50,257 Token Vocabulary (GPT-2)
  • Mixed Precision (FP16) Training

Hardware Disclaimer

IMPORTANT: GPU mining puts significant load on your hardware. By using this software, you acknowledge and accept that:

  • Ailo Network is NOT responsible for any hardware damage, overheating, or wear
  • You should monitor temperatures and ensure adequate cooling
  • Use the power slider (20-90%) to control GPU usage
  • Mining may void some manufacturer warranties

💡 Tip: Start with 70% power level and monitor your GPU temperature. Keep it below 80°C for longevity.

Installation

1. Requirements:

  • NVIDIA GPU with 8GB+ VRAM (RTX 3060 or better)
  • Python 3.10+
  • CUDA Toolkit 11.8 or 12.x

2. Setup:

# Clone and install
git clone https://github.com/xxrickyxx/ailo-network.git
cd ailo-network/cuda-client
pip install -r requirements.txt

# For RTX 50 Series (Nightly PyTorch):
pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu128

Usage

Run the miner with your wallet address:

# Console Mode
python ailo_miner.py --wallet 0xYourWalletAddress

# With GUI (Charts, Logs, Settings)
python gui_app.py

CLI Arguments

Argument    Description                                        Default
--wallet    Your Ailo wallet address (Required)                -
--power     Power limit (20-90%) to control heat/fan noise     70
--cpu       Force CPU usage (~10x slower)                      False
--no-gui    Run in headless console mode                       False

GUI Features

The graphical interface (gui_app.py) includes:

  • 📊 Dashboard: Real-time stats (hashrate, loss, epochs, rewards)
  • 📈 Charts: Loss and hashrate history graphs
  • 📋 Logs: Full mining log viewer with export
  • ⚙️ Settings: GPU selection, CPU toggle, power slider
  • ℹ️ About: Version info and links

Rewards

Contributors earn AiloCoin based on their training contribution quality.

Reward Calculation

  • 🚀 GPU Mining (CUDA): ~0.1 AILO per gradient submission
    • Lower loss = higher quality multiplier (up to 10x)
    • Requires NVIDIA GPU with 8GB+ VRAM
  • 💬 Inference Rewards: 0.05 AILO per distributed inference
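
A rough sketch of how the GPU reward might be computed from these figures; the 0.1 AILO base rate and 10x cap come from the list above, while the exact quality-multiplier formula is an assumption for illustration:

// Illustrative reward estimate (the real server-side formula is not documented here)
function estimateGpuReward(loss) {
  const base = 0.1;                                      // AILO per gradient submission
  const quality = Math.min(10, Math.max(1, 10 / loss));  // hypothetical: lower loss => higher multiplier, capped at 10x
  return base * quality;
}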

Wallet Features

Your AiloCoin wallet supports the following operations:

  • Send ALC: Transfer coins to any wallet address via POST /api/ledger/transfer
  • Receive ALC: Generate QR code with your wallet address for easy receiving
  • Real-time Balance: Balance syncs from server database on page load
  • Transaction History: View all incoming and outgoing transactions
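
A sketch of a Send ALC request against the transfer endpoint listed above; the request body fields are assumptions, only the endpoint path is documented:

// Transfer ALC to another wallet (body field names assumed for illustration)
async function sendAlc(fromUsername, toAddress, amount) {
  const res = await fetch('/api/ledger/transfer', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ from: fromUsername, to: toAddress, amount })
  });
  return res.json();
}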

Coming Soon: Staking and Swap features are currently in development.

Account Security

Ailo Network provides secure account recovery without email verification:

  • Recovery Key: Generate a unique 36-character key from Settings → Recovery Key
  • Download & Store: Download the key as a .txt file and store it safely offline
  • Account Recovery: Use the key to reset your password if forgotten
  • One-Time Use: Each recovery key can only be used once for security

Warning: The recovery key is the ONLY way to recover your account. Store it safely!

Inference Rewards

Earn AiloCoin by contributing to distributed AI inference:

  • Distributed Models: Select a distributed model (e.g., Qwen 1.8B) in Chat
  • Join the Network: Your browser joins a room with other nodes
  • Process Requests: When a user sends a message, contributing nodes process it together
  • Earn Rewards: Each node that processes a request earns 0.05 ALC

Rewards are credited automatically after each successful inference.