Nidum Architecture
The Nidum platform is designed with a robust, modular architecture that enables decentralized, secure, and efficient AI operations across all major devices and platforms. Below is a breakdown of the full system stack, from the application layer down to the hardware layer.
1. Apps & Supported Platforms
Nidum supports a wide range of platforms, ensuring seamless accessibility:
Mobile: iOS and Android
Web: All major browsers
Desktop: macOS, Windows, and Linux
2. User Layer
This is the interface layer responsible for the user experience.
Frontend: Built with modern web technologies such as React and Vue.js, with Electron for cross-platform desktop builds.
Backend: Powered by Node.js, Python, and Go, enabling efficient, scalable AI services. A minimal endpoint sketch follows.
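To make the layer concrete, here is a minimal, hypothetical sketch of a backend inference endpoint in Python (FastAPI); the /v1/generate route, payload shape, and echo behavior are illustrative assumptions, not Nidum's actual API.

```python
# Hypothetical backend endpoint sketch (FastAPI); the route and
# request shape are illustrative assumptions, not Nidum's real API.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PromptRequest(BaseModel):
    prompt: str

@app.post("/v1/generate")
def generate(req: PromptRequest):
    # A real service would dispatch to the ML layer described below;
    # the echo keeps this sketch self-contained and runnable.
    return {"completion": f"echo: {req.prompt}"}
```

Run with uvicorn (for example, "uvicorn app:app" if the file is named app.py) and POST a JSON body such as {"prompt": "hello"}.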
3. Network Layer
This layer manages peer-to-peer connectivity, protocol management, and security infrastructure.
Decentralized Infrastructure: Powered by Nidum Chain, the core blockchain protocol that underpins trustless AI compute sharing.
Protocols: Uses gRPC for fast, efficient communication between nodes (see the channel sketch after this list).
Optimization Technologies: Leverages TCP/UDP and DNS tunneling for resilient, low-latency communication.
Security & Encryption: Implements industry-standard TLS/SSL and Zero-Knowledge Proofs (ZKPs) for data privacy and transaction integrity.
4. Data Layer
The data layer handles storage, retrieval, and real-time data operations for RAG-based AI applications.
RAG Data (Vector Database): Utilizes Chroma DB for the fast, dense vector search essential to Retrieval-Augmented Generation (RAG); a retrieval sketch follows this list.
Shared Inference: MongoDB coordinates shared computation and model results across devices (see the second sketch after this list).
Local AI Storage: Redux manages persistent local state for offline-first AI interactions.
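For the RAG item above, here is a minimal retrieval sketch using the Chroma Python client; the collection name and documents are illustrative, not Nidum's actual schema.

```python
# Minimal RAG retrieval sketch with Chroma; collection name and
# documents are illustrative.
import chromadb

client = chromadb.Client()  # in-memory client, enough for a demo
collection = client.create_collection("nidum_docs")

# Chroma embeds these documents with its default embedding function.
collection.add(
    documents=["Nidum Chain underpins trustless compute sharing.",
               "Llama.cpp runs quantized models on local devices."],
    ids=["doc-1", "doc-2"],
)

# Dense vector search: the closest document to the query would be
# injected into the model's prompt -- the "retrieval" in RAG.
results = collection.query(
    query_texts=["How does Nidum run models locally?"],
    n_results=1,
)
print(results["documents"][0][0])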
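And for shared inference, a hypothetical sketch of result coordination through MongoDB; the database, collection, and document fields are assumptions, and a local MongoDB server is assumed to be running.

```python
# Hypothetical shared-inference coordination via MongoDB; database,
# collection, and document fields are illustrative assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumes a local server
results = client["nidum"]["shared_inference"]

# One node publishes a completed inference so peers can reuse it.
results.insert_one({
    "task_id": "task-123",
    "model": "example-model",
    "output": "generated text",
})

# Another device looks the result up instead of recomputing it.
doc = results.find_one({"task_id": "task-123"})
print(doc["output"] if doc else "cache miss")
```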
5. ML Layer
The machine learning layer defines how models are run, compressed, and deployed within the Nidum ecosystem.
Frameworks Supported (an interoperability sketch follows this list):
PyTorch
ONNX
MLX (Apple's machine learning framework)
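To illustrate how these frameworks interoperate, here is a short sketch that exports a toy PyTorch model to ONNX; the model and file name are illustrative, not part of Nidum itself.

```python
# Toy interoperability sketch: export a small PyTorch model to ONNX.
import torch
import torch.nn as nn

# A two-layer network standing in for a real model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

# Export with a dummy input; the resulting file can then be served
# by any ONNX-compatible runtime on the hardware layer below.
dummy = torch.randn(1, 8)
torch.onnx.export(model, dummy, "tiny_model.onnx")
```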
Model Integration / Quantization (sketches follow this list):
Hugging Face Transformers for pre-trained model integration.
Llama.cpp for running lightweight, quantized models on local devices.
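As a quick illustration of pre-trained model integration, here is a sketch that pulls a public checkpoint through Hugging Face Transformers; the model name (distilgpt2) is just a small public example, not a Nidum default.

```python
# Pull a small public checkpoint via Hugging Face Transformers;
# distilgpt2 is an illustrative choice, not a Nidum default.
from transformers import pipeline

# pipeline() downloads the checkpoint and wires up tokenizer + model.
generator = pipeline("text-generation", model="distilgpt2")
print(generator("Decentralized AI compute", max_new_tokens=20)[0]["generated_text"])
```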
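And for local quantized inference, a hedged sketch using the llama-cpp-python bindings; the GGUF model path is a placeholder that you would point at any quantized model file.

```python
# Local inference over a quantized GGUF model via llama-cpp-python;
# the model path is a placeholder assumption.
from llama_cpp import Llama

# Quantized GGUF weights keep the memory footprint small enough for
# consumer devices; n_ctx sets the context window.
llm = Llama(model_path="./models/model-q4_k_m.gguf", n_ctx=2048)

out = llm("Explain retrieval-augmented generation in one sentence.",
          max_tokens=64)
print(out["choices"][0]["text"])
```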
6. Hardware Layer
Nidum is hardware-agnostic and supports a broad range of compute devices, enabling users to contribute compute or run models locally.
Supported Chipsets:
Intel
AMD
Apple Silicon (M1/M2)
Qualcomm Snapdragon
NVIDIA GPUs
7. AI Options & Integrations
Nidum is extensible with support for leading AI model hubs and inference engines:
Nidum Decentralized – For tokenized, peer-to-peer compute contribution.
Nidum Shared – For private, invite-only AI compute networks.
Ollama – Lightweight local model execution (see the sketch after this list).
Hugging Face – Integration with HF Transformers for model hosting.
Groq – High-speed inference accelerators.
SambaNova – Enterprise-scale model serving.
OpenAI – Access to GPT and other OpenAI models.
Anthropic – Claude integration for aligned AI agents.
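As one concrete integration example, here is a minimal sketch against a locally running Ollama server using its Python client; the model name is illustrative, and the server must already have it pulled.

```python
# Minimal Ollama integration sketch; assumes a local Ollama server
# with the named model already pulled (model name is illustrative).
import ollama

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize Nidum in one line."}],
)
print(response["message"]["content"])
```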

Summary
The Nidum Architecture is built to scale across personal, shared, and decentralized AI environments—supporting real-time inference, offline access, secure data handling, and tokenized contribution models. Whether you're an individual developer, a community contributor, or an enterprise user, the architecture is designed to provide both flexibility and power in building next-gen AI applications.