
System Requirements

Hardware, software, and network requirements for running Guts nodes.

Hardware Requirements

Minimum (Development/Testing)

Suitable for local development, CI/CD pipelines, and testing environments.

| Component | Minimum | Notes |
|-----------|---------|-------|
| CPU | 2 cores | x86_64 or ARM64 |
| RAM | 4 GB | |
| Storage | 50 GB SSD | NVMe preferred |
| Network | 10 Mbps | Stable connection |

Recommended (Production)

Suitable for production full nodes serving API traffic and participating in P2P replication.

| Component | Recommended | Notes |
|-----------|-------------|-------|
| CPU | 8 cores | Dedicated, not shared/burstable |
| RAM | 32 GB | ECC preferred for data integrity |
| Storage | 500 GB NVMe | RAID-1 for reliability |
| Network | 1 Gbps | Low latency preferred (< 50 ms to peers) |

Validator Requirements

Validators participate in consensus and must meet higher requirements for network reliability.

| Component | Required | Notes |
|-----------|----------|-------|
| CPU | 16+ cores | High single-thread performance (3.5 GHz+) |
| RAM | 64 GB | ECC required |
| Storage | 2 TB NVMe | RAID-1 required, 3+ DWPD endurance |
| Network | 1 Gbps symmetric | 99.9% uptime required |
| UPS | Yes | Graceful shutdown capability |
| Redundancy | Recommended | Redundant power, network paths |

Storage Sizing Guide

Storage requirements depend on the number and size of repositories:

| Workload | Repositories | Avg Size | Storage Needed |
|----------|--------------|----------|----------------|
| Small | < 100 | 50 MB | 50 GB |
| Medium | 100–1,000 | 100 MB | 200 GB |
| Large | 1,000–10,000 | 200 MB | 1 TB |
| Enterprise | 10,000+ | 500 MB | 5+ TB |

Formula: Storage = (Repo Count × Avg Size × 1.5) + 50 GB overhead

The 1.5 multiplier accounts for:

  • Git pack files and loose objects
  • Collaboration data (PRs, issues, comments)
  • Consensus state and logs
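For a quick sanity check, the formula can be evaluated directly in the shell (integer arithmetic, with the 1.5 multiplier applied as ×3/2; the example numbers match the Medium row above):

```shell
# Estimate storage: (repo count x avg size x 1.5) + 50 GB overhead
repo_count=1000
avg_size_mb=100

# Apply the 1.5x multiplier in integer math, convert MB to GB, add overhead
storage_gb=$(( repo_count * avg_size_mb * 3 / 2 / 1024 + 50 ))
echo "Estimated storage: ${storage_gb} GB"
```

With 1,000 repositories averaging 100 MB this prints roughly 196 GB, in line with the 200 GB figure in the sizing table.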

Software Requirements

Operating System

| OS | Version | Status |
|----|---------|--------|
| Ubuntu | 22.04 LTS, 24.04 LTS | ✅ Recommended |
| Debian | 12 (Bookworm) | ✅ Supported |
| RHEL/Rocky/Alma | 9.x | ✅ Supported |
| Amazon Linux | 2023 | ✅ Supported |
| macOS | 13+ (Ventura) | ⚠️ Development only |
| Windows | WSL2 | ⚠️ Development only |

Kernel Requirements:

  • Linux kernel 5.10+ (for io_uring support)
  • CONFIG_CGROUPS enabled (for container deployments)
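A portable way to confirm the running kernel meets the 5.10 floor is to compare the major and minor components of `uname -r` (the `kernel_ok` helper name is illustrative, not part of any Guts tooling):

```shell
# Return success if a kernel version string is >= 5.10 (io_uring baseline)
kernel_ok() {
  major=${1%%.*}          # text before the first dot
  rest=${1#*.}            # text after the first dot
  minor=${rest%%.*}       # second component
  [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 10 ]; }
}

if kernel_ok "$(uname -r)"; then
  echo "kernel $(uname -r): io_uring available"
else
  echo "kernel $(uname -r): too old, need 5.10+"
fi
```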

Container Runtime

| Runtime | Version | Notes |
|---------|---------|-------|
| Docker | 24.0+ | Recommended for single-node |
| containerd | 1.7+ | Kubernetes default |
| Podman | 4.0+ | Alternative to Docker |

Kubernetes

| Component | Version | Notes |
|-----------|---------|-------|
| Kubernetes | 1.28+ | Any conformant distribution |
| Helm | 3.12+ | For Helm chart deployment |
| kubectl | 1.28+ | Match cluster version |

Dependencies (Bare Metal)

For bare metal installations, these system libraries are required:

```bash
# Ubuntu/Debian
apt-get install -y \
  libssl3 \
  ca-certificates \
  curl \
  jq

# RHEL/Rocky/Alma
dnf install -y \
  openssl-libs \
  ca-certificates \
  curl \
  jq
```
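After installation, a short loop can confirm the CLI tools are on `PATH` (the shared libraries are resolved by the package manager itself, so only the commands are checked here):

```shell
# Report any required tools missing from PATH
missing=""
for cmd in curl jq; do
  command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
done

if [ -n "$missing" ]; then
  echo "missing:$missing"
else
  echo "all required tools present"
fi
```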

Network Requirements

Port Requirements

| Port | Protocol | Direction | Purpose | Required |
|------|----------|-----------|---------|----------|
| 8080 | TCP | Inbound | HTTP API | Yes |
| 9000 | TCP | Inbound | P2P (TCP) | Yes |
| 9000 | UDP | Inbound | P2P (QUIC) | Yes |
| 9090 | TCP | Inbound | Metrics (Prometheus) | Recommended |
| 443 | TCP | Inbound | HTTPS (via proxy) | Production |

Firewall Configuration

UFW (Ubuntu/Debian)

```bash
# Allow API access
sudo ufw allow 8080/tcp comment "Guts HTTP API"

# Allow P2P
sudo ufw allow 9000/tcp comment "Guts P2P TCP"
sudo ufw allow 9000/udp comment "Guts P2P QUIC"

# Metrics (internal only)
sudo ufw allow from 10.0.0.0/8 to any port 9090 comment "Guts Metrics"

# Apply
sudo ufw enable
```

firewalld (RHEL/Rocky)

```bash
# Create service definition
sudo firewall-cmd --permanent --new-service=guts-node
sudo firewall-cmd --permanent --service=guts-node --add-port=8080/tcp
sudo firewall-cmd --permanent --service=guts-node --add-port=9000/tcp
sudo firewall-cmd --permanent --service=guts-node --add-port=9000/udp

# Enable service
sudo firewall-cmd --permanent --add-service=guts-node
sudo firewall-cmd --reload
```

iptables

```bash
# API
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT

# P2P
iptables -A INPUT -p tcp --dport 9000 -j ACCEPT
iptables -A INPUT -p udp --dport 9000 -j ACCEPT

# Metrics (internal only)
iptables -A INPUT -p tcp --dport 9090 -s 10.0.0.0/8 -j ACCEPT
```

AWS Security Group

```hcl
resource "aws_security_group" "guts_node" {
  name        = "guts-node"
  description = "Security group for Guts node"

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "HTTP API"
  }

  ingress {
    from_port   = 9000
    to_port     = 9000
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "P2P TCP"
  }

  ingress {
    from_port   = 9000
    to_port     = 9000
    protocol    = "udp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "P2P QUIC"
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

Network Latency Requirements

| Node Type | Max Latency to Peers | Notes |
|-----------|----------------------|-------|
| Full Node | < 500 ms | Higher latency affects sync |
| Validator | < 100 ms | Critical for consensus |

Bandwidth Requirements

| Activity | Bandwidth | Notes |
|----------|-----------|-------|
| Idle | < 1 Mbps | Heartbeats, peer discovery |
| Light Usage | 10–50 Mbps | Normal operations |
| Heavy Sync | 100–500 Mbps | Initial sync, large repos |
| Peak (Validator) | 500+ Mbps | Block propagation |

Cloud Instance Recommendations

AWS

| Use Case | Instance Type | vCPUs | RAM | Storage |
|----------|---------------|-------|-----|---------|
| Development | t3.medium | 2 | 4 GB | 50 GB gp3 |
| Production | m6i.2xlarge | 8 | 32 GB | 500 GB gp3 |
| Validator | m6i.4xlarge | 16 | 64 GB | 2 TB io2 |

GCP

| Use Case | Machine Type | vCPUs | RAM | Storage |
|----------|--------------|-------|-----|---------|
| Development | e2-medium | 2 | 4 GB | 50 GB pd-ssd |
| Production | c2-standard-8 | 8 | 32 GB | 500 GB pd-ssd |
| Validator | c2-standard-16 | 16 | 64 GB | 2 TB pd-extreme |

Azure

| Use Case | VM Size | vCPUs | RAM | Storage |
|----------|---------|-------|-----|---------|
| Development | Standard_B2s | 2 | 4 GB | 50 GB Premium SSD |
| Production | Standard_D8s_v5 | 8 | 32 GB | 500 GB Premium SSD |
| Validator | Standard_D16s_v5 | 16 | 64 GB | 2 TB Ultra Disk |

Performance Tuning

System Limits

For production deployments, increase system limits:

```bash
# /etc/security/limits.d/guts.conf
guts soft nofile 65535
guts hard nofile 65535
guts soft nproc 32768
guts hard nproc 32768

# /etc/sysctl.d/99-guts.conf
# Network tuning
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.ip_local_port_range = 1024 65535

# Memory
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2

# File system
fs.file-max = 2097152
fs.inotify.max_user_watches = 524288

# Apply without reboot
sudo sysctl -p /etc/sysctl.d/99-guts.conf
```
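Once both files are in place, the new values can be spot-checked from a shell running as the service user (a quick sketch; the paths are standard Linux procfs entries):

```shell
# Spot-check the tuned values on Linux
echo "open files limit: $(ulimit -n)"    # 65535 once the limits.d entry applies
echo "somaxconn: $(cat /proc/sys/net/core/somaxconn 2>/dev/null)"
echo "swappiness: $(cat /proc/sys/vm/swappiness 2>/dev/null)"
```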

Storage Optimization

For NVMe storage with high write workloads:

```bash
# Disable access-time updates: add 'noatime' to the mount options in /etc/fstab

# Use the 'none' (no-op) I/O scheduler for NVMe; the device queues requests itself
echo "none" | sudo tee /sys/block/nvme0n1/queue/scheduler

# Increase read-ahead for sequential workloads
echo 256 | sudo tee /sys/block/nvme0n1/queue/read_ahead_kb
```
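An example `/etc/fstab` entry with `noatime` set — the UUID and mount point here are illustrative placeholders; match them to your actual data volume:

```
# /etc/fstab
UUID=<your-volume-uuid>  /var/lib/guts  ext4  defaults,noatime  0 2
```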

Monitoring Readiness

Before going to production, ensure you can monitor:

  • [ ] CPU, memory, disk, network metrics
  • [ ] Guts-specific metrics (via /metrics endpoint)
  • [ ] Log aggregation configured
  • [ ] Alerting rules defined

See Monitoring Guide for setup instructions.

Checklist

Development Environment

  • [ ] 2+ CPU cores available
  • [ ] 4+ GB RAM free
  • [ ] 50+ GB disk space
  • [ ] Docker or Kubernetes installed
  • [ ] Ports 8080, 9000 available

Production Environment

  • [ ] Meets recommended hardware specifications
  • [ ] Operating system updated and hardened
  • [ ] Firewall configured correctly
  • [ ] Network latency to peers acceptable
  • [ ] Monitoring infrastructure ready
  • [ ] Backup strategy defined
  • [ ] On-call procedures documented

Validator Environment

  • [ ] Meets validator hardware requirements
  • [ ] 99.9%+ network uptime achievable
  • [ ] UPS installed and tested
  • [ ] Secure key management in place
  • [ ] 24/7 monitoring configured
  • [ ] Incident response team identified

Released under the MIT License.