PuppyGraph Helm Chart Installation Guide

This guide walks you through installing PuppyGraph using Helm Charts, covering step-by-step instructions for different environments.


Install Kubernetes

Before deploying PuppyGraph, you need a running Kubernetes cluster. You can use either a local setup for testing or a managed Kubernetes service in the cloud.

Read me first

The options below are implementation examples.
Pick one local setup (Minikube / Docker Desktop / Kind) or one managed service (EKS / GKE / AKS) that fits your environment.
Regions, zones, machine types, and node counts shown here are examples. Adjust them to your needs.
Managed services may incur cloud costs.

Local Setup

Minikube (Linux/macOS/Windows)

Minikube runs a single-node Kubernetes cluster locally:

# macOS (Homebrew)
brew install minikube
minikube start

# Linux
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start

# Windows (Chocolatey)
choco install minikube
minikube start

Docker Desktop (Windows/macOS)

Docker Desktop includes an option to enable a Kubernetes cluster:

  • Open Settings > Kubernetes
  • Enable Kubernetes
  • Click Apply & Restart

Once restarted, kubectl will be configured automatically.
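
To confirm that kubectl is pointing at the Docker Desktop cluster:

# Should print "docker-desktop"
kubectl config current-context

# Switch to it explicitly if another cluster is active
kubectl config use-context docker-desktop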

Kind (Kubernetes in Docker, Linux/macOS/Windows)

Kind runs lightweight Kubernetes clusters inside Docker:

# Install kind (requires Go installed)
GO111MODULE="on" go install sigs.k8s.io/kind@v0.23.0

# Create a cluster
kind create cluster --name puppygraph-cluster
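
By default, kind creates a single-node cluster. For a multi-node test closer to production, you can pass a config file; a minimal sketch (the file name and node counts are examples):

# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker

kind create cluster --name puppygraph-cluster --config kind-config.yaml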

Managed Kubernetes Services

Amazon EKS (Linux/macOS/Windows)

Install AWS CLI and eksctl.

Sample scripts: create a new EKS cluster named puppygraph-cluster in region us-east-1 with 3 nodes:

eksctl create cluster --name puppygraph-cluster --region us-east-1 --nodes 3

This command will:

  • Provision the EKS control plane in AWS
  • Create 3 worker nodes and connect them to the cluster
  • Update your local kubeconfig so you can access the cluster using kubectl
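
eksctl can also read these settings from a cluster config file, which is easier to version-control. A minimal sketch; the node group name and instance type are example values sized to the chart's default 16 CPU / 64Gi requests:

# cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: puppygraph-cluster
  region: us-east-1
nodeGroups:
  - name: puppygraph-nodes
    instanceType: m5.4xlarge   # 16 vCPU / 64 GiB
    desiredCapacity: 3

eksctl create cluster -f cluster.yaml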

Google Kubernetes Engine (GKE)

Install Google Cloud SDK (includes gcloud).

Sample scripts: create a new GKE cluster named puppygraph-cluster with 3 nodes:

gcloud container clusters create puppygraph-cluster --zone us-east1-a --num-nodes 3
gcloud container clusters get-credentials puppygraph-cluster --zone us-east1-a

These commands will:

  • Provision the GKE control plane
  • Create 3 worker nodes
  • Update your local kubeconfig for kubectl access
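
Note that the default GKE machine type is smaller than the chart's default resource requests (16 CPU / 64Gi per pod). To size nodes accordingly, pass a machine type; the value below is only an example:

gcloud container clusters create puppygraph-cluster --zone us-east1-a --num-nodes 3 \
    --machine-type e2-standard-16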

Azure Kubernetes Service (AKS)

Install Azure CLI.

Sample scripts: create a new AKS cluster named puppygraph-cluster with 3 nodes:

az aks create -g MyResourceGroup -n puppygraph-cluster --node-count 3 --generate-ssh-keys
az aks get-credentials -g MyResourceGroup -n puppygraph-cluster

These commands will:

  • Provision the AKS control plane
  • Create 3 worker nodes
  • Configure your local kubeconfig for kubectl access
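
The -g resource group must already exist. If it does not, create one first (the group name and location are examples):

az group create --name MyResourceGroup --location eastus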

Verify Kubernetes Installation

Run the following commands to verify installation and connectivity:

# Check kubectl client installation
kubectl version --client
# Expected: prints client version info, e.g. GitVersion: v1.31.0

# Check connection to Kubernetes API server
kubectl cluster-info
# Expected: shows addresses of Kubernetes control plane and services

# Check if cluster nodes are ready
kubectl get nodes
# Expected: lists nodes with STATUS "Ready"

If all commands run successfully and show expected output, your Kubernetes installation is verified.


Install Helm

Once Kubernetes is ready, install Helm.

# macOS (Homebrew)
brew install helm

# Linux (via script)
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Windows (Chocolatey)
choco install kubernetes-helm

For more installation details, refer to the official guide: Installing Helm (https://helm.sh/docs/intro/install/)

Verify Helm installation:

Run:

helm version

If the command executes successfully and outputs version information such as version.BuildInfo, Helm is installed correctly.


Add the PuppyGraph Helm Repository

Add the PuppyGraph Helm repository to access the latest charts:

# Add PuppyGraph Helm repository
helm repo add puppygraph https://puppygraph.github.io/puppygraph-helm-chart/

# Update local Helm chart repository cache
helm repo update
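
To confirm the repository was added and the chart is visible:

helm search repo puppygraph
# Expected: lists the puppygraph/puppygraph chart and its latest version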

Custom Deployment Guide

Deploy PuppyGraph on any Kubernetes cluster by customizing configuration values.

Deploying a specific version?

If you want to deploy a specific version of PuppyGraph, please refer to the Customize Configuration section.

Generate Default Values

# Generate default values file
helm show values puppygraph/puppygraph > values.yaml

The key sections of the default values file are shown below:

image:
  repository: docker.io/puppygraph/puppygraph
  pullPolicy: Always
  tag: "stable"

# resource for leader nodes
leader:
  replicas: 1
  min_replicas: 1
  resources:
    requests:
      cpu: "16"
      memory: "64Gi"
      ephemeral-storage: "50Gi"
    limits:
      cpu: "16"
      memory: "64Gi"
      ephemeral-storage: "50Gi"

# resource for compute nodes
compute:
  replicas: 3
  resources:
    requests:
      cpu: "16"
      memory: "64Gi"
      ephemeral-storage: "50Gi"
    limits:
      cpu: "16"
      memory: "64Gi"
      ephemeral-storage: "50Gi"

# define data storage
# if provisioner is not provided, will use preset storage class by name
# other key value pairs are parameters compatible with the provisioner
storage:
  name: ""
  size: "200Gi"
  provisioner: ""
  type: ""

env:
  # applied to both leader and compute pods
  common:
    PRIORITY_IP_CIDR: ""
  # applied to leader pods only
  leader:
    CLUSTER_ID: "1000"
    CLUSTER_STARTUPTIMEOUT: "10m"

Customize Configuration

Edit values.yaml to match your environment. Key configurations include the following; a combined example appears after the list:

  • Image Version: To deploy a specific version of PuppyGraph, update the image.tag field in your values.yaml.

    Example for specifying PuppyGraph version

    image:
      repository: docker.io/puppygraph/puppygraph
      tag: "0.93"
      pullPolicy: Always
    
    To find the latest version, check the list of released PuppyGraph versions.

  • Network CIDR: Set env.common.PRIORITY_IP_CIDR to match your cluster node IP range.

    Check Node IP Range

    Before deployment, check your node internal IPs to configure the network correctly:

    kubectl get nodes -o wide
    

  • Storage Size: Adjust storage.size based on data requirements (default: 200Gi).

    Storage Auto-Detection

    If you don't specify storage.provisioner and storage.type, the chart will automatically use the default storage class configured in your Kubernetes cluster.

  • Resources: Adjust CPU and memory requests/limits based on cluster capacity.

  • Replicas: Set appropriate replica counts for leader and compute pods.
  • Environment Variables: Configure PuppyGraph-specific settings under env.*.
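
Putting these together, a trimmed values.yaml might look like the following. The tag, CIDR, and size shown are placeholders; substitute values for your environment:

image:
  repository: docker.io/puppygraph/puppygraph
  tag: "0.93"        # example released version
  pullPolicy: Always

env:
  common:
    PRIORITY_IP_CIDR: "10.0.0.0/16"   # placeholder; use your node IP range

storage:
  size: "500Gi"      # adjust to your data requirements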

Deploy with Custom Configuration

# Set environment variables
export CLUSTER_NAME=puppygraph-test
export NAMESPACE=pg-test-ns

# Deploy with custom configuration
helm upgrade --install $CLUSTER_NAME puppygraph/puppygraph \
  --namespace $NAMESPACE \
  --create-namespace \
  -f values.yaml
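
To confirm the release was created:

# Check release status
helm list -n $NAMESPACE
# Expected: shows the release with STATUS "deployed"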

Override Specific Values

You can also override specific values without editing the file by using --set flags; values set this way take precedence over entries in values.yaml:

helm upgrade --install $CLUSTER_NAME puppygraph/puppygraph \
    --namespace $NAMESPACE \
    -f values.yaml \
    --set leader.replicas=3 \
    --set compute.replicas=3

Verify Deployment

# Check pod status
kubectl get pods -n $NAMESPACE

# Check persistent volumes
kubectl get pvc -n $NAMESPACE

# Check services
kubectl get svc -n $NAMESPACE
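
Pods can take a few minutes to start. To block until all pods report Ready (the timeout value is an example):

kubectl wait pod --all -n $NAMESPACE --for=condition=Ready --timeout=600s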

Access PuppyGraph

After deployment, you can access PuppyGraph by port-forwarding the proxy service. Port 8081 serves the web UI; 8182 and 7687 serve Gremlin and Bolt queries respectively:

kubectl -n $NAMESPACE port-forward --address 0.0.0.0 svc/$CLUSTER_NAME-cluster-proxy 8081:8081 8182:8182 7687:7687

Then open the following URL in your browser: http://localhost:8081

Access Method

LoadBalancer services are supported on cloud platforms (AWS EKS, GCP GKE, Azure AKS). For local development with Docker Desktop, use the port-forward method.
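
If you are on a cloud cluster and prefer a LoadBalancer over port forwarding, one generic Kubernetes approach is to patch the proxy service. This is a sketch that assumes the service name used in the port-forward example above; check the chart's values for a first-class service-type option first:

kubectl -n $NAMESPACE patch svc $CLUSTER_NAME-cluster-proxy -p '{"spec":{"type":"LoadBalancer"}}'

# Wait for an EXTERNAL-IP to appear
kubectl get svc -n $NAMESPACE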


Uninstall and Clean Up

Data Loss Warning

These steps will permanently delete all PuppyGraph data.

# Uninstall Helm release
helm uninstall $CLUSTER_NAME --namespace $NAMESPACE

# Delete persistent volume claims
kubectl get pvc -n $NAMESPACE -o name | grep "data-${CLUSTER_NAME}-" | xargs kubectl delete -n $NAMESPACE
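
# Optionally remove the namespace itself if nothing else runs in it
kubectl delete namespace $NAMESPACE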