
Installing Infrahub

Hardware requirements

Please ensure the systems on which you want to install Infrahub meet the hardware requirements.

info

Allocating more CPU cores to the Neo4j database will only improve performance on Infrahub Enterprise, as it leverages parallel query execution.

Community

Infrahub uses a container-based architecture and can be deployed for testing or production use in a number of different ways:

Quick start via curl

To quickly spin up the latest Infrahub locally, you can retrieve the Docker Compose file from infrahub.opsmill.io. Using https://infrahub.opsmill.io or https://infrahub.opsmill.io/stable will retrieve the latest stable version.

Alternatively, you can specify a specific version or the develop branch in the URL:
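For example, you can pin a released version or track the develop branch as follows (the version number below is illustrative; pick a real tag from the release notes):

curl https://infrahub.opsmill.io/1.0.0 | docker compose -p infrahub -f - up -d
curl https://infrahub.opsmill.io/develop | docker compose -p infrahub -f - up -d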

warning

We do not recommend using the development version (develop branch) for production purposes or as an introduction to Infrahub. For a list of versions, check our release notes on GitHub.

To spin up an Infrahub environment

curl https://infrahub.opsmill.io | docker compose -p infrahub -f - up -d

To spin down and remove an Infrahub environment

curl https://infrahub.opsmill.io | docker compose -p infrahub -f - down -v
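To check that the environment is running, you can use standard Docker Compose commands:

docker compose -p infrahub ps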

From Git repository

Create the base directory for the Infrahub installation. For this guide, we'll use /opt/infrahub.

sudo mkdir -p /opt/infrahub/
cd /opt/infrahub/
warning

Depending on your system configuration, you might have to give other users write permissions to the /opt/infrahub directory.
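For example, you could transfer ownership of the directory to your own user (adapt this to your local security policy):

sudo chown -R $USER /opt/infrahub/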

Usage of the /opt/infrahub directory is merely a suggestion. You can use any directory on your system, especially for development or demo purposes.

mkdir -p ~/source/infrahub/
cd ~/source/infrahub/

Next, clone the Infrahub GitHub repository into the current directory.

git clone --recursive --depth 1 https://github.com/opsmill/infrahub.git
note

The command above utilizes a "shallow clone" to retrieve only the most recent commit. If you need to download the entire history, omit the --depth 1 argument.
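If you start from a shallow clone and later need the full history, Git can convert it in place:

git fetch --unshallow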

The git clone command should generate output similar to the following:

Cloning into '.'...
remote: Enumerating objects: 1312, done.
remote: Counting objects: 100% (1312/1312), done.
remote: Compressing objects: 100% (1150/1150), done.
remote: Total 1312 (delta 187), reused 691 (delta 104), pack-reused 0
Receiving objects: 100% (1312/1312), 33.37 MiB | 14.46 MiB/s, done.
Resolving deltas: 100% (187/187), done.

Docker Compose

The recommended way to run Infrahub is to use the Docker Compose files included with the project combined with the helper commands defined in invoke.

The prerequisites for this type of deployment are to have:

Invoke

On macOS, Python is installed by default, so you should be able to install invoke directly. Invoke works best when you install it in the main Python environment, but you can also install it in a virtual environment if you prefer. To install invoke and toml, run the following command:

pip3 install invoke toml
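If you prefer a virtual environment instead, a minimal setup looks like this:

python3 -m venv .venv
source .venv/bin/activate
pip install invoke toml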

Docker

To install Docker, follow the official instructions on the Docker website for your platform.

Once Docker Desktop and Invoke are installed, you can start and initialize Infrahub:

cd infrahub
invoke demo.start

Check the documentation of the demo environment (/topics/local-demo-environment) for more information.
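Invoke exposes additional helper tasks beyond demo.start; you can list everything that is available with:

invoke --list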

GitHub Codespaces

The Infrahub GitHub repository is designed to launch an instance via GitHub Codespaces. We have two devcontainer configurations:

  • infrahub: a deployment running without any Schema or data pre-installed
  • infrahub-demo: a deployment running the demo environment
note

The default devcontainer .devcontainer/devcontainer.json launches Infrahub with no schema or data. If you want to launch a deployment with the demo schema and data, you will need to choose the alternate Dev container configuration in the GitHub Codespaces creation options.

Infrahub devcontainer file: https://github.com/opsmill/infrahub/tree/stable/.devcontainer/devcontainer.json
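If you use the GitHub CLI, you can also create a codespace from the terminal; the devcontainer path below is the default configuration referenced above:

gh codespace create --repo opsmill/infrahub --devcontainer-path .devcontainer/devcontainer.json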

K8s with Helm charts

A first version of our Kubernetes (K8s) Helm chart is available in our repository.

Infrahub Helm Chart: https://github.com/opsmill/infrahub-helm/tree/stable/charts/infrahub
ArtifactHub: https://artifacthub.io/packages/helm/infrahub/infrahub

The following are required for production deployments using Helm:

  • data persistence for the database must be enabled
  • multiple replicas of the Infrahub API Server and Infrahub Task workers should be deployed: you can make use of the affinity variable to define the affinity policy for the pods
  • S3 storage should be configured for the Infrahub API Server; it is required if you run multiple replicas
warning

We do not recommend using the included dependencies (Neo4j, RabbitMQ, Redis) for production. They are present to ease deployment on non-production environments.

You can use the following values example:

infrahubServer:
  replicas: 3
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: service
                operator: In
                values:
                  - infrahub-server
          topologyKey: topology.kubernetes.io/zone
  persistence:
    enabled: false
  ingress:
    enabled: true
  infrahubServer:
    env:
      INFRAHUB_ALLOW_ANONYMOUS_ACCESS: "true"
      INFRAHUB_CACHE_PORT: 6379
      INFRAHUB_CONFIG: /config/infrahub.toml
      INFRAHUB_DB_TYPE: neo4j
      INFRAHUB_LOG_LEVEL: INFO
      INFRAHUB_PRODUCTION: "true"
      INFRAHUB_INITIAL_ADMIN_TOKEN: 06438eb2-8019-4776-878c-0941b1f1d1ec
      INFRAHUB_SECURITY_SECRET_KEY: 327f747f-efac-42be-9e73-999f08f86b92
      INFRAHUB_STORAGE_DRIVER: s3
      AWS_ACCESS_KEY_ID: xxxx
      AWS_SECRET_ACCESS_KEY: xxxx
      AWS_S3_BUCKET_NAME: infrahub-data
      AWS_S3_ENDPOINT_URL: https://s3

infrahubTaskWorker:
  replicas: 3
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: service
                operator: In
                values:
                  - infrahub-task-worker
          topologyKey: topology.kubernetes.io/zone

neo4j:
  services:
    admin:
      enabled: true
  volumes:
    data:
      mode: dynamic
      dynamic:
        storageClassName: premium-rwo
        requests:
          storage: 100Gi

helm install infrahub -f values.yml path/to/infrahub/chart

You can also install the chart using the OpsMill registry.

helm install infrahub -f values.yml oci://registry.opsmill.io/opsmill/chart/infrahub
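After installation, you can verify the release and the pods with standard Helm and kubectl commands:

helm status infrahub
kubectl get pods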

Enterprise

Enterprise is based on the Community version, with several changes for:

  • enterprise features
  • high availability
  • better performance
  • security hardening (Docker image, etc.)

Infrahub Enterprise can be deployed the same way as Infrahub Community, using the curl method or using Helm:

Quick start via curl

To quickly spin up the latest Infrahub locally, you can retrieve the Docker Compose file from infrahub.opsmill.io/enterprise.

To spin up an Infrahub environment

curl https://infrahub.opsmill.io/enterprise | docker compose -p infrahub -f - up -d

To spin down and remove an Infrahub environment

curl https://infrahub.opsmill.io/enterprise | docker compose -p infrahub -f - down -v

K8s with Helm charts

The Enterprise Helm chart is based on the original Infrahub chart and uses it as a Helm dependency. As a result, most of the Infrahub-related configuration goes inside the infrahub top-level key.

Infrahub Helm Chart: https://github.com/opsmill/infrahub-helm/tree/stable/charts/infrahub-enterprise
ArtifactHub: https://artifacthub.io/packages/helm/infrahub-enterprise/infrahub-enterprise

The following are required for production deployments using Helm:

  • data persistence for the database must be enabled
  • multiple replicas of the Infrahub API Server and Infrahub Task workers should be deployed: you can make use of the affinity variable to define the affinity policy for the pods
  • S3 storage should be configured for the Infrahub API Server; it is required if you run multiple replicas
warning

We do not recommend using the included dependencies (Neo4j, RabbitMQ, Redis) for production. They are present to ease deployment on non-production environments.

You can use the following values example:

infrahub:
  infrahubServer:
    replicas: 3
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: service
                  operator: In
                  values:
                    - infrahub-server
            topologyKey: topology.kubernetes.io/zone
    persistence:
      enabled: false
    ingress:
      enabled: true
    infrahubServer:
      env:
        INFRAHUB_ALLOW_ANONYMOUS_ACCESS: "true"
        INFRAHUB_CACHE_PORT: 6379
        INFRAHUB_CONFIG: /config/infrahub.toml
        INFRAHUB_DB_TYPE: neo4j
        INFRAHUB_LOG_LEVEL: INFO
        INFRAHUB_PRODUCTION: "true"
        INFRAHUB_INITIAL_ADMIN_TOKEN: 06438eb2-8019-4776-878c-0941b1f1d1ec
        INFRAHUB_SECURITY_SECRET_KEY: 327f747f-efac-42be-9e73-999f08f86b92
        INFRAHUB_STORAGE_DRIVER: s3
        AWS_ACCESS_KEY_ID: xxxx
        AWS_SECRET_ACCESS_KEY: xxxx
        AWS_S3_BUCKET_NAME: infrahub-data
        AWS_S3_ENDPOINT_URL: https://s3

  infrahubTaskWorker:
    replicas: 3
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: service
                  operator: In
                  values:
                    - infrahub-task-worker
            topologyKey: topology.kubernetes.io/zone

  neo4j:
    services:
      admin:
        enabled: true
    volumes:
      data:
        mode: dynamic
        dynamic:
          storageClassName: premium-rwo
          requests:
            storage: 100Gi

helm install infrahub -f values.yml path/to/infrahub-enterprise/chart

You can also install the chart using the OpsMill registry.

helm install infrahub -f values.yml oci://registry.opsmill.io/opsmill/chart/infrahub-enterprise

Backup and restore Infrahub Enterprise

Backing up Infrahub Enterprise consists of:

  1. Back up the Neo4j database

You can use Neo4j's Database Backup tooling for this. Since Infrahub Enterprise runs on Neo4j Enterprise, you can leverage the online backup feature, which does not require stopping the database.
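A sketch of an online backup with neo4j-admin, assuming a Docker Compose deployment where the database container is named infrahub-database-1 (verify the name with docker ps); the backup is written inside the container at /backups:

docker exec infrahub-database-1 neo4j-admin database backup neo4j --to-path=/backups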

  2. Back up the artifact store

Back up either the S3 bucket or the local filesystem, depending on the configured storage driver.
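For example, with the S3 driver you could mirror the bucket locally using the AWS CLI (the bucket name is taken from the values example above):

aws s3 sync s3://infrahub-data ./infrahub-artifacts-backup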

  3. Back up the Prefect database (task logs, etc.)

You can use standard PostgreSQL tools such as pg_dump. For example, when running with Docker Compose:

docker exec infrahub-task-manager-db-1 pg_dump -Fc -d prefect -U postgres > prefect.dump

Restoring Infrahub Enterprise consists of:

  1. Starting Infrahub
  2. Restoring the Neo4j database (Database Restore)
  3. Restoring the artifact store
  4. Restoring the Prefect database

For example, when running using Docker Compose:

docker exec infrahub-task-manager-db-1 pg_restore -d prefect -U postgres --clean --create prefect.dump

Then restart the task manager (Prefect server).

  5. Restarting Infrahub (API servers, then task workers)
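With Docker Compose, the restarts might look like the following; the service names here are assumptions based on the container names above, so check yours with docker compose -p infrahub ps first:

docker compose -p infrahub restart task-manager
docker compose -p infrahub restart infrahub-server task-worker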

High Availability

See the High Availability Architecture documentation for details on deploying Infrahub in a highly available setup.