Install Charmed Kubeflow in an air-gapped environment

An air-gapped environment is one that does not have access to the public internet. Installing Charmed Kubeflow (CKF) in an air-gapped environment requires special configuration.


Air-gapped Environment Requirements

Canonical does not prescribe how you should set up your specific air-gapped environment. However, it is assumed that the environment meets the following conditions:

  • A K8s cluster is running.
  • A container registry, such as Artifactory, is reachable from the K8s cluster over HTTPS. Note that HTTPS is required; Juju cannot work with the registry otherwise.

MicroK8s DNS

If you are using MicroK8s, the DNS add-on should be configured to use the host's local nameserver. This can be achieved by running:

microk8s enable dns:$(resolvectl status | grep "Current DNS Server" | awk '{print $NF}')

Process Outline

  1. Generate the artifacts (images.tar.gz and charms.tar.gz).
  2. Set up an air-gapped environment with a K8s cluster and an HTTPS-enabled registry.
  3. Extract the images from images.tar.gz and load them into your container registry.
  4. Extract all charms from charms.tar.gz.
  5. Set up Juju in the air-gapped cluster.
  6. Deploy CKF.

Artifact Generation

The following artifacts must be generated: images.tar.gz and charms.tar.gz. To generate these tarballs, use our helper scripts, which scan a CKF release and gather all of its charm and image files.

Clone the bundle-kubeflow repository:

git clone https://github.com/canonical/bundle-kubeflow.git

Change directory to the Airgap utility scripts directory

cd bundle-kubeflow/scripts/airgapped

Install the pre-requisites of the utility scripts

pip3 install -r requirements.txt
sudo apt install pigz
sudo snap install docker
sudo snap install yq
sudo snap install jq

Get a list of all the images you need to download for the Charmed Kubeflow bundle you will be deploying. For example, to get the list of images for Charmed Kubeflow 1.8:

./scripts/airgapped/ releases/1.8/stable/kubeflow/bundle.yaml > images.txt

Pull the images into your Docker cache using the script:

python3 scripts/airgapped/ images.txt

Rename the images in the Docker cache so that they carry the URL of the registry in your air-gapped environment:

python3 scripts/airgapped/ --new-registry=<your air-gap registry> images.txt
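Conceptually, the retagging step rewrites each image reference so that its registry component points at your private registry while the repository path and tag stay intact. Here is a minimal sketch of that transformation; the registry name and image references are hypothetical, and the actual helper script in bundle-kubeflow may behave differently:

```python
def retag(image_ref: str, new_registry: str) -> str:
    """Rewrite an image reference so its registry points at a private one.

    Keeps the repository path and tag; only the registry host changes.
    References without an explicit registry are treated as Docker Hub images.
    """
    first, _, rest = image_ref.partition("/")
    # A registry host contains a dot or a colon (e.g. "quay.io", "localhost:5000");
    # otherwise the first component is part of the repository path.
    if rest and ("." in first or ":" in first):
        return f"{new_registry}/{rest}"
    return f"{new_registry}/{image_ref}"

# Hypothetical image reference and registry:
print(retag("docker.io/kubeflownotebookswg/jupyter-web-app:v1.8.0", "registry.airgap.local"))
# registry.airgap.local/kubeflownotebookswg/jupyter-web-app:v1.8.0
```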

Save the images to images.tar.gz

python3 scripts/airgapped/ retagged-images.txt

Save the charms to charms.tar.gz

python3 scripts/airgapped/ $BUNDLE_PATH

Extracting Artifacts

Both charms and OCI images must be extracted. Charms will be extracted onto the same machine as the Juju client. OCI images will be pushed to the private container registry running in your air-gapped environment.

  1. Move the charms.tar.gz tarball to the air-gapped machine, then extract its contents to the ~/charms directory. This directory will be used in the deployment step.

    mkdir charms
    tar -xzvf charms.tar.gz --directory charms
  2. Move the retagged-images.txt file generated in the previous step to the air-gapped machine under the $HOME directory. This is also needed for the deployment step.

  3. Move the images.tar.gz tar to the air-gapped machine, then load the images into the private registry. Here are some example commands to do this:

    # Extract the images from tar
    mkdir images
    tar -xzvf images.tar.gz --directory images
    rm images.tar.gz
    # Load the images into intermediate Docker client
    for img in images/*.tar; do docker load < $img && rm $img; done
    rmdir images
    # Push the images from local docker to Registry
    python3 scripts/airgapped/ retagged-images.txt

    Additionally, you need to import the charms' Ubuntu base images into your private registry.
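
The extract-and-load loop above can also be sketched in Python. This sketch (our own illustration, with hypothetical file names) only covers the extraction side; the docker load and docker push calls would follow for each extracted per-image tarball:

```python
import tarfile
from pathlib import Path

def extract_image_tarballs(archive: str, dest: str) -> list[str]:
    """Extract images.tar.gz and return the per-image .tar files it contains."""
    Path(dest).mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest)
    # Each member (e.g. jupyter-web-app.tar) holds one OCI image and would
    # next be fed to `docker load` and then pushed to the private registry.
    return sorted(str(p) for p in Path(dest).glob("*.tar"))
```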

Setup Juju

See Juju Airgapped.

Deploying Kubeflow

To deploy Kubeflow, use our Airgapped deployment script.

The script assumes the following:

  • a retagged-images.txt file exists in the home directory of your air-gapped machine. This file contains a list of all the images needed for Charmed Kubeflow, where each image reference points at the air-gapped registry. An example of the head of the retagged-images.txt file:

    # retagged-images.txt

    In the above example, the air-gapped registry is

  • a charms directory exists in the home directory of your air-gapped machine. This directory contains all the charm files to be deployed for Charmed Kubeflow. An example of the contents of the ~/charms directory:

    ls ~/charms
    admission-webhook_r301.charm   jupyter-ui_r858.charm           kfp-profile-controller_r1278.charm  knative-serving_r354.charm          minio_r278.charm               tensorboard-controller_r257.charm
    argo-controller_r424.charm     katib-controller_r446.charm     kfp-schedwf_r1302.charm             kserve-controller_r523.charm        mlmd_r127.charm                tensorboards-web-app_r245.charm
    dex-auth_r422.charm            katib-db-manager_r411.charm     kfp-ui_r1285.charm                  kubeflow-dashboard_r454.charm       mysql-k8s_r127.charm           training-operator_r347.charm
    envoy_r194.charm               katib-ui_r422.charm             kfp-viewer_r1317.charm              kubeflow-profiles_r355.charm        oidc-gatekeeper_r350.charm
    istio-gateway_r723.charm       kfp-api_r1283.charm             kfp-viz_r1235.charm                 kubeflow-roles_r187.charm           pvcviewer-operator_r30.charm
    istio-pilot_r827.charm         kfp-metadata-writer_r334.charm  knative-eventing_r353.charm         kubeflow-volumes_r260.charm         resource-dispatcher_r93.charm
    jupyter-controller_r849.charm  kfp-persistence_r1291.charm     knative-operator_r328.charm         metacontroller-operator_r252.charm  seldon-core_r664.charm
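
Before running the deploy script, a quick preflight check along these lines can confirm that both assumptions hold. This is our own sketch, not part of the bundle-kubeflow tooling; the paths are those described above:

```python
from pathlib import Path

def preflight(home: Path) -> list[str]:
    """Return a list of problems with the expected deployment inputs."""
    problems = []
    if not (home / "retagged-images.txt").is_file():
        problems.append("retagged-images.txt not found in the home directory")
    charms = home / "charms"
    if not charms.is_dir() or not any(charms.glob("*.charm")):
        problems.append("charms directory with .charm files not found")
    return problems
```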

Once you meet the above requirements, you can add the kubeflow model and run the deploy script:

juju add-model kubeflow

Gateway Service Type

In the script, the gateway_service_type option in the Istio gateway configuration is set to LoadBalancer. However, if you don't have a load balancer within your cluster, you can set the service type to NodePort by adding --config gateway_service_type="NodePort" to the istio-ingressgateway deploy command. The change in the script is as follows:

-juju deploy --trust --debug ./$(charm istio-gateway) istio-ingressgateway --config kind=ingress --config proxy-image=$(img istio/proxyv2)
+juju deploy --trust --debug ./$(charm istio-gateway) istio-ingressgateway --config kind=ingress --config proxy-image=$(img istio/proxyv2) --config gateway_service_type="NodePort"


Every setup may be different, e.g. in the choice of K8s distribution (Charmed Kubernetes, EKS, GKE, AKS, MicroK8s, etc.), cloud provider (GCP, AWS, Azure, etc.) and container registry (Docker, Artifactory, etc.). It is impossible for us to cover all combinations, but the following rough example demonstrates the process.

Example Air-gapped Environment Setup

In this example, the air-gapped setup is as follows:

  • MicroK8s runs inside a single node VM.
  • The VM's internet connection has been cut off (its default gateway has been removed).
  • The Docker daemon is running on the VM, alongside MicroK8s, and the Docker CLI is available to those logged into the VM.
  • A Docker registry is deployed as a container inside that VM (not inside the MicroK8s cluster). See Deploying a Registry Server in the Docker documentation.
  • The Docker registry has HTTPS enabled, using a TLS certificate that we created for the registry's domain.
  • The VM has been configured to trust our TLS certificate for HTTPS traffic and to resolve the domain name of our registry.
  • The MicroK8s cluster can reach the Docker registry container via its domain name, to fetch images.

Example Extract and Load Images

It is up to you how to extract and load the images provided in images.tar.gz. This example focuses on how the process might look for one image. Within the overall tarball there is a sub-tarball per image; for example, jupyter-web-app.tar contains the jupyter-web-app image.

The extraction process might look like this:

  1. The main archive is extracted to retrieve all the sub-tarballs: tar -xzvf images.tar.gz. Among the extracted files will be jupyter-web-app.tar.
  2. docker load < jupyter-web-app.tar - this loads the image from the tarball into Docker.
  3. The loaded image keeps the default name it was assigned upstream. Note that this name implies that it lives in the public registry.
  4. A new name is given to the image with docker tag, specifying its new home in our air-gapped registry. Note: at this point there should be two names for the same image in the Docker cache, as can be seen with docker image ls.
  5. The image is pushed to the air-gapped registry with docker push.

A similar process would then be followed for all images. The new names of the images, as they appear in the air-gapped registry, should be noted, as they will be needed in the bundle configuration step.
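The per-image flow above can be captured as a small helper that, given an image's tarball and public name, produces the docker commands to run. This is a simplified sketch of our own (it assumes the public name carries an explicit registry host; all names shown are hypothetical):

```python
def docker_commands(tarball: str, public_name: str, registry: str) -> list[str]:
    """Build the docker commands for steps 2-5 above for one image."""
    # Swap the registry host in the public name for the air-gapped one,
    # keeping the repository path and tag.
    _, _, repo = public_name.partition("/")
    new_name = f"{registry}/{repo}"
    return [
        f"docker load < {tarball}",
        f"docker tag {public_name} {new_name}",
        f"docker push {new_name}",
    ]
```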
