In this post, we are going to go through how to deploy JupyterLab on ARM on Kubernetes. We'll also build a container for use with Dask, but you can skip or customize this step to meet your own needs. In the previous post, I got Dask on ARM on Kubernetes working, using remote access so the Jupyter notebook could run outside of the cluster. After running into a few issues from having the client code outside of the cluster, I decided it was worth the effort to set up Jupyter on ARM on K8s.

Rebuilding the JupyterHub Containers

The default Jupyter containers are not yet cross-built for ARM. If your primary development machine is not an ARM machine, you’ll want to set up Docker buildx for cross-building, and I’ve got some instructions on how to do this.
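
If you just need the basics, a minimal buildx setup looks roughly like the following (a sketch assuming Docker 19.03+ with the buildx plugin; the multiarch/qemu-user-static image registers the QEMU handlers that let builds run non-native binaries):

# Register QEMU binfmt handlers so non-native binaries can run during builds
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# Create and select a builder instance that can target multiple platforms
docker buildx create --name multiarch-builder --use
docker buildx inspect --bootstrap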

Warning

One of Jupyter's containers uses cgo to build a small bootstrap program, and this program will not build under QEMU. If you get an error building your containers, check out my instructions on cross-building with real hosts. You can also cross-build without QEMU (discussed in the same post).

JupyterHub uses a special program called chartpress to build its images. Its compose-style build capabilities are similar to Docker's, but Python-focused. To make chartpress use Docker buildx, you'll want to clone the repo (git clone git@github.com:jupyterhub/chartpress.git) and replace the following line in chartpress.py:

-    cmd = ['docker', 'build', '-t', image_spec, context_path]
+    cmd = ['docker', 'buildx', 'build', '-t', image_spec, context_path, "--platform", "linux/arm64,linux/amd64", "--push"]

Then you can pip install your local version:

pip install -e .

Now that you have chartpress set up to cross-build for ARM64 and AMD64, you can check out the docker-stacks repo and make a few changes. First, the base notebook container targets a specific non-cross-platform image hash, so we'll change the FROM:

-ARG ROOT_CONTAINER=ubuntu:focal-20200925@sha256:2e70e9c81838224b5311970dbf7ed16802fbfe19e7a70b3cbfa3d7522aa285b4
+#ARG ROOT_CONTAINER=ubuntu:focal-20200925@sha256:2e70e9c81838224b5311970dbf7ed16802fbfe19e7a70b3cbfa3d7522aa285b4
+ARG ROOT_CONTAINER=ubuntu:focal

Next, Miniconda doesn't have full ARM64 support, so you'll want to swap the Miniconda install for Miniforge. Conveniently, Miniforge's installers are named after the output of uname -m (x86_64, aarch64), so no architecture remapping is needed:

-RUN wget --quiet https://repo.continuum.io/miniconda/Miniconda3-py38_${MINICONDA_VERSION}-Linux-x86_64.sh && \
-    echo "${miniconda_checksum} *Miniconda3-py38_${MINICONDA_VERSION}-Linux-x86_64.sh" | md5sum -c - && \
-    /bin/bash Miniconda3-py38_${MINICONDA_VERSION}-Linux-x86_64.sh -f -b -p $CONDA_DIR && \
-    rm Miniconda3-py38_${MINICONDA_VERSION}-Linux-x86_64.sh && \
+# uname -m reports x86_64 or aarch64, matching the Miniforge installer names
+RUN export arch=$(uname -m) && \
+    wget --quiet https://github.com/conda-forge/miniforge/releases/download/4.8.5-1/Miniforge3-4.8.5-1-Linux-${arch}.sh -O miniforge.sh && \
+    chmod a+x miniforge.sh && \
+    ./miniforge.sh -f -b -p $CONDA_DIR && \
+    rm miniforge.sh && \

The docker-stacks notebooks are built with a Makefile, so to build the base image you'll execute OWNER=holdenk make build/base-notebook, where "OWNER" is set to your Docker Hub username.

In addition to the docker-stacks images, you'll want to rebuild zero-to-jupyterhub-k8s and configurable-http-proxy.

With zero-to-jupyterhub-k8s you'll also need to change images/singleuser-sample/Dockerfile to use the docker-stacks image you built (e.g. in mine I replaced FROM jupyter/base-notebook:45bfe5a474fa with FROM holdenk/base-notebook:latest). The py-spy package will also need to be removed from the images/hub/requirements.txt file since it is not cross-built (and it is optional anyway). zero-to-jupyterhub-k8s is built with chartpress, so you just build with a custom image prefix:

chartpress --image-prefix holdenk/jupyter-hub-magic --force-build

With configurable-http-proxy no changes are necessary to the project itself, and you can directly run docker buildx build -t holdenk/jconfigurable-http-proxy:0.0.1 . --platform linux/arm64,linux/amd64 --push. Note this is a different build command because the proxy project does not use chartpress.

Configuring the container images

Now that you've built the container images, we need to configure our Helm chart to use them. When you run chartpress inside of the zero-to-jupyterhub-k8s repo, it updates the JupyterHub Helm chart's values. You can either install this freshly built chart from your local checkout (e.g. helm install $RELEASE ~/repos/scalingpythonml/scratch/zero-to-jupyterhub-k8s/jupyterhub --namespace $NAMESPACE --create-namespace --values config.yaml), or you can use the existing published Helm chart and override the images.

To install with the existing published chart, first add the JupyterHub chart repository:

helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
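
If you want to double-check that the chart version we'll install later (0.10.2) has been published, you can list the available versions:

# List published versions of the JupyterHub chart
helm search repo jupyterhub/jupyterhub --versions | head -n 5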

To override the images, you'll need to specify them in the configuration you pass to Helm when you're doing your install later. I put this in a file called config.yaml:

hub:
  image:
    name: holdenk/jupyter-hub-magichub
proxy:
  secretSync:
    image:
      name: holdenk/jupyter-hub-magicsecret-sync
      tag: '0.10.2'
# This one needs to be overridden either way.
  chp:
    image:
      name: holdenk/jconfigurable-http-proxy
      tag: '0.0.1'
singleuser:
  networkTools:
    image:
      name: holdenk/jupyter-hub-magicnetwork-tools
      tag: '0.10.2'
  image:
    name: holdenk/jupyter-hub-magicsingleuser-sample
    tag: '0.10.2'
prePuller:
  hook:
    image:
      name: holdenk/jupyter-hub-magicimage-awaiter
      tag: '0.10.2'

We’ll keep building on the above configuration, since we need to do more than just override the images.

Setting up an SSL Certificate

Jupyter expects an SSL certificate for its endpoint. If you don't have cert-manager installed, the guide at https://opensource.com/article/20/3/ssl-letsencrypt-k3s shows how to configure SSL using Let's Encrypt. If you don't have a publicly accessible IP and domain, you'll need to use an alternative provider. Once you have cert-manager installed, it's time to request the certificate. The YAML for my certificate request for holdenkarau.mooo.com is shown below.

apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: k3s-mooo
  namespace: jhub
spec:
  secretName: k3s-mooo-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  commonName: holdenkarau.mooo.com
  dnsNames:
  - holdenkarau.mooo.com

Once you make your certificate request you can apply it with kubectl apply -f le-prod-cert.yaml, and monitor it with kubectl get certificates -n jhub -w -o yaml. If your certificate does not become "Ready", you should check out the cert-manager debugging guide.
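
The certificate request above assumes a ClusterIssuer named letsencrypt-prod, which the linked guide sets up. In case you skipped that step, a minimal sketch of such an issuer looks roughly like this (the email is a placeholder, and the traefik ingress class matches the K3s default):

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com  # placeholder: use your real address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: traefik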

Now that you've got your SSL certificate stored as a secret in your cluster, you'll need to configure your JupyterHub ingress to use it by adding the following to your config.yaml:

ingress:
  enabled: true
  hosts:
    - holdenkarau.mooo.com
  tls:
    - hosts:
        - holdenkarau.mooo.com
      secretName: k3s-mooo-tls

Making Jupyter work without a second public IP

My home only has one public IP address, with no spare to assign to Jupyter, so I changed the proxy service type from LoadBalancer to NodePort:

proxy:
  service:
    type: NodePort
  secretToken: DIFFERENTSECRET

With this change the service deployed successfully and Traefik (installed in K3s by default) was able to route the requests.
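
To find out which port the proxy ended up on, you can inspect the service; proxy-public is the name the zero-to-jupyterhub chart gives the public-facing proxy service:

# Shows the NodePort assigned to JupyterHub's public proxy service
kubectl get service proxy-public --namespace jhub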

Setting up Authentication

I was unable to get the JupyterHub GitHub plugin working, but it looks like there is an outstanding issue to refactor the auth configuration, so for now I just hard-coded what is known as "dummy" authentication. I recommend switching to a different kind of authentication as soon as the refactoring is complete.

auth:
  type: dummy
  dummy:
    password: 'mypassword'
  whitelist:
    users:
      - user1
      - user2

Installing JupyterHub with Helm

Now you can install this with Helm:

RELEASE=jhub
NAMESPACE=jhub

helm install $RELEASE jupyterhub/jupyterhub \
  --namespace $NAMESPACE \
  --create-namespace \
  --version=0.10.2 \
  --values config.yaml
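
The install takes a little while since the images need to be pulled, so you can watch the pods come up with:

# Watch the hub, proxy, and puller pods until they are Running
kubectl get pods --namespace jhub --watch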

And now you're ready to rock and roll with JupyterHub! However, part of the config is still commented out; that's because we have not yet built the single-user Dask Jupyter container. If you aren't using Dask, this can be your stopping point.

Adding Dask Support

Adding Dask support involves configuring permissions to make sure the notebook can launch Dask workers, building an image that works with your JupyterHub launcher, and adding that image as a profile to your launcher.

Permissions

The default service account used will probably not have the right permissions to launch dask-distributed workers. To start with, create a specification of the permissions your notebook is going to need; I called mine setup.yaml (not very creative, I know):

kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: daskKubernetes
  namespace: dask
rules:
- apiGroups:
  - ""  # indicates the core API group
  resources:
  - "pods"
  verbs:
  - "get"
  - "list"
  - "watch"
  - "create"
  - "delete"
- apiGroups:
  - ""  # indicates the core API group
  resources:
  - "pods/log"
  verbs:
  - "get"
  - "list"
- apiGroups:
  - "" # indicates the core API group
  resources:
  - "services"
  verbs:
  - "get"
  - "list"
  - "watch"
  - "create"
  - "delete"

Now that you've specified the permissions, you can go ahead and create the namespace, service account, and role binding to wire everything together:

kubectl create namespace dask
kubectl create serviceaccount dask --namespace dask
kubectl apply -f setup.yaml
kubectl create rolebinding dask-sa-binding --namespace dask --role=daskKubernetes --serviceaccount=dask:dask
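
You can verify the binding took effect with kubectl's built-in authorization check, impersonating the new service account:

# Should print "yes" if the role binding is working
kubectl auth can-i create pods --namespace dask --as=system:serviceaccount:dask:dask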

Building container images

The dask-docker project contains a notebook container file; however, it is not designed for use with JupyterHub's launcher. The first change needed is commenting out the auto-start in notebook/prepare.sh. The other required change is swapping the Dockerfile's FROM to the singleuser-sample image you cross-built. I updated mine to also install some helpful libraries:

FROM  holdenk/jupyter-hub-magicsingleuser-sample:0.10.2

USER root

RUN apt-get update \
    && apt-get install -yq graphviz git build-essential \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

RUN touch /hello_holden

USER $NB_USER

# Mamba is a faster drop-in replacement for the conda package manager
RUN conda install -c conda-forge --yes mamba
RUN mamba install --yes python==3.8.6
# Fall back to pip for packages without a conda-forge build on this arch
RUN (mamba install --yes aiohttp==3.7.1 || pip install aiohttp==3.7.1 )
RUN mamba install --yes \
    python-blosc \
    cytoolz \
    dask==2.30.0 \
    dask-core==2.30.0 \
    lz4 \
    numpy==1.19.2 \
    ipywidgets \
    python-graphviz && \
    mamba install --yes s3fs gcsfs dropboxdrivefs requests dropbox paramiko adlfs pygit2 pyarrow && \
    mamba install --yes bokeh numba llvmlite
RUN (mamba install --yes fastparquet || pip install fastparquet)
RUN (mamba install --yes jupyter-server-proxy || pip install jupyter-server-proxy)
RUN (mamba install --yes dask-labextension==3.0.0 || pip install dask-labextension==3.0.0)
RUN jupyter labextension install @jupyter-widgets/jupyterlab-manager dask-labextension@3.0.0 @jupyterlab/server-proxy \
    && jupyter serverextension enable dask-labextension@3.0.0 @jupyterlab/server-proxy \
    && pip install dask-kubernetes==0.11.0 \
    && jupyter lab clean \
    && jlpm cache clean \
    && npm cache clean --force \
    && find /opt/conda/ -type f,l -name '*.a' -delete \
    && find /opt/conda/ -type f,l -name '*.pyc' -delete \
    && find /opt/conda/ -type f,l -name '*.js.map' -delete \
    && find /opt/conda/lib/python*/site-packages/bokeh/server/static -type f,l -name '*.js' -not -name '*.min.js' -delete || echo "no bokeh static files to cleanup" \
    && rm -rf /opt/conda/pkgs

USER root

# Create the /opt/app directory, and assert that Jupyter's NB_UID/NB_GID values
# haven't changed.
RUN mkdir /opt/app \
    && if [ "$NB_UID" != "1000" ] || [ "$NB_GID" != "100" ]; then \
    echo "Jupyter's NB_UID/NB_GID changed, need to update the Dockerfile"; \
    exit 1; \
    fi

# Copy over the example as NB_USER. Unfortunately we can't use $NB_UID/$NB_GID
# in the `--chown` statement, so we need to hardcode these values.
COPY --chown=1000:100 examples/ /home/$NB_USER/examples
COPY prepare.sh /usr/bin/prepare.sh

ENTRYPOINT ["tini", "--", "/usr/bin/prepare.sh"]

As before, you can build this with the standard docker buildx commands.
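
For example, something like the following (using the tag I reference in the profile configuration below):

docker buildx build -t holdenk/dask-notebook:v0.9.4b . \
  --platform linux/arm64,linux/amd64 --push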

Configuring your Jupyter

To enable Dask with Jupyter you'll need to update your configuration to both specify the service account and add a profile for the notebook container you've created. In my situation this looks like:
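
singleuser:
  serviceAccountName: dask
  profileList:
    - display_name: "Minimal environment"
      description: "To avoid too much bells and whistles: Python."
      default: true
    - display_name: "Dask container"
      description: "If you want to run dask"
      kubespawner_override:
        image: holdenk/dask-notebook:v0.9.4b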

Once you're done with your config changes, you need to update your install with helm upgrade --cleanup-on-fail --install $RELEASE jupyterhub/jupyterhub --namespace $NAMESPACE --create-namespace --version=0.10.2 --values config.yaml.

Conclusion

All in all, my configuration file looks something like the following (except with different secrets). Your config file will look a bit different depending on the choices you made while running through this post.

hub:
  image:
    name: holdenk/jupyter-hub-magichub
proxy:
  service:
    type: NodePort
  secretToken: DIFFERENTSECRET
  secretSync:
    image:
      name: holdenk/jupyter-hub-magicsecret-sync
      tag: '0.10.2'
  chp:
    image:
      name: holdenk/jconfigurable-http-proxy
      tag: '0.0.1'
ingress:
  enabled: true
  hosts:
    - holdenkarau.mooo.com
  tls:
    - hosts:
        - holdenkarau.mooo.com
      secretName: k3s-mooo-tls
singleuser:
  serviceAccountName: dask
  networkTools:
    image:
      name: holdenk/jupyter-hub-magicnetwork-tools
      tag: '0.10.2'
  image:
    name: holdenk/jupyter-hub-magicsingleuser-sample
    tag: '0.10.2'
  profileList:
    - display_name: "Minimal environment"
      description: "To avoid too much bells and whistles: Python."
      default: true
    - display_name: "Dask container"
      description: "If you want to run dask"
      kubespawner_override:
        image: holdenk/dask-notebook:v0.9.4b
prePuller:
  hook:
    image:
      name: holdenk/jupyter-hub-magicimage-awaiter
      tag: '0.10.2'
# Do something better here! It's being reworked though - https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/1871
auth:
  type: dummy
  dummy:
    password: 'mypassword'
  whitelist:
    users:
      - user1
      - user2

Now you're ready to rock and roll with more stable Dask jobs that can survive your notebook going to sleep or your home internet connection dropping out between you and your cluster. The next blog post will explore building a JupyterHub launcher container for a Spark notebook, which is a bit more involved.