There are various ways to deploy applications in the cloud. Below is a guide to deploying with Docker Compose and Kubernetes. Since the application is already containerized, we recommend Kubernetes for greater flexibility and scalability.
If the ZBook application is small and has low resource requirements, Docker Compose can be used for deployment. Docker Compose helps you quickly deploy containerized applications on either local or remote servers.
Here are the basic steps for deploying with Docker Compose:
Clone the repository and enter the project directory:
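A minimal sketch of this step, assuming the main ZBook repository is hosted at https://github.com/zizdlp/zbook (adjust the URL if you use a fork or mirror):

```bash
# Clone the repository (URL assumed) and enter the project directory
git clone https://github.com/zizdlp/zbook.git
cd zbook
```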
Run Docker Compose commands:
Use the `docker-compose` command to start the services:
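A typical invocation is sketched below; the exact compose file name and any extra flags depend on the repository layout:

```bash
# Pull the latest images and start all services in the background
docker-compose pull
docker-compose up -d
```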
This command will pull the latest images and start the services.
Configure environment variables:
Configuring the correct environment variables is key to ensuring the services run smoothly. The `compose.env` file contains various environment variables; you can modify the settings for PostgreSQL, MinIO, email services, WebSocket, OAuth authentication, and more. For details on these parameters and how to configure them, see Configuration.
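For illustration only, an environment file of this kind usually groups credentials and endpoints per service. The variable names below are hypothetical; the authoritative list is the compose.env shipped with the repository and the Configuration page:

```bash
# Hypothetical excerpt of compose.env -- use the variable names shipped with the repository
POSTGRES_USER=zbook
POSTGRES_PASSWORD=change-me
MINIO_ROOT_USER=minio
MINIO_ROOT_PASSWORD=change-me
SMTP_HOST=smtp.example.com
SMTP_PORT=587
```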
For more complex applications, or if you want better scalability and management capabilities on a cloud platform, Kubernetes is recommended. Kubernetes helps you manage container lifecycles, auto-scaling, and high availability.
Here are the basic steps for deploying with Kubernetes:
Install a Kubernetes cluster:
You can use various Kubernetes distributions to create a cluster, such as Minikube, kubeadm, or Managed Kubernetes Services (like Google Kubernetes Engine, Amazon EKS, Azure Kubernetes Service).
Write Kubernetes deployment files:
Create `deployment.yaml` and `service.yaml` files to define Deployments and Services in Kubernetes.
Apply the configuration:
Use the `kubectl` command to apply the configuration to the cluster:
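For example, assuming the manifests are named as above:

```bash
# Apply the deployment and service manifests to the current cluster context
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
```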
For resource-constrained environments, or if you want a lightweight Kubernetes deployment locally, you can use K3s. K3s is a lightweight Kubernetes distribution, suitable for edge computing, IoT, and development environments.
Install K3s:
Download and run the K3s installation script on your host:
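The K3s project provides an installation script at get.k3s.io:

```bash
# Download and run the official K3s installation script
curl -sfL https://get.k3s.io | sh -
```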
Verify the installation:
After installation, you can use the `kubectl` command to verify the status of K3s:
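For example:

```bash
# List the cluster nodes; on a default K3s install you may need sudo,
# or point KUBECONFIG at /etc/rancher/k3s/k3s.yaml
kubectl get nodes
```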
You should see the list of your nodes, indicating that K3s has been successfully installed.
Here is an example Kubernetes YAML configuration, including Persistent Volumes, Persistent Volume Claims, and Redis deployment:
Persistent Volume and Persistent Volume Claim
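A minimal sketch of a hostPath-backed volume for Redis; the names, storage size, storage class, and path are illustrative and should be adapted to your cluster's storage setup:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv               # illustrative name
spec:
  capacity:
    storage: 1Gi               # adjust to your needs
  accessModes:
    - ReadWriteOnce
  storageClassName: manual     # bind explicitly to the claim below
  hostPath:
    path: /data/redis          # node-local path; use a real storage backend in production
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi
```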
Deployment Configuration
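A sketch of a single-replica Redis Deployment that mounts the claim defined above; the image tag and labels are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7            # illustrative image tag
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: redis-data
              mountPath: /data
      volumes:
        - name: redis-data
          persistentVolumeClaim:
            claimName: redis-pvc    # matches the claim defined above
```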
Helm is a package manager for Kubernetes that simplifies the deployment process. Here are the steps to deploy an application using Helm:
Install Helm:
Follow the official Helm documentation for installation instructions.
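As one option, the Helm project publishes an installer script (check the official install docs for the current URL):

```bash
# Download and run the official Helm installer script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```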
Create a Helm Chart:
Use Helm to create a new chart:
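For example, to scaffold a chart named zbook (matching the values.yaml path used below):

```bash
# Scaffold a new chart directory named "zbook"
helm create zbook
```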
This generates a directory structure for a Helm chart, where you can define Kubernetes deployment configurations.
Configure Helm Chart:
Configure your application settings in the `zbook/values.yaml` file. You can define environment variables, service settings, and more in `values.yaml`.
The ZBook Helm chart is available at https://github.com/zizdlp/zbook-helm-chart. First rename values_template.yaml to values.yaml, then fill in your own configuration details.
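If you use the published chart rather than scaffolding your own, the workflow might look like this (paths follow the chart repository's layout):

```bash
# Fetch the ZBook Helm chart and create values.yaml from the template
git clone https://github.com/zizdlp/zbook-helm-chart.git
cd zbook-helm-chart
mv values_template.yaml values.yaml   # then fill in your own configuration
```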
Deploy the Application:
Use Helm to deploy the application to the Kubernetes cluster:
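Assuming the chart directory created earlier with `helm create zbook` (or the cloned chart repository), a deployment could look like:

```bash
# Install the chart as a release named "zbook" into the current cluster
helm install zbook ./zbook
```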
You can use `helm list` to check the deployment status and `helm upgrade` to update the application.
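For example:

```bash
# Inspect installed releases, then roll out updated chart values
helm list
helm upgrade zbook ./zbook
```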
By following these steps, you can manage your application within a Kubernetes cluster and enjoy the benefits of container orchestration. For more detailed help, refer to the Kubernetes official documentation, K3s official documentation, and Helm official documentation.