Requirements for Auto DevOps (FREE ALL)
Before enabling Auto DevOps, we recommend you prepare it for deployment. If you don't, you can use it to build and test your apps, and then configure the deployment later.
To prepare the deployment:
- Define the deployment strategy.
- Prepare the base domain.
- Define where you want to deploy it.
Auto DevOps deployment strategy
Introduced in GitLab 11.0.
When using Auto DevOps to deploy your applications, choose the continuous deployment strategy that works best for your needs:
| Deployment strategy | Setup | Methodology |
|---------------------|-------|-------------|
| Continuous deployment to production | Enables Auto Deploy with the default branch continuously deployed to production. | Continuous deployment to production. |
| Continuous deployment to production using timed incremental rollout | Sets the INCREMENTAL_ROLLOUT_MODE variable to timed. | Continuously deploy to production with a 5-minute delay between rollouts. |
| Automatic deployment to staging, manual deployment to production | Sets STAGING_ENABLED to 1 and INCREMENTAL_ROLLOUT_MODE to manual. | The default branch is continuously deployed to staging and continuously delivered to production. |
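These strategies are driven by CI/CD variables, so if you manage settings in code you can express the same behavior in `.gitlab-ci.yml`. The snippet below is a minimal sketch, assuming the Auto DevOps template and the STAGING_ENABLED and INCREMENTAL_ROLLOUT_MODE variables shown in the table; adjust it to your project.

```yaml
# .gitlab-ci.yml — sketch: Auto DevOps with automatic staging and manual incremental rollout.
# Assumes the Auto DevOps template and the variables described in the table above.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  STAGING_ENABLED: "1"                # continuously deploy the default branch to staging
  INCREMENTAL_ROLLOUT_MODE: "manual"  # roll out to production in manual increments
```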
You can choose the deployment method when enabling Auto DevOps or later:
1. On the left sidebar, select Search or go to and find your project.
2. Select Settings > CI/CD.
3. Expand Auto DevOps.
4. Choose the deployment strategy.
5. Select Save changes.
NOTE: Use the blue-green deployment technique to minimize downtime and risk.
Auto DevOps base domain
To define the base domain, either:
- At the project, group, or instance level: go to your cluster settings and add it there.
- At the project or group level: add it as an environment variable: KUBE_INGRESS_BASE_DOMAIN.
- At the instance level: go to the Admin Area, then Settings > CI/CD > Continuous Integration and Delivery and add it there.
The base domain variable KUBE_INGRESS_BASE_DOMAIN follows the same order of precedence as other environment variables.
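If you take the variable route, the following is a minimal sketch of setting the base domain in `.gitlab-ci.yml` (you can equally set it as a CI/CD variable in the UI); the domain value is a placeholder.

```yaml
# Sketch: set the Auto DevOps base domain for this project.
# example.com is a placeholder — use the domain your wildcard DNS record points at.
variables:
  KUBE_INGRESS_BASE_DOMAIN: "example.com"
```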
If you don't specify the base domain in your projects and groups, Auto DevOps uses the instance-wide Auto DevOps domain.
Auto DevOps requires a wildcard DNS A record that matches the base domain. For a base domain of example.com, you'd need a DNS entry like:

*.example.com 3600 A 10.0.2.2
In this case, the deployed applications are served from example.com, and 10.0.2.2 is the IP address of your load balancer, generally NGINX (see requirements).
Setting up the DNS record is beyond the scope of this document; check with your
DNS provider for information.
After completing setup, all requests hit the load balancer, which routes requests to the Kubernetes pods running your application.
Auto DevOps requirements for Kubernetes
To make full use of Auto DevOps with Kubernetes, you need the following.

Ingress controller (for deployments)

To enable deployments, you need an Ingress controller for external HTTP traffic. For regular deployments, any Ingress controller should work, but as of GitLab 14.0, canary deployments require NGINX Ingress. You can deploy the NGINX Ingress controller to your Kubernetes cluster either through the GitLab Cluster management project template or manually by using the ingress-nginx chart.
NOTE: If your cluster is installed on bare metal, see Auto DevOps requirements for bare metal.
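As a rough illustration of the manual route, the ingress-nginx chart can be installed with Helm or a helmfile. The sketch below assumes a helmfile-based cluster management setup and the upstream chart repository; the release name and namespace are examples only.

```yaml
# helmfile.yaml — sketch: install the NGINX Ingress controller from the upstream chart.
# Repository URL and chart name are the upstream ingress-nginx defaults; namespace is an example.
repositories:
  - name: ingress-nginx
    url: https://kubernetes.github.io/ingress-nginx

releases:
  - name: ingress
    namespace: gitlab-managed-apps
    chart: ingress-nginx/ingress-nginx
```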
Base domain

You must specify the Auto DevOps base domain, which all of your Auto DevOps applications use. This domain must be configured with wildcard DNS.
GitLab Runner (for all stages)
Your runner must be configured to run Docker, usually with either the Docker or Kubernetes executor, with privileged mode enabled. The runners don't need to be installed in the Kubernetes cluster, but the Kubernetes executor is easy to use and autoscales automatically. You can configure Docker-based runners to autoscale as well, using Docker Machine.
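For the Kubernetes executor, privileged mode is typically set in the runner configuration. The values below are a sketch for the GitLab Runner Helm chart, assuming its runners.config field with an embedded config.toml snippet; the URL and token are placeholders for your own instance.

```yaml
# values.yaml — sketch for the GitLab Runner Helm chart (Kubernetes executor, privileged mode).
# gitlabUrl and runnerToken are placeholders for your own instance and token.
gitlabUrl: https://gitlab.example.com
runnerToken: "REDACTED"

runners:
  config: |
    [[runners]]
      executor = "kubernetes"
      [runners.kubernetes]
        privileged = true   # required for the Docker-in-Docker builds Auto DevOps uses
```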
cert-manager (optional, for TLS/HTTPS)
To enable HTTPS endpoints for your application, you can install cert-manager, a native Kubernetes certificate management controller that helps with issuing certificates. Installing cert-manager on your cluster issues a Let's Encrypt certificate and ensures the certificates are valid and up-to-date.
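Once cert-manager is installed, certificates are typically requested through an ACME issuer. The manifest below is a minimal sketch of a Let's Encrypt ClusterIssuer using the cert-manager v1 API; the resource name and email address are placeholders.

```yaml
# Sketch: Let's Encrypt ClusterIssuer for cert-manager (v1 API).
# The resource name and email address are placeholders.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key   # Secret that stores the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx                   # matches the NGINX Ingress controller
```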
After all requirements are met, you can enable Auto DevOps.
Auto DevOps requirements for bare metal
According to the Kubernetes Ingress-NGINX docs:
In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers.
The docs linked above explain the issue and present possible solutions, for example:
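One option those docs discuss is a pure software load balancer such as MetalLB, which assigns external IPs from a pool you define. The manifests below are a rough sketch using the MetalLB v1beta1 CRDs; the address range is a placeholder for IPs routable in your network.

```yaml
# Sketch: MetalLB address pool and L2 advertisement (metallb.io/v1beta1 CRDs).
# The address range is a placeholder — use IPs that are routable in your environment.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.2.2-10.0.2.20
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```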