GuideDevOps
Lesson 5 of 10

GitOps Workflows

Part of the GitOps tutorial series.

The Single vs Dual Repository Debate

When you transition a legacy deployment to GitOps, the very first architectural decision you make is where to store your Kubernetes YAML files.

1. The Single Repo Approach (Mono-repo)

In this model, your application source code (Java/Node/Python) and your Kubernetes deployment YAML live in the exact same repository.

my-app-repo/
├── src/           # Application code
├── Dockerfile
└── k8s/
    ├── deployment.yaml
    └── service.yaml

Why this is dangerous: with a single repo, your CI pipeline and your CD (GitOps) pipeline collide.

  1. A developer pushes a code feature.
  2. The CI pipeline builds the Docker image (my-app:v1.2).
  3. To deploy this new image via GitOps, the CI pipeline must automatically commit the new version tag back into the k8s/deployment.yaml file.
  4. However, that automated commit back to the same repository triggers the CI pipeline again, producing a redundant (and potentially endless) build loop.
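If you are stuck with a single repo, one common mitigation is to exclude the manifest folder from the build trigger so the automated tag-bump commit does not restart CI. A sketch assuming GitHub Actions (the k8s/ path matches the tree above; the workflow itself is hypothetical):

```yaml
# Hypothetical trigger block for the mono-repo's CI workflow.
# Pushes that only touch k8s/ (the automated tag-bump commits)
# will not re-trigger the image build.
on:
  push:
    branches: [main]
    paths-ignore:
      - "k8s/**"
```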

2. The Dual Repo Approach (The Industry Standard)

To avoid circular CI/CD loops and to separate permissions, the widely accepted best practice is to use two separate repositories:

  1. The Application Repo: Contains ONLY source code and the Dockerfile. It generates Docker Images. Only software developers have write access.
  2. The Infrastructure Repo (Config Repo): Contains ONLY Kubernetes YAML manifests or Helm charts. It consumes Docker Images. Software developers have read access, but only DevOps/SREs can approve merges.

The Perfect Dual-Repo Workflow

Let's trace a new feature deployment end to end using ArgoCD and a Dual-Repo setup.

Stage 1: The Code Change (Application Repo)

  1. Developer writes a new feature in the checkout-service.
  2. Developer opens a Pull Request on the Application Repository.
  3. Tests pass. A teammate approves the PR. The code is merged to main.
  4. The CI pipeline (GitHub Actions) triggers. It compiles the code, runs unit tests, and builds a Docker image tagged with the Git commit hash (e.g., checkout:a1b2c3d).
  5. The CI pipeline pushes the Docker image to the registry (e.g., DockerHub).
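Stage 1 might look like this as a GitHub Actions workflow (a sketch; the my-registry name and workflow layout are assumptions, not from the original):

```yaml
# Hypothetical .github/workflows/ci.yaml in the Application Repo.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Tag the image with the short Git commit hash, e.g. checkout:a1b2c3d
      - name: Build and push
        run: |
          docker build -t my-registry/checkout:${GITHUB_SHA::7} .
          docker push my-registry/checkout:${GITHUB_SHA::7}
```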

Stage 2: The Config Update (Bridging the gap)

The Docker image exists, but the cluster knows nothing about it. How do we update the cluster?

  1. The GitHub Action in the Application Repo finishes pushing the image and executes one final step: a commit (or API call) to the separate Infrastructure Repo.
  2. This automated script edits the checkout-deployment.yaml file in the Infrastructure Repo, changing image: checkout:old-version to image: checkout:a1b2c3d.
  3. The script commits this change directly to the main branch of the Infrastructure Repo.
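The bridging step can be sketched as one more CI job in the Application Repo (assumptions: an INFRA_REPO_TOKEN secret with write access to the Infrastructure Repo, and the my-org organization name):

```yaml
  # Hypothetical job appended to the Application Repo's CI workflow.
  update-config:
    runs-on: ubuntu-latest
    steps:
      - name: Bump the image tag in the Infrastructure Repo
        run: |
          git clone "https://x-access-token:${{ secrets.INFRA_REPO_TOKEN }}@github.com/my-org/infrastructure-repo.git"
          cd infrastructure-repo
          # Point the Deployment at the freshly pushed image
          sed -i "s|image: checkout:.*|image: checkout:${GITHUB_SHA::7}|" checkout-deployment.yaml
          git config user.name "ci-bot"
          git config user.email "ci-bot@example.com"
          git commit -am "deploy checkout:${GITHUB_SHA::7}"
          git push origin main
```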

Stage 3: The GitOps Deployment (Infrastructure Repo)

  1. ArgoCD (running inside the Kubernetes cluster) wakes up on its default 3-minute polling cycle.
  2. It looks at the Infrastructure Repo. It notices the commit hash has changed.
  3. It performs a diff and sees that the live checkout-service Deployment is running old-version, while the Git repository specifies a1b2c3d.
  4. ArgoCD executes a rolling update on the checkout-service Deployment until the live state matches the Git target.
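The ArgoCD side of Stage 3 is declared once as an Application resource (a sketch; the repo URL, paths, and namespaces are assumptions):

```yaml
# Hypothetical ArgoCD Application watching the Infrastructure Repo.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: checkout-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/infrastructure-repo.git
    targetRevision: main
    path: checkout-service
  destination:
    server: https://kubernetes.default.svc
    namespace: checkout
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift in the cluster
```

With automated sync enabled, the "polling, diff, rolling update" loop above happens without any human clicking Sync.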

Environment Separation (Folders vs Branches)

When managing Staging and Production environments in the Infrastructure Repository, you must decide how to separate them.

❌ Bad: Branch Separation

Do NOT use the main branch for Staging and a production branch for Production. With branches, deploying to Production means executing a Git merge from Staging into Production. This often results in painful merge conflicts, missing commits, and unpredictable environment states.

✅ Good: Folder Separation (The Standard)

Use a single main branch, but separate environments by folders. This pairs naturally with the Kustomize tool.

infrastructure-repo/
└── checkout-service/
    ├── base/
    │   ├── deployment.yaml  # The core configuration
    │   └── service.yaml
    └── overlays/
        ├── staging/
        │   └── kustomization.yaml # Overrides memory limits and Dev DB URLs
        └── production/
            └── kustomization.yaml # Overrides memory limits and Prod DB URLs
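An overlay's kustomization.yaml then needs only the environment-specific deltas. A sketch for production (the 1Gi memory limit and the container layout are assumptions):

```yaml
# Hypothetical overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  # Raise the memory limit for production only;
  # everything else is inherited from base/
  - target:
      kind: Deployment
      name: checkout
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/resources/limits/memory
        value: 1Gi
```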

If you configure ArgoCD properly:

  • You tell the "Staging ArgoCD Application" to look exclusively at /overlays/staging.
  • You tell the "Production ArgoCD Application" to look exclusively at /overlays/production.

This eliminates environment-promotion merge conflicts entirely, while letting you read the state of the company's whole infrastructure from a single unified branch.
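In practice the two ArgoCD Applications differ only in the folder they watch. A sketch of the staging one (repo URL and namespaces are assumptions); production is identical except for the name, path, and target namespace:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: checkout-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/infrastructure-repo.git
    targetRevision: main
    # The production Application points at overlays/production instead
    path: checkout-service/overlays/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: checkout-staging
```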