What Is a CI/CD Pipeline? The Automated Path From Code to Production
You built a feature. It works. Now what?
You test it, build it, upload it to the server, restart the service. Every time. For every change. By hand.
This isn't just a waste of time — every manual step is a potential mistake. Deploying the wrong branch, skipping a test, uploading a stale build. A CI/CD pipeline automates this entire process and removes human error from the equation.
CI and CD: Two Separate Concepts
CI (Continuous Integration): Developers integrate code changes frequently (multiple times a day) into a shared branch, and automated tests run on every integration. The goal: catch bugs early, avoid accumulating integration problems.
CD (Continuous Delivery / Continuous Deployment):
Continuous Delivery — code is always kept in a deployable state. Taking it to production requires manual approval, but it's technically ready at any moment.
Continuous Deployment — every successful CI run is automatically deployed to production. No human intervention whatsoever.
Developer → Git Push → CI Triggered → Tests Run → Build Created
↓
CD: Deploy to Staging → Approval → Production
The Anatomy of a Pipeline
A typical CI/CD pipeline consists of these stages:
1. Trigger: An event starts the pipeline. Push to main, PR opened, tag created.
2. Build: Code is compiled (if needed), dependencies installed, artifact created.
3. Test: Unit tests, integration tests, linting, code coverage checks.
4. Security Scan: Vulnerabilities in dependencies, container image scanning.
5. Deploy to Staging: Automatic deploy to test environment.
6. Smoke Tests: Verify basic flows work in a production-like environment.
7. Deploy to Production: Manual approval or automatic.
A Practical Pipeline with GitHub Actions
GitHub Actions is GitHub's integrated CI/CD platform. You define pipelines in YAML files under .github/workflows/ in your repository.
```yaml
# .github/workflows/ci.yml
name: CI/CD Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"
      - name: Install dependencies
        run: npm ci
      - name: Run linter
        run: npm run lint
      - name: Run tests
        run: npm test -- --coverage

  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build Docker image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Push Docker image
        run: docker push myapp:${{ github.sha }}

  deploy-production:
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    environment: production
    steps:
      - name: Deploy to production
        run: |
          ssh deploy@production.example.com \
            "docker pull myapp:${{ github.sha }} && \
             docker stop myapp || true && \
             docker run -d --name myapp -p 3000:3000 myapp:${{ github.sha }}"
```
The needs keyword creates a dependency chain — build won't start unless tests pass, deploy won't start unless build passes.
environment: production ties the job to a GitHub environment. If that environment is configured with required reviewers, it becomes a manual approval gate — the production deploy doesn't start until someone clicks "Approve."
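Note that the approval rule itself lives in the repository settings (Settings > Environments > production > Required reviewers), not in the workflow file. The YAML only names the environment; an optional url can be attached so the deployment links to the live site (a sketch — the URL here is a placeholder):

```yaml
  deploy-production:
    runs-on: ubuntu-latest
    # Job pauses here until a required reviewer approves,
    # if the "production" environment has reviewers configured.
    environment:
      name: production
      url: https://production.example.com  # placeholder; shown on the deployment record
```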
Secrets Management
Passwords, API keys, tokens — sensitive values never go in YAML files. They're managed through GitHub Secrets or a similar vault system.
```yaml
# Correct: read from secret
- name: Deploy
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}
    API_KEY: ${{ secrets.API_KEY }}
  run: ./deploy.sh

# WRONG: never do this
- name: Deploy
  run: DATABASE_URL=postgres://user:password@host/db ./deploy.sh
```
Secrets are defined in the GitHub repository under Settings > Secrets and variables > Actions. They're automatically masked as *** in logs.
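Inside the deploy script, a secret injected via env: arrives as an ordinary environment variable. A minimal Python sketch of the consuming side (the fallback value below is purely for demonstration — in a real pipeline the variable comes from the secret store):

```python
import os

def require_env(name: str) -> str:
    """Fail fast if a required secret was not injected by the pipeline."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"{name} is not set")
    return value

# In CI this is injected via `env: DATABASE_URL: ${{ secrets.DATABASE_URL }}`.
# The placeholder below exists only so the sketch runs outside CI.
os.environ.setdefault("DATABASE_URL", "postgres://example.invalid/db")
database_url = require_env("DATABASE_URL")
```

Failing fast when a secret is missing beats discovering it mid-deploy from a cryptic connection error.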
Deployment Strategies
Rolling Deployment: Instances are updated one by one. No downtime, but old and new versions run simultaneously for a period.
Blue/Green Deployment: Two identical environments (blue = active, green = new version). New version deploys to green, gets tested, traffic switches to green all at once. If problems arise, instantly switch back to blue.
Canary Deployment: New version is initially sent to a small percentage of traffic (e.g., 5%). Metrics are monitored, percentage increases if no issues.
Blue/Green:

  Before switch:                After switch:
  Traffic → Blue (v1)           Traffic → Green (v2)
            Green (v2) ready              Blue (v1) standby
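The canary idea boils down to a weighted coin flip per request. A tiny Python sketch (hypothetical names, illustration only — real canaries are implemented at the load balancer or service mesh):

```python
import random

def route_request(canary_percent: float, rng: random.Random) -> str:
    """Route one request: roughly canary_percent% of traffic hits the new version."""
    return "canary" if rng.random() * 100 < canary_percent else "stable"

# Simulate 10,000 requests at a 5% canary weight.
rng = random.Random(42)
counts = {"stable": 0, "canary": 0}
for _ in range(10_000):
    counts[route_request(5.0, rng)] += 1
```

With the percentage driven by a config value, "increase if metrics look healthy" becomes a one-line change rather than a redeploy.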
Why Working Without a Pipeline Is Dangerous
Pipelines skipped on "small projects" become one of the most painful technical debts as the project grows.
Writing tests stops being mandatory because nobody runs them. Fear of deployment grows, release frequency drops. An "if it works, don't touch it" culture sets in. When a bug reaches production, nobody knows which version was deployed, or by whom.
A CI/CD pipeline is easiest to set up while the project is small. In a large project, it becomes the hardest, most expensive, and most friction-laden change to introduce.
Everything that minimizes the time and human intervention between code being committed and reaching production makes software development safer, faster, and more predictable.