The complete CI/CD setup for my Next.js projects. Linting, testing, type checking, Lighthouse audits, and deployment to Vercel—all automated with GitHub Actions.
Every push to my portfolio triggers a full CI pipeline: lint, typecheck, test, build, Lighthouse audit. If anything fails, the PR is blocked. Here's the setup I use across my Next.js projects.
The pipeline runs in stages:

1. Install dependencies (cached)
2. Lint + typecheck (parallel)
3. Unit tests
4. Build
5. E2E tests against the preview deployment
6. Lighthouse audit

Stages 2 and 3 run in parallel where possible.
ESLint with Next.js config catches common mistakes. TypeScript strict mode catches type errors. These run first because they're fast and catch obvious problems.
I run them in parallel: `npm run lint` and `npm run typecheck` as separate jobs. If either fails, the pipeline stops early. No point running tests if the code doesn't compile. The skeleton below shows how the jobs chain together.
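As a minimal sketch (it assumes `lint`, `typecheck`, and `build` scripts in package.json, and Node 20; adjust to your own setup):

```yaml
# .github/workflows/ci.yml: a sketch of the stage graph, not my exact file
name: CI
on: [pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      - run: npm run lint

  typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      - run: npm run typecheck

  test:
    needs: [lint, typecheck]   # skipped entirely if either stage-2 job fails
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      - run: npx vitest run

  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      - run: npm run build

  # E2E and Lighthouse run in separate workflows against the Vercel preview (below).
```

The `needs` chains are what enforce the early stop: a lint or typecheck failure cancels everything downstream.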
Vitest runs unit and component tests. Fast, parallelized, with coverage reporting. I don't enforce coverage thresholds—I care about meaningful tests, not percentages.
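In the `test` job from the skeleton above, the final step becomes a coverage run plus an artifact upload, so the report sticks around without gating the build. A sketch, assuming a coverage provider like `@vitest/coverage-v8` is installed:

```yaml
      - run: npx vitest run --coverage   # report coverage, don't enforce it
      - uses: actions/upload-artifact@v4
        if: always()                     # keep the report even when tests fail
        with:
          name: coverage
          path: coverage/
```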
For Hoop Almanac, the Python ML tests run in a separate job. The GitHub Action spins up Python, installs dependencies, runs pytest. It only runs if Python files changed (path filtering).
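A sketch of that job as its own workflow (the paths and Python version here are illustrative, not Hoop Almanac's actual layout):

```yaml
# .github/workflows/ml-tests.yml: runs only when Python code changes
name: ML tests
on:
  pull_request:
    paths:
      - "**.py"
      - "requirements.txt"

jobs:
  pytest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
          cache: pip
      - run: pip install -r requirements.txt
      - run: pytest
```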
The build step catches App Router-specific issues: Server/Client Component mismatches, missing 'use client' directives, import errors. If it builds, the code at least runs.
I persist the Next.js build cache between runs, so builds are incremental and fast. A full rebuild only happens when dependencies change.
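The cache step follows the pattern from the Next.js docs: the key includes the lockfile hash plus a source hash, so dependency changes force a clean build while source-only changes restore the most recent cache:

```yaml
      - uses: actions/cache@v4
        with:
          path: .next/cache
          key: ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json') }}-${{ hashFiles('**/*.ts', '**/*.tsx') }}
          restore-keys: |
            ${{ runner.os }}-nextjs-${{ hashFiles('**/package-lock.json') }}-
```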
Playwright tests run against Vercel preview deployments. Vercel automatically deploys each PR to a unique URL. The GitHub Action waits for the deployment, then runs Playwright against it.
This tests the real deployment, not a local build. Environment variables, edge functions, ISR—everything works as it would in production.
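One way to wire this up: GitHub fires a `deployment_status` event when Vercel finishes deploying, and the E2E workflow keys off it. A sketch, assuming playwright.config.ts reads `BASE_URL` into `use.baseURL`:

```yaml
# .github/workflows/e2e.yml: waits for the preview, then tests it
name: E2E
on: deployment_status

jobs:
  e2e:
    if: github.event.deployment_status.state == 'success'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20, cache: npm }
      - run: npm ci
      - run: npx playwright install --with-deps   # browsers + system deps
      - run: npx playwright test
        env:
          BASE_URL: ${{ github.event.deployment_status.target_url }}
```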
I use Lighthouse CI to audit the preview deployment. Thresholds: Performance > 90, Accessibility = 100, Best Practices > 90, SEO > 90. If any threshold fails, the PR is blocked.
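The thresholds live in a lighthouserc file, which LHCI also accepts as YAML; note LHCI scores categories from 0 to 1, so 90 becomes 0.9. A sketch (the preview URL is supplied at run time, e.g. via `--collect.url`):

```yaml
# lighthouserc.yml: enforced by `lhci autorun`
ci:
  collect:
    numberOfRuns: 3   # median of three runs smooths out noise
  assert:
    assertions:
      "categories:performance": ["error", { "minScore": 0.9 }]
      "categories:accessibility": ["error", { "minScore": 1 }]
      "categories:best-practices": ["error", { "minScore": 0.9 }]
      "categories:seo": ["error", { "minScore": 0.9 }]
  upload:
    target: temporary-public-storage
```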
The results are posted as a PR comment. I can see exactly which metrics changed. This catches performance regressions before they hit production.
Vercel handles deployment automatically. PRs get preview deployments. Merges to main deploy to production. I don't manage this in GitHub Actions—Vercel's integration is better.
The pipeline's job is to gate deployments. If CI passes, Vercel can deploy. If CI fails, the PR can't merge, so production stays safe.
Node modules are cached with actions/cache. The cache key includes package-lock.json hash—dependencies only reinstall when they change. This saves 1-2 minutes per run.
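A sketch of that step; skipping `npm ci` on a cache hit is what makes reinstalls conditional on the lockfile:

```yaml
      - uses: actions/cache@v4
        id: npm-cache
        with:
          path: node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
      - run: npm ci
        if: steps.npm-cache.outputs.cache-hit != 'true'   # reinstall only when the lockfile changed
```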
The Next.js build cache is persisted too (see above), so incremental builds are dramatically faster. A typical CI run takes 3-4 minutes, not 10+.
Secrets are stored in GitHub repository settings, and the workflow accesses them through the `secrets` context. Never commit API keys; the CI environment receives them securely.
For Sanity, OpenAI, and database connections, I use different credentials for CI vs production. The CI credentials have limited permissions—enough to run tests, not enough to damage production data.
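In the workflow, that's just mapping CI-scoped secrets into env vars; the names here are hypothetical:

```yaml
      - run: npx vitest run
        env:
          # Hypothetical secret names: CI credentials with limited, read-mostly access
          SANITY_API_TOKEN: ${{ secrets.SANITY_API_TOKEN_CI }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY_CI }}
          DATABASE_URL: ${{ secrets.DATABASE_URL_CI }}
```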
With the App Router, the CI pipeline barely changes: same linting, same testing, same deployment. The main difference is that build errors may surface Server/Client Component issues. Fix those in development, not in CI.
CI should catch problems before production. Lint catches style issues, TypeScript catches type errors, tests catch logic bugs, Lighthouse catches performance regressions. Automate the boring stuff so you can focus on building features.