Trellis 2025 (Technical) Year in Review
2025 was a huge year for Trellis. We achieved significant technical milestones while maintaining a lean team. We believe that productivity comes from smart tooling and DevOps automation.
This is our first technical Year in Review. Since we didn't have comprehensive tracking in place throughout 2024, some metrics lack year-over-year comparisons. Going forward, we will be tracking these metrics consistently to provide better insights into our progress.
One of our most impactful achievements was implementing a DORA metrics tracking system. This allowed us to build a fully automated release pipeline that significantly reduced our deployment lead time and mitigated risk.
Releases
At Trellis, we maintain three deployed environments: develop, staging, and
production. Our trunk-based development workflow deploys code to develop
first, where QA validation occurs, before promoting changes through staging
and finally to production. Throughout 2024 and most of 2025, deployments required manual intervention: a team member had to verify each Linear issue and promote code to the next environment via release branches.
Our new release pipeline automates the majority of this workflow. Changes tagged as T0 or T1 (low-risk) now progress automatically through every environment to production without manual intervention. This enables faster, more incremental deployments and shortens the time it takes completed work to reach users.
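The promotion gate behind this can be sketched as a single predicate over the issues in a release. The type and function names below are illustrative, not our actual pipeline code; only the T0/T1 tier names come from our workflow:

```typescript
// Risk tiers attached to each Linear issue included in a release.
type RiskTier = "T0" | "T1" | "T2" | "T3";

interface ReleaseCandidate {
  issueId: string;
  tier: RiskTier;
}

// A release auto-promotes to the next environment only when every
// included change is low-risk (T0 or T1); anything riskier still
// requires a human to promote it.
function canAutoPromote(changes: ReleaseCandidate[]): boolean {
  return (
    changes.length > 0 &&
    changes.every((c) => c.tier === "T0" || c.tier === "T1")
  );
}
```

A single T2 or T3 issue in a release batch is enough to hold the whole batch for manual review, which keeps the automation conservative by default.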
Early results are promising: in the first month of 2026, we deployed 35 production releases with an average Lead Time (the time between an issue's first commit and its production deployment) of 20–26 hours for T0 and T1 issues. We expect this to shorten as we improve the speed of our CI/CD pipeline.
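The Lead Time calculation itself is simple; a minimal version, with hypothetical field names standing in for however the timestamps are sourced, looks like this:

```typescript
interface IssueTimeline {
  firstCommitAt: Date;      // first commit referencing the issue
  deployedToProdAt: Date;   // production deployment containing it
}

// Lead Time for Changes for a single issue, in hours.
function leadTimeHours(t: IssueTimeline): number {
  const ms = t.deployedToProdAt.getTime() - t.firstCommitAt.getTime();
  return ms / (1000 * 60 * 60);
}

// Average Lead Time across a set of issues (assumes a non-empty list).
function averageLeadTimeHours(timelines: IssueTimeline[]): number {
  const total = timelines.reduce((sum, t) => sum + leadTimeHours(t), 0);
  return total / timelines.length;
}
```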
Year-over-year changes between 2024 and 2025:
| Environment | 2024 | 2025 | % Change |
|---|---|---|---|
| Production | 344 | 273 | -20.6% |
| Staging | 543 | 480 | -11.6% |
| Develop | 1517 | 1775 | +17.0% |
Production deployments decreased by 20% in 2025, while develop deployments
increased by 17%. This shift can be attributed to our investment in several
major infrastructure projects that required extended development cycles before
reaching production. Key projects included migrating from Fly.io to AWS EKS, and implementing comprehensive smoke testing with k6 and E2E testing with Playwright.
With our new automated deployment system in place, we expect production deployments to increase significantly in 2026.
For more details on our testing philosophy, see our post on Quality at Trellis: Our Vision for Testing and Automation.
Codebase
Breaking down our codebase by file type reveals where development effort is concentrated. TypeScript dominates our codebase since we are built on Angular and Nest.
Our codebase grew significantly in 2025. In addition to the user-facing features we built, we rewrote and rebuilt infrastructure, doubled down on reusable utilities, and introduced new technologies into the repository for testing.
| File Type | 2024 Lines | 2025 Lines | % Change | Description |
|---|---|---|---|---|
| TypeScript | 571,685 | 1,345,025 | +135.3% | Application and test code |
| Snapshots | 157,846 | 324,078 | +105.3% | Jest/Vitest test snapshots |
| JSON | 85,216 | 123,589 | +45.0% | Configuration files |
| HTML | 19,664 | 36,502 | +85.6% | Angular templates |
| Markdown | 16,928 | 31,002 | +83.2% | Documentation |
| GraphQL | 9,430 | 28,913 | +206.6% | Queries, mutations, and fragments |
| SCSS | 7,540 | 11,832 | +56.9% | Sass stylesheets |
Note: All node_modules, generated files, cache directories, and other
temporary files were cleaned before gathering these metrics. The Statistics
plugin in WebStorm was used to generate these numbers at our current HEAD and
the January 2025 HEAD.
Another way to measure repository size is the number of Nx projects, which gives insight into the scope of the codebase.
| Metric | 2024 | 2025 | % Change |
|---|---|---|---|
| Projects | 1022 | 1182 | +15.7% |
Despite significant code cleanup, our Nx project count grew by 15%. This increase stems primarily from introducing k6 and Playwright to our testing infrastructure, each requiring supporting libraries and test suites that we structured as discrete Nx projects.
Issues and Tickets
Linear has been our issue tracking platform for several years. Its focus on speed, quality, and user experience makes it an amazing tool for building Trellis. It also serves as an example of an incredibly well-built application that inspires us to build better software.
| Metric | 2024 | 2025 | % Change |
|---|---|---|---|
| Linear Issues | 2,850 | 4,227 | +48.3% |
The number of completed issues in 2025 was 48% higher than in 2024. As we improved our development process, we were able to break work into smaller, more incremental tasks and complete more of them.
Testing
In 2025, we embarked on a comprehensive rebuild of our testing strategy. This meant we had to ask ourselves: What should be automated? At what levels should tests run? Which test types provide the most value?
Our 2025 testing initiatives included:
- E2E Testing: Implemented Playwright for comprehensive end-to-end test coverage
- Smoke Testing: Integrated k6 smoke tests into our CI/CD pipeline, running after every deployment to develop, staging, and production
- Load Testing: Established a k6-based load testing process that runs quarterly or after major releases, enabling performance benchmarking over time
- Integration Testing: Migrated integration tests to Vitest Fixtures, reducing test setup complexity
| Test Type | 2024 | 2025 |
|---|---|---|
| Load Testing | 0 | 2 |
| Smoke Tests | 0 | 19 |
| E2E Tests | 0 | 244 |
| Integration Tests | — | 9,142 |
| Unit Tests | — | 49,374 |
While we lack 2024 baselines for integration and unit test counts, we know that load testing, smoke testing, and E2E testing were absent from our workflows during the year. Adding these test types to our CI/CD pipeline has drastically increased our release confidence, which is a critical prerequisite for automated deployments.
Major Tooling, Infrastructure, and DevOps Projects
Database Migration
We completed our migration from MongoDB to PostgreSQL, simplifying our infrastructure and providing stronger consistency guarantees for our data.
Infrastructure Consolidation
We migrated our infrastructure to AWS, consolidating services across EKS, RDS, CloudFront, and ElastiCache. This replaced our previous multi-vendor setup spanning Fly.io, Upstash, and Supabase.
Model Cache Implementation
We designed and implemented a Valkey-backed caching layer called the Model Cache. It caches data models that change infrequently but face high request volumes, reducing load on the database and improving response times. The cache integrates directly with our GraphQL APIs and Data Loaders, transparently improving system performance, and our load tests validated that the improvement is significant.
Integration Testing Migration
We migrated our integration tests from a custom Jest setup using Describe factories to Vitest with Fixtures. This migration reduced test boilerplate, improved test isolation, and reduced flaky test runs by having a more deterministic setup process.
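The fixture pattern we moved to can be illustrated outside of Vitest with a framework-free helper (Vitest provides this idea via `test.extend`; the helper below is purely illustrative):

```typescript
// A fixture provides a value to a test and guarantees teardown, even
// when the test body throws — the property that gives fixture-based
// tests their isolation and determinism.
interface Fixture<T> {
  setup: () => Promise<T>;
  teardown: (value: T) => Promise<void>;
}

async function withFixture<T, R>(
  fixture: Fixture<T>,
  testBody: (value: T) => Promise<R>,
): Promise<R> {
  const value = await fixture.setup();
  try {
    return await testBody(value);
  } finally {
    await fixture.teardown(value); // always runs, pass or fail
  }
}
```

Compared with `describe`-factory setup, each test declares exactly the fixtures it uses, and cleanup is enforced by structure rather than by convention.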
Smoke Testing Automation
We implemented automated smoke testing using k6, with tests running after every deployment to each environment. These tests verify that critical functionality remains operational once deployed.
Load Testing Process
We established a load testing process using k6 that runs quarterly or after major releases. This regular cadence enables us to benchmark system performance over time and detect performance regressions.
E2E Testing Infrastructure
We implemented Playwright for end-to-end testing, including a custom Nx hasher that determines which tests need to run based on the code changes we explicitly care about. This optimization prevents unnecessary test execution across multiple browser and device configurations, significantly reducing CI time.
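The idea behind that hasher can be sketched as a mapping from changed files to the E2E suites whose declared inputs contain them. The suite names and paths below are hypothetical, and real Nx hashers work on project graph inputs rather than raw prefixes:

```typescript
// Each E2E suite declares the source roots it actually depends on.
// A suite runs only when a changed file falls under one of its roots.
const suiteInputs: Record<string, string[]> = {
  "e2e-checkout": ["apps/storefront/", "libs/payments/"],
  "e2e-admin": ["apps/admin/", "libs/shared-ui/"],
};

function affectedSuites(changedFiles: string[]): string[] {
  return Object.entries(suiteInputs)
    .filter(([, roots]) =>
      changedFiles.some((file) => roots.some((root) => file.startsWith(root))),
    )
    .map(([suite]) => suite);
}
```

A documentation-only change selects no suites at all, which is where most of the CI-time savings come from across browser and device matrices.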
DORA Metrics
We started tracking DORA metrics to measure our delivery performance. We now track Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Mean Time to Recover. These metrics provide objective data for identifying bottlenecks and measuring improvement over time.
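Three of the four metrics reduce to simple arithmetic over deployment records. A minimal sketch, with hypothetical field names for how deployments and incidents are recorded:

```typescript
interface Deployment {
  deployedAt: Date;
  failed: boolean;       // did this deploy cause an incident?
  restoredAt?: Date;     // when service was restored, if it failed
}

interface DoraSnapshot {
  deploymentFrequencyPerWeek: number;
  changeFailureRate: number;        // 0..1
  meanTimeToRecoverHours: number;   // NaN when nothing failed in the period
}

function doraSnapshot(deploys: Deployment[], periodDays: number): DoraSnapshot {
  const failures = deploys.filter((d) => d.failed);
  const recoveryHours = failures
    .filter((d) => d.restoredAt)
    .map((d) => (d.restoredAt!.getTime() - d.deployedAt.getTime()) / 36e5);
  return {
    deploymentFrequencyPerWeek: deploys.length / (periodDays / 7),
    changeFailureRate: deploys.length ? failures.length / deploys.length : 0,
    meanTimeToRecoverHours:
      recoveryHours.reduce((a, b) => a + b, 0) / recoveryHours.length,
  };
}
```

Lead Time for Changes is the remaining metric; it is computed per issue from the first-commit and deployment timestamps rather than from deployment records alone.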
What's Next
2025 was about building the foundation of an automated development process; 2026 is about leveraging it. The automated release pipeline we built is already showing results, with our average Lead Time dropping to 20–26 hours for low-risk changes. As we continue to optimize our CI/CD pipeline via Nx Agents and expand our automated deployment coverage, we expect that number to keep dropping.
The comprehensive testing strategy we established, from unit tests up through load testing with k6, gives us the confidence to move faster without breaking things. With 49,374 unit tests, 9,142 integration tests, 244 E2E tests, and automated smoke testing on every deployment, we have layers of safety nets that catch issues before they reach production. This foundation allows us to continue our incremental, atomic workflow while shipping value more frequently.
Our infrastructure consolidation on AWS and the migration to PostgreSQL simplifies our operations and provides a stable platform to build on. The Valkey-backed Model Cache we implemented has already proven its value in our load tests, reducing database load and improving response times under stress. Thanks to the load testing system we built, we validated these improvements before enabling the cache in production.
Looking at the numbers, our productivity gains are clear: 48% more completed issues. What matters most, though, is that we achieved this while maintaining a stable system throughout the year, including during the busiest period for Trellis and our customers.
We're not done yet. There is still work to do on expanding our E2E test coverage, further optimizing our CI/CD pipeline, and continuing to refine our automated deployment process. But we're moving in the right direction, and we're excited to see what we can achieve with this foundation in place and report back at the end of 2026!
Want to learn more about how we develop at Trellis? Check out these related posts: