Created: February 17, 2026


Vadim Shmakov, Tech Lead


AI Adoption in Development: From Scripts to Multi-Agent Systems


What problem were we solving?

This article focuses less on "how to choose the right model" and more on how we gradually built a multi-agent system that helps deploy features end-to-end across a multi-repository product.

The original problem was straightforward: scale development across multiple repositories without drowning the team in context-switching, process overhead, and manual synchronization.

The pain: multiple repositories, repetitive work, and manual coordination

Here's what we were dealing with from the start:

  • Multiple repositories within a single product.
  • The same repetitive steps every time a new feature was kicked off.
  • Manual synchronization between code, specs, and team agreements.
  • Heavy code reviews, especially when changes touched multiple components.

At this stage, AI wasn't even on our radar. We had one goal: reduce the number of routine decisions developers had to make every day.

First step: plain engineering automation

Here's what we did first:

  • Scripts for generating templates and boilerplate.
  • A unified feature structure across repositories.
  • Navigation utilities to quickly understand where things lived and where to add new code.
  • Automation of standard pre-PR steps: checks, formatting, description preparation.

Essentially, we encoded the answers to these questions in scripts:

"What's the right way to create a new feature here?" "How do we correctly distribute changes across services?" "How do we prepare a PR so it can be reviewed efficiently?"

In hindsight, we realized those scripts already embedded behavioral policies, rules, and constraints, just without an LLM.
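To make this concrete, a scaffolding script of this kind can be sketched as follows. Everything here is illustrative: the directory names, templates, and `scaffold_feature` helper are hypothetical stand-ins for whatever layout a team has standardized, not our actual tooling.

```python
"""Minimal sketch of a feature-scaffolding script; layout names are illustrative."""
from pathlib import Path

# The unified feature structure enforced across repositories
# (subdirectory names and file templates are hypothetical).
FEATURE_LAYOUT = {
    "api": "# HTTP handlers for {name}\n",
    "service": "# Business logic for {name}\n",
    "tests": "# Unit tests for {name}\n",
}

def scaffold_feature(repo_root: str, name: str) -> list[str]:
    """Create the standard skeleton for a new feature and return the created paths."""
    created = []
    base = Path(repo_root) / "features" / name
    for subdir, template in FEATURE_LAYOUT.items():
        target = base / subdir / f"{name}.py"
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(template.format(name=name))
        created.append(str(target))
    return created
```

The point of a script like this isn't the code itself; it's that the answer to "what's the right way to create a feature here?" lives in one place instead of in each developer's head.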

Second step: LLM as an interface to those rules

When LLM-based tools became available, we didn't go all-in on "autocomplete everything." Instead, we made the model an interface to our existing automation and style guides, not a "smart black box" that figures everything out on its own.

Core principles:

  • Source of truth = code and configs, not model responses
  • Architectural decisions and constraints are defined upfront
  • LLMs operate strictly within defined boundaries and trigger the right workflows

In practice, it looked like this:

  1. Developer describes intent (what they want to do).
  2. Agent understands project context (repositories, services, architecture).
  3. Agent calls the right scripts/patterns rather than "inventing the world from scratch."

For the team, this didn't feel like "AI magic." It felt like a convenient layer on top of familiar processes.
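Conceptually, that "interface" layer can be sketched as a registry of pre-approved tools the model is allowed to trigger; anything outside the registry is refused. The tool names and handlers below are hypothetical, not our actual automation.

```python
"""Sketch of constraining an LLM to pre-approved workflows; names are illustrative."""

# Existing automation the model may invoke. The model chooses *which* tool
# to call, but it never defines new actions on its own.
APPROVED_TOOLS = {
    "scaffold_feature": lambda repo, name: f"scaffolded '{name}' in {repo}",
    "prepare_pr": lambda repo: f"ran checks and drafted PR description for {repo}",
}

def dispatch(tool_call: dict) -> str:
    """Route a model-produced tool call to real automation, or refuse it."""
    name = tool_call["name"]
    if name not in APPROVED_TOOLS:
        # Unknown tools are rejected: the source of truth is code and configs,
        # not model output.
        raise ValueError(f"tool {name!r} is outside the approved workflow")
    return APPROVED_TOOLS[name](**tool_call.get("args", {}))
```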

The natural evolution toward multi-agent architecture

As use cases grew, a single "universal" agent became unmanageable. It was simultaneously trying to plan, make architectural decisions, write code across different domains, and verify results. We'd hit the classic "god object" problem, just in AI agent form.

The logical move was to split responsibilities, not by abstract roles, but into separate execution agents, each running in its own context and operating independently. We ended up with specialized agents for:

Planning & architecture

Planner / Architect: builds a coherent plan of changes across all affected repositories and defines boundaries, contracts, and implementation order

Development

  • Frontend Developer: UI changes and client-side logic.
  • Backend Developer: core server-side code.
  • Legacy Backend Developer: adapts existing legacy services.
  • Plugin Developer: integrations, extensions, external connection points.

Quality

AQA: writes and maintains e2e tests and verifies behavioral system invariants.
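One way to make such role boundaries machine-checkable is to declare each agent's ownership explicitly, so a proposed change can be validated against its scope. The role names below mirror this article; the path prefixes are hypothetical.

```python
"""Sketch of role ownership as data; path prefixes are illustrative."""
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRole:
    name: str
    owns: tuple  # path prefixes this agent is allowed to modify

ROLES = {
    "frontend": AgentRole("Frontend Developer", owns=("web/",)),
    "backend": AgentRole("Backend Developer", owns=("services/core/",)),
    "legacy_backend": AgentRole("Legacy Backend Developer", owns=("services/legacy/",)),
    "plugin": AgentRole("Plugin Developer", owns=("plugins/",)),
    "aqa": AgentRole("AQA", owns=("e2e/",)),
}

def may_modify(role_key: str, path: str) -> bool:
    """Check a proposed change against the role's declared ownership."""
    return any(path.startswith(prefix) for prefix in ROLES[role_key].owns)
```

With ownership expressed as data rather than convention, "agent stays in its lane" becomes a check you can run, not an agreement you hope holds.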

The outcome: near full-cycle feature implementation

In the end, each agent owns a specific part of the workflow, and the whole stream, from architectural planning to final verification, becomes an almost end-to-end feature implementation with minimal human involvement.

When did we know the system was actually working? When we reached the point where a feature could move through almost the entire cycle with minimal human intervention:

  1. Input: spec / user story / product requirement
  2. Automatic decomposition:
    - Identifies affected services and repositories
    - Distributes work by role
  3. Change planning:
    - What to add or modify in each service
    - Which contracts or migrations are affected
  4. Code generation and implementation:
    - Code written in the correct style and structure from the start
    - Accounts for the specifics of each service
  5. Tests and checks:
    - Generates or updates tests
    - Runs linters, tests, and basic scenarios
  6. Documentation and PR:
    - Updates docs and contracts
    - Writes a meaningful PR description

Human role:

  • Sign off on architectural and product decisions
  • Review critical areas
  • Make the final call on the merge

The result: fewer arbitrary decisions, fewer diverging approaches, more reproducibility.

Side effects: product and support teams win too

We initially targeted only the engineering workflow, but benefits for other roles showed up quickly.

For Product Owners:

  • Fast pipeline from "rough idea" to formalized user story.
  • Hypothesis validation before entering the development cycle.
  • API/flow prototypes available before full implementation.

For support:

  • Agents that understand the architecture and know common error patterns.
  • Automatic classification and prioritization of incoming requests.
  • Fewer escalations to developers and faster incident response.

What it takes to get here

From the outside, our journey might look like "we just plugged AI into our development process." In reality, it worked because we'd already reached a solid baseline of engineering maturity before we started seriously adopting AI.

Engineering practices and discipline

What turned out to be critical for us:

  • Unified engineering standards: STYLE_GUIDEs at multiple levels (root, backend, frontend, services)
  • Formalized roles and expectations: AGENTS files describing who owns what and how decisions get made
  • A strict stance on infrastructure debt: automated checks, linters, CI/CD pipelines that can't be quietly bypassed
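As an illustration of the second point, an AGENTS file in this spirit might contain entries like the following. The contents are hypothetical, sketched to show the shape of such a file rather than to reproduce ours.

```markdown
# AGENTS (illustrative excerpt)

## Backend Developer
- Owns: services/core/
- Must follow: backend/STYLE_GUIDE.md
- May not: change public API contracts without Planner sign-off

## AQA
- Owns: e2e/
- Must: keep e2e suites green on every cross-service change
```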

Without this foundation, AI would have just scaled the mess, accelerating unmanageable decisions rather than reproducible processes.

Experience with multi-repository architecture

Multi-repo architecture is non-trivial on its own. It requires experience in:

  • Building boundary contracts between services
  • Separating ownership across repositories and teams
  • Managing cross-cutting changes (when a single feature touches the API, background jobs, and frontend simultaneously)

We invested early in:

  • Agreeing on where interfaces and contracts live
  • Standardizing feature structure across services
  • Formalizing the workflow for cross-service changes

Only on top of that foundation could we train agents to respect boundaries and not break neighboring systems.

Automation and developer experience

Another distinct layer was developer experience and operational automation:

  • The ability to spot recurring actions and turn them into scenarios or tools
  • The habit of defining a process before automating it
  • Experience shipping internal tooling that developers actually use rather than work around

We already had experience building internal developer tools before AI entered the picture. LLMs weren't a replacement for that expertise, but rather the next step. We essentially swapped the interface (from CLI/scripts to conversations with an agent) without abandoning our core principles.

Architectural thinking and a willingness to formalize decisions

Our multi-agent model holds together because:

  • Architectural decisions aren't locked inside senior engineers' heads; they're documented.
  • We have a culture of explaining why, not just what.
  • We're committed to keeping AGENTS/STYLE_GUIDE/docs updated as the architecture evolves.

This is critical: without it, AI agents either start making architectural decisions on their own (dangerous) or constantly trip over implicit constraints that only humans know about.

Change management and working with people

Finally, an important and often underestimated piece: change management.

  • Helping the team understand that AI is an amplifier, not a replacement
  • Agreeing on boundaries: what humans do, what agents do, and what scripts handle
  • Setting up feedback loops: how developers report cases where the agent got it wrong, and how we factor that into the next iteration

Our AGENTS/STYLE_GUIDEs and multi-agent architecture aren't just technical artifacts. There's also a team-wide agreement on how we work with AI, so it helps rather than gets in the way.

What we've concluded

In short, here's what we've taken away:

  • AI adoption is an engineering and organizational process, not a model-selection exercise.
  • Without baseline discipline, AI just amplifies chaos.
  • The healthiest entry point is developer productivity and routine tasks, not "automate everything."
  • When engineering wins, product and support win downstream.
  • Multi-agent architecture is a natural response to growing complexity.

Looking ahead

The next logical step is a dedicated agent that owns unit and integration testing as its own domain of responsibility, not as a side activity of the development process. Right now, tests are partially created by developers and partially by AQA at the e2e level.

Going forward, we want a dedicated agent that:

  • Designs the structure of unit and integration tests.
  • Monitors coverage and invariants at the module level.
  • Updates tests when contracts change.
  • Identifies degradation of architectural assumptions before it reaches the e2e stage.

This would essentially be an "internal quality owner," an agent that operates not at the feature level but at the level of system stability.

Concluding

Our path to a multi-agent system didn't start with "Let's bolt AI onto this." It started with ordinary scripts and a desire to make development manageable.

AI became the next layer on top of already established practices, not a replacement for engineering judgment, but an amplifier of it. In that configuration, it turned out to be stable, scalable, and genuinely useful for the business, not a one-time experiment.
