Case study

Building an AI discovery workflow for a precision manufacturer

How we helped a 200-person manufacturing firm identify and prioritise AI opportunities — without disrupting a single production line.

15 January 2026 · Anuj Garg · manufacturing · discovery · case-study

A placeholder case study. It follows the structure all our real case studies will use once we have client permission to publish.

Context

A 200-person precision manufacturing firm — three facilities, exporting to seven countries, growing 18% year-on-year — came to us with a familiar problem. The board had a mandate to “do something with AI” by the end of the financial year. The CEO knew that mandate was real but underspecified. The CTO knew that most of the AI consulting pitches they had received looked like recycled enterprise transformation decks.

They wanted an honest assessment. They didn’t want shelfware.

The Challenge

Three constraints framed the engagement:

  • Operational continuity. Production lines could not be touched. Any AI work had to happen on the periphery of operations until proven.
  • No greenfield ML team. The firm had a strong IT function but had never shipped a production ML model. We were not going to hire a data science team into the firm in 12 weeks.
  • Board-ready output. Whatever we recommended had to survive a board review that would ask hard questions about ROI, downside risk, and dependency.

Our Approach

We ran a two-week discovery sprint with three workstreams in parallel:

  1. Process mapping. Two of our engineers spent a week on-site, sitting with the floor managers, quality team, and the export logistics group. Output: a 60-page process map with friction points colour-coded by severity.
  2. Data inventory. We catalogued every operational data source — ERP, MES, quality logs, customer correspondence, supplier paperwork — and graded each one on volume, structure, and consent for downstream use.
  3. Opportunity scoring. Against the process map and the data inventory, we ranked 23 candidate AI interventions on four axes: expected ROI, time-to-pilot, technical risk, and organisational change required.

Out of those 23, four cleared the bar. The other 19 we documented with explicit reasons — for several, the honest answer was “AI is the wrong tool, here’s the right one.”
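The scoring pass above can be sketched as a simple ranking harness. The 1–5 axis scales, the equal weighting, the bar value, and the candidate names below are illustrative assumptions, not the actual rubric from the engagement:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """One candidate AI intervention, scored on the four axes."""
    name: str
    expected_roi: int    # 1 (low) .. 5 (high) — higher is better
    time_to_pilot: int   # 1 (slow) .. 5 (fast) — higher is better
    technical_risk: int  # 1 (low) .. 5 (high) — lower is better
    org_change: int      # 1 (small) .. 5 (large) — lower is better

    def score(self) -> int:
        # Benefit axes count up; risk axes are inverted so that
        # low risk and low change requirements raise the score.
        return (self.expected_roi + self.time_to_pilot
                + (6 - self.technical_risk) + (6 - self.org_change))

def shortlist(candidates: list[Candidate], bar: int = 16) -> list[Candidate]:
    """Return the candidates clearing the bar, best score first."""
    cleared = [c for c in candidates if c.score() >= bar]
    return sorted(cleared, key=lambda c: c.score(), reverse=True)

candidates = [
    Candidate("quality triage for incoming materials", 5, 5, 2, 2),
    Candidate("demand forecasting", 3, 2, 4, 3),
]
for c in shortlist(candidates):
    print(f"{c.name}: {c.score()}")
```

In practice the value of the exercise was less the arithmetic than the forcing function: every candidate had to be scored on all four axes, which is what surfaced the 19 explicit rejections.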

The Outcome

The board approved a phased programme starting with the highest-ROI, lowest-disruption candidate: an AI-assisted quality triage workflow for incoming raw materials. We deployed a working pilot to one facility in eight weeks. The CTO’s team owns it now. Their first independent retraining cycle ships next quarter.

The discovery memo — including the 19 rejected ideas — has reportedly become required reading on the executive team. That, more than any single deployed model, is the part of the engagement we’re proudest of.