
When I talk to managing directors about AI implementation, I almost always hear the same sentence: “That would take at least half a year for us.” Sometimes a year. Sometimes “we’re not ready yet.”
I understand this caution. If you look at experience reports from enterprise AI projects, you’ll find timelines of 12 to 24 months, six-figure budgets, and a success rate hovering around 50 percent. According to a McKinsey study, roughly 70 percent of all digital transformation projects fail — not because of the technology, but because of the execution.
The truth is: these numbers describe projects that were set up wrong. Too broad, too abstract, too far removed from day-to-day operations. If instead you take a concrete process, understand it, automate it, and then layer AI on top, you don’t need 12 months. Six weeks is enough for the first measurable result.
Not six weeks to the perfect solution. Six weeks to a working system that takes over real work and delivers real numbers. That’s a relevant distinction.
In this article, I present the roadmap that we use at Exasync ourselves and offer to our clients. Six weeks, each with concrete tasks, deliverables, and the pitfalls we know from firsthand experience.
Before you can start week 1, you need four things. Without these prerequisites, you’ll lose the first two weeks to organizational overhead — and that’s one of the most common reasons AI projects blow their timeline.
If these four points are in place, you’re good to go. If not, invest one or two weeks in preparation rather than having idle time later in the project.
The first week has a single goal: Understand what actually happens. Not what the org chart says, not what the process documentation claims — but what employees actually do every day.
A current-state report with: process description (step by step), system landscape map, time measurements, bottleneck analysis, and volume numbers. Length: 10 to 15 pages, no novel.
Access to the involved employees (about 4 to 6 hours of their time), system access, a contact person on the client side.
Employees describe the process as it should be — not as it is. That’s why shadowing is so important. Between what someone explains in a meeting and what they actually do at their screen, there are often worlds of difference. The workarounds, the side Excel files, the manual corrections — all of that only comes out when you watch.
After week 1, you have a detailed picture of the current state. Now you need to decide: Which process gets automated first? This decision is more important than many think. The wrong candidate can jeopardize the entire project.
Prioritized process list with scoring, architecture sketch for the first automation candidate, technical feasibility confirmation (or a reasoned “Not possible because...”).
API documentation for the involved systems, test access to staging environments (if available), 2 to 3 hours with the project owner for prioritization.
Choosing the process that sounds most “exciting” instead of the one that delivers the most value. AI implementation is not an innovation project — it’s an efficiency project. Save the fancy use cases for later and automate the most boring, repetitive process you have first. The ROI there is almost always the highest.
Now we build. Weeks 3 and 4 are the technical core of the project. This is where the automation that will later do the actual work takes shape.
Working automation workflow (running in parallel with the manual process), test report with processing rate and error log, documented fallback paths.
Development environment (n8n instance, database access, API keys), test data from the live system, regular exchange with the department (30 minutes daily is enough).
Perfectionism. Weeks 3 and 4 are not for covering 100 percent of all cases. They’re for automating the main load and having a clean escalation path for the rest. Anyone who tries to handle every exception in the first version needs 6 months, not 6 weeks.
A second classic: The API documentation doesn’t match reality. Build in buffer for systems whose interfaces work differently than documented. That’s more the rule than the exception.
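Because interfaces so often behave differently than documented, it pays to wrap every external call in a retry-then-fallback pattern from day one. Here is a minimal Python sketch; the callable names, retry count, and backoff are illustrative assumptions, not part of any specific n8n setup:

```python
import time

def call_with_fallback(primary, fallback, retries=3, delay=1.0):
    """Try the primary API call; after `retries` failures, escalate.

    `primary` and `fallback` are plain callables here. In a real workflow
    they would wrap the actual system integration and the documented
    fallback path (e.g. a manual-processing queue).
    """
    last_error = None
    for attempt in range(retries):
        try:
            return primary()
        except Exception as exc:  # production code would catch specific errors
            last_error = exc
            time.sleep(delay * (attempt + 1))  # simple linear backoff
    # Retries exhausted: hand over to the fallback instead of failing silently.
    return fallback(last_error)
```

The point is not the retry logic itself but that the fallback path exists and is exercised from the first test run, so “the API behaved unexpectedly” never means lost transactions.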
This surprises many: In a project called “AI implementation,” the artificial intelligence doesn’t come in until the second-to-last week. There’s a good reason for this.
Putting AI on a chaotic process is like installing a navigation system in a car without a steering wheel. The automation from weeks 3 and 4 creates the foundation: clean data, defined workflows, working integrations. Only now does it make sense to integrate GPT, Claude, or another language model.
AI-enhanced workflow with defined decision points, tested prompts, confidence thresholds, and documented escalation path.
API access to an AI model (OpenAI, Anthropic, or a self-hosted model), budget for API costs during the testing phase (typical: EUR 50 to 200), at least 100 real test cases for validation.
Letting the AI decide too much. In the first version, AI should only be used where it provides a clear advantage. Everything that can be solved with simple rules (if/then/else) doesn’t need a language model. AI API calls cost money and time — every unnecessary call worsens both the economics and the response times.
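This split can be sketched as a small routing function: deterministic rules first, the language model only for the ambiguous remainder, and a human below the confidence threshold. Everything here is illustrative. The rules, the 0.8 threshold, and `classify_with_llm` (a stub standing in for the actual GPT/Claude call) are assumptions, not a reference implementation:

```python
CONFIDENCE_THRESHOLD = 0.8  # assumption: tuned on ~100 real test cases in week 5

def classify_with_llm(ticket):
    """Stub for the actual language-model call; returns (label, confidence)."""
    # A real implementation would call the OpenAI or Anthropic API here.
    return "refund_request", 0.65

def route(ticket):
    # 1. Cheap deterministic rules first: no API call, no cost, no latency.
    if ticket.get("type") == "spam":
        return "discard"
    if ticket.get("type") == "invoice" and ticket.get("amount", 0) < 100:
        return "auto_approve"
    # 2. Only the ambiguous remainder reaches the language model.
    label, confidence = classify_with_llm(ticket)
    # 3. Below the threshold, escalate to a human instead of guessing.
    if confidence < CONFIDENCE_THRESHOLD:
        return "human_review"
    return label
```

The ordering is the economics: every case the rules catch is an API call you never pay for, and every low-confidence case that lands with a human is a wrong decision you never have to undo.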
Week 6 is the week the system goes into production. It is also the week that gets underestimated most often.
Production system with monitoring, documented operating manual, ROI report with before-and-after comparison, recommendation for next automation steps.
Hosting infrastructure for continuous operation, defined escalation path (Who handles cases the AI can’t solve?), 2 hours for the ROI presentation with management.
Treating go-live as the end of the project. Week 6 is the beginning, not the end. Every automated system needs ongoing maintenance: model updates, adjustments to changed input data, performance optimization. Plan for 4 to 8 hours monthly for maintenance — or have your AI partner handle it.
Once the first process is running and the ROI is proven, three paths are open:
At Exasync, we typically accompany clients through 3 to 5 automation cycles in the first year. Each cycle builds on the previous ones, leverages the existing infrastructure, and gets faster and cheaper than the one before.
A typical 6-week project falls within this range:
Current-state analysis (Weeks 1-2): EUR 2,000 – 5,000. Depending on process complexity.
Automation (Weeks 3-4): EUR 4,000 – 12,000. Main cost driver: number of system integrations.
AI integration (Week 5): EUR 1,500 – 4,000. Depending on decision points.
Monitoring + go-live (Week 6): EUR 1,000 – 3,000. Including dashboard and alerting.
Total (one-time): EUR 8,500 – 24,000.
Ongoing costs (monthly): EUR 500 – 1,500 for hosting, AI API, monitoring, maintenance.
These numbers are realistic for a mid-market company with a clearly defined process. If you need to integrate three systems, you’re closer to the upper end. If it’s “just” email processing with one target system, closer to the lower end.
For comparison: at EUR 50 fully loaded cost per hour, a single employee who manually handles the same process 20 hours per week costs you EUR 4,000 per month, or EUR 48,000 per year. The break-even for a EUR 15,000 project is therefore under 4 months.
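The back-of-the-envelope calculation, assuming roughly four working weeks per month and the mid-range figures from above:

```python
hourly_cost = 50          # EUR, fully loaded cost per hour
hours_per_week = 20       # manual effort the automation replaces
project_cost = 15_000     # EUR, one-time (mid-range of the estimate above)
ongoing_monthly = 1_000   # EUR, assumed midpoint of the EUR 500-1,500 range

monthly_saving = hourly_cost * hours_per_week * 4   # ~4 working weeks/month
gross_break_even = project_cost / monthly_saving
net_break_even = project_cost / (monthly_saving - ongoing_monthly)
```

`monthly_saving` comes out at EUR 4,000 and `gross_break_even` at 3.75 months, matching the figure in the text; netting out the ongoing costs stretches break-even to about five months, still comfortably inside the first year.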
API change at the source system (Probability: Medium, Impact: High): Versioned API calls, monitoring for error rate spikes, fallback to manual processing.
Insufficient data quality (Probability: High, Impact: Medium): Validation layer before processing, cleanup scripts, escalation for unknown formats.
AI hallucinations / wrong decisions (Probability: Medium, Impact: High): Confidence scoring, human review at low scores, regular spot-check audits.
Employee resistance (Probability: Medium, Impact: Medium): Early involvement, demonstrate benefits (“less typing work”), don’t communicate it as headcount reduction.
Timeline slip due to missing access (Probability: High, Impact: Medium): Secure all access before project start (see checklist above), build in 3-day buffer.
Scope creep (Probability: High, Impact: High): Have a clear scope document signed in week 2, evaluate changes only after week 6.
AI API costs higher than expected (Probability: Low, Impact: Medium): Define token budget per transaction, rule-based pre-sorting (AI only when needed), cost monitoring from day 1.
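A per-transaction token budget with day-one cost monitoring takes only a few lines. The budget cap and the EUR-per-token rate below are placeholders; real prices depend on the model and change over time:

```python
TOKEN_BUDGET_PER_TX = 2_000  # assumption: cap tuned during the test phase

class CostMonitor:
    """Tracks token spend per transaction and flags budget violations."""

    def __init__(self, eur_per_1k_tokens=0.01):  # placeholder rate
        self.rate = eur_per_1k_tokens
        self.total_tokens = 0
        self.over_budget = []  # transaction IDs that need review

    def record(self, tx_id, tokens_used):
        """Log a transaction's token usage; flag it if it blew the budget."""
        self.total_tokens += tokens_used
        if tokens_used > TOKEN_BUDGET_PER_TX:
            self.over_budget.append(tx_id)

    @property
    def total_eur(self):
        # Running cost estimate for the dashboard / alerting.
        return self.total_tokens / 1000 * self.rate
```

Wired into the workflow, this is the difference between noticing a cost spike on day 2 and discovering it on the monthly invoice.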
None of these risks is a project killer — provided you know about them beforehand and have a countermeasure ready. The worst that can happen is a delay of two to three weeks. The most likely outcome: The processing rate sits at 88 instead of 95 percent in week 6, and you need another one to two weeks of fine-tuning. No drama.
Can every company pull this off in six weeks? I’m not going to answer with “Yes, of course.” Instead, here’s the honest assessment.
The 6-week plan works when three conditions are met:
At Exasync, we run our own company on exactly this principle. 50 AI agents, built in iterative cycles, each cycle with a clear focus. Not everything at once, but step by step. The result after just a few months: 95 percent autonomous operation. Not because we had a million-dollar budget, but because we kept each cycle small.
If you want to check whether your process fits the 6-week framework, let’s talk about it. We’ll give you an honest assessment in a 30-minute conversation — including a rough cost estimate and the three concrete steps you should take next.