2025∙Enterprise∙B2B∙AI∙Product Implementation∙Oracle

Tanglewood

Slashing consultant costs with AI

Black-and-white line illustration of a human and an AI system facing one another, representing collaboration before structure or solutions are defined.

Overview

Tanglewood is a strategic concept—not a shipped product, but a serious attempt to rethink a problem that enterprise software has been getting wrong for decades. It's here because the way I approach complex, ambiguous design problems matters as much to me as the artifacts I produce.

Minimal line drawing of a human and a robot standing side by side, looking toward an open horizon, suggesting shared orientation and intent.

The Problem Space

Enterprise platforms often fail before users ever reach day one.

Across finance, HR, and operations, onboarding is typically a long, fragmented process that relies heavily on spreadsheets, tribal knowledge, and UX that assumes users already know what they're configuring. The standard design response—forms, wizards, validation rules—makes the same mistake. It rushes toward structure before understanding exists. In most enterprise environments, the gap between organizational intent and system configuration is filled by external consultants.

They translate, interpret, reconcile contradictions, and document decisions. It works, but it's expensive, time-bound, and the knowledge walks out the door when the engagement ends.

AI gets introduced too late, framed as autofill or help text: a finishing touch on a broken process rather than a rethink of the process itself.

Line illustration of a human surrounded by fragmented documents and disconnected workflows, with an AI observing calmly, representing enterprise onboarding complexity.

Strategic Reframe

The premise Tanglewood starts from is simple: enterprise onboarding isn't a setup task. It's a translation problem.

Organizations are trying to translate policies into structures, language into data, and human intent into system behavior. Starting with screens presupposes that translation has already happened. It hasn't.

Tanglewood starts with conversation instead. The system's first job isn't to validate inputs. It's to help users articulate what they mean, surface the assumptions being made, and understand the consequences before anything is committed.

AI acts as an orchestrator here, not a shortcut. Progress happens through dialogue, not clicks.

Black-and-white illustration of a human and a robot jointly aligning a large abstract shape, symbolizing a shift from configuration to shared understanding.

High-Level Flow

At its highest level, Tanglewood operates as a continuous loop, not a one-time setup sequence.

The flow is designed to:

  • Establish shared understanding

  • Make assumptions visible

  • Preserve decision context over time

Rather than guiding users through a fixed wizard, Tanglewood cycles through three repeating states:

  • Express – users articulate intent in human terms

  • Interpret – the system translates intent into structured meaning

  • Align – humans and AI converge on a shared configuration

This loop may run multiple times before anything is finalized.
Nothing is committed by default.

The Tanglewood Concept Model


1. Input

Users provide whatever materials already exist:

  • Documents

  • Spreadsheets

  • Written explanations

  • Partial data

Tanglewood assumes inputs are incomplete and contradictory.

That is treated as normal, not erroneous.

2. Synthesis

The AI agent:

  • Analyzes provided materials

  • Drafts an initial enterprise blueprint

  • Surfaces assumptions explicitly

  • Flags uncertainty and missing context

The system explains its reasoning in plain language.

3. Review

Users examine the draft blueprint and can:

  • Accept or reject assumptions

  • Edit outputs directly

  • Ask clarifying or hypothetical questions

This phase is optimized for sense-making, not speed.

4. Refinement

As feedback is incorporated:

  • The configuration evolves

  • Rationale is updated

  • A change history is maintained

Understanding deepens with each iteration.

5. Validation

Only after alignment is reached does the system move toward:

  • Formal validation

  • Deployment readiness

  • Ongoing maintenance

At this stage, decisions are explicit and ownership is clear.
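Read as a state machine, the five phases form a loop rather than a line: Refinement feeds back into Review until humans and system converge, and only then does Validation become reachable. A minimal sketch, with illustrative names only:

```python
from enum import Enum, auto

class Phase(Enum):
    INPUT = auto()
    SYNTHESIS = auto()
    REVIEW = auto()
    REFINEMENT = auto()
    VALIDATION = auto()

def next_phase(phase: Phase, aligned: bool) -> Phase:
    """Advance through the concept model; Validation is gated on alignment."""
    if phase is Phase.INPUT:
        return Phase.SYNTHESIS
    if phase is Phase.SYNTHESIS:
        return Phase.REVIEW
    if phase is Phase.REVIEW:
        return Phase.REFINEMENT
    if phase is Phase.REFINEMENT:
        # Loop back to Review until humans and the system have converged.
        return Phase.VALIDATION if aligned else Phase.REVIEW
    return Phase.VALIDATION
```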

Minimal illustration of a human and an AI focused on a circular loop, representing continuous collaboration and iterative alignment rather than linear setup.

Artifact 1 of 4 – Business Narrative Document

The Business Narrative Document captures how the organization believes it operates before that understanding is forced into system structures.

It accepts inputs in human terms: policies, goals, constraints, exceptions, and unresolved questions. Ambiguity and contradiction are welcome here, because they need to exist long enough to be examined before the system is allowed to act on them. Premature formalization is one of the most common ways enterprise implementations go wrong. Push messy, contradictory organizational reality into predefined fields too early and you distort intent.

So, this document stays permissive by design. Non-technical stakeholders get a first-class voice before the system imposes its own structure on what they're trying to say. In Tanglewood, this narrative becomes the primary input for AI interpretation, a living source of intent that can be revisited, challenged, and refined as understanding deepens.

It is a foundation to reason from, not an output to finalize.
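One way to picture that permissiveness is as a data structure with no validation at all: loose labels, free text, and room for contradictions to coexist. This is a hypothetical sketch, not a real schema:

```python
from dataclasses import dataclass, field

@dataclass
class NarrativeItem:
    """Free text plus a loose label; nothing is validated or normalized."""
    label: str  # e.g. "policy", "goal", "constraint", "exception", "open question"
    text: str

@dataclass
class BusinessNarrative:
    items: list[NarrativeItem] = field(default_factory=list)

    def add(self, label: str, text: str) -> None:
        # Contradictions are allowed to coexist; they are examined later
        # in the process, not rejected at the door.
        self.items.append(NarrativeItem(label, text))
```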

Artifact 2 of 4 – Configuration Summary

The Configuration Summary is the system's current understanding of how the organization intends to operate.

Rather than being assembled through forms or wizards, it is generated by the AI agent and continuously updated as the agent interprets narrative inputs, documents, and ongoing feedback. What comes out is human intent translated into structured concepts: entities, rules, relationships, and constraints, with assumptions, uncertainties, and inferred decisions marked explicitly so nothing is hidden in the reasoning.

This visibility matters. Users get something concrete to react to rather than abstract prompts asking them what they want. The system shows its work and invites correction. That's a fundamentally different relationship than most enterprise configuration tools offer, where the burden of translation falls entirely on the user and the system just waits to be told what to do.

The Configuration Summary is a working draft, not a destination. It evolves as understanding deepens, and it's never more authoritative than the conversation that produced it.
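To make "explicitly marked assumptions" concrete, here is a hypothetical sketch of what one element of the summary might carry. All names are illustrative; the point is that provenance and confidence travel with every structured concept:

```python
from dataclasses import dataclass, field
from enum import Enum

class Provenance(Enum):
    STATED = "stated"      # explicit in user inputs
    INFERRED = "inferred"  # deduced by the system from context
    ASSUMED = "assumed"    # a guess awaiting human confirmation

@dataclass
class Element:
    """One structured concept in the summary: an entity, rule,
    relationship, or constraint."""
    kind: str          # e.g. "entity", "rule", "relationship", "constraint"
    description: str
    provenance: Provenance
    confidence: float  # how certain the system is, surfaced to the user

@dataclass
class ConfigurationSummary:
    elements: list[Element] = field(default_factory=list)

    def open_questions(self) -> list[Element]:
        # Everything the system assumed or inferred stays visible for review,
        # so the burden of translation never silently shifts to the user.
        return [e for e in self.elements if e.provenance is not Provenance.STATED]
```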

Artifact 3 of 4 – Decision & Change Log

The Decision & Change Log captures both how and why understanding evolves over time.

As assumptions are accepted, revised, or rejected, the system records the reasoning behind each shift, not just the state transition. Context, rationale, and ownership travel with every entry. The result is a durable record of how decisions actually got made, which turns out to be exactly what organizations lose when an implementation ends and the consultants who held all that context in their heads move on to the next engagement.

Most audit logs and version histories are written for systems, not people. They tell you what happened, in sequence, in terms the database understands. This log is designed to be read by humans who need to understand why something is the way it is, especially when circumstances change and someone has to revisit a decision made two years ago by people who are no longer in the room.

The log grows alongside the system, continuously available, so institutional knowledge stays inside the organization where it belongs.
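The difference from a machine-oriented audit log comes down to what an entry carries. A minimal sketch, with hypothetical field names, of a log entry built for human readers:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionEntry:
    """One entry in the log: the state change plus the human context
    that usually walks out the door when an engagement ends."""
    timestamp: datetime
    decision: str   # what changed, e.g. "approval threshold set to two signers"
    rationale: str  # why, in plain language a future reader can follow
    owner: str      # who is accountable for the decision

def record(log: list[DecisionEntry], decision: str, rationale: str, owner: str) -> None:
    # Entries are append-only: the history of how understanding evolved
    # is preserved, not overwritten.
    log.append(DecisionEntry(datetime.now(timezone.utc), decision, rationale, owner))
```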

Artifact 4 of 4 – AI Narration / Reasoning Trail

The AI Narration / Reasoning Trail surfaces what the system is considering as it works: the inputs it is weighing, the interpretations it is forming, and the assumptions behind them, narrated in plain language rather than hidden inside the model.

Why This Matters

Tanglewood is not a faster setup flow. It is a shift in how enterprise systems come into being.

Most enterprise onboarding fails because the interpretive work of translating human intent into system structure is fragmented, expensive, and temporary.

That gap is traditionally filled by external consultants who gather context, reconcile contradictions, and translate policy into configuration. This works, but it's costly, hard to scale, and the knowledge leaves when the engagement ends.

Tanglewood internalizes that work.

By embedding sensemaking, interpretation, and documentation directly into the system, Tanglewood enables organizations to:

  • Reduce dependence on long-running consulting engagements

  • Preserve institutional knowledge over time

  • Make configuration decisions explicit, inspectable, and revisitable

  • Adapt continuously as the organization evolves

This reframes onboarding as a collaborative process, not a hurdle to clear, replacing episodic, high-cost intervention with durable, system-owned understanding. And it establishes a model where AI strengthens decision-making rather than obscuring it.

The result is not just a configured system, but an organization that actually understands what it built, why it made the decisions it made, and how to revisit them when things change.

Line drawing of a human and an AI observing a stable, balanced structure, symbolizing clarity, durability, and reduced dependency on external intervention.

Design Principles

Tanglewood is guided by a small set of deliberate design principles:

  • Start with meaning, not mechanics

  • Documents before dashboards

  • Visibility over automation

  • Conversation before configuration

  • Humans stay accountable; AI does the heavy lifting

These principles shape not just UI decisions, but the sequence in which the system reveals itself to the user.

Rather than asking users to "fill out everything correctly," Tanglewood surfaces what the system believes to be true and invites users to correct it.

Minimal illustration of a designer thoughtfully observing abstract symbols, representing reflection, intentionality, and design judgment.