Tanglewood

A strategic concept for AI-first enterprise onboarding where intent, not consultants or configuration screens, drives the system.

Black-and-white line illustration of a human and an AI system facing one another, representing collaboration before structure or solutions are defined.

Overview

Tanglewood is a strategic concept exploring how enterprise configuration and onboarding could work if designed AI-first, rather than retrofitting AI into legacy workflows.

This case study does not document a shipped product. Instead, it demonstrates how I approach complex systems, reframe entrenched problems, and design toward scalable, human-centered outcomes, especially in environments where AI plays an active role.

Minimal line drawing of a human and a robot standing side by side, looking toward an open horizon, suggesting shared orientation and intent.

The Problem Space

Enterprise platforms often fail before users ever reach “day one.”

Across finance, HR, and operations platforms, onboarding typically involves:

  • Long setup timelines

  • Fragmented ownership across teams

  • Heavy reliance on spreadsheets and tribal knowledge

  • UX that assumes structure before understanding exists

Traditional design approaches rush toward:

  • Forms

  • Step-by-step wizards

  • Dashboards and validation rules

But these solutions presuppose that users already know what they are configuring. In reality, much of the early onboarding phase is about aligning people, policies, and intent before the system can be meaningfully shaped.

In many enterprise environments, this gap is filled by external consultants who translate organizational intent into system configuration. While effective, this approach is expensive, time-bound, and difficult to scale. Much of the knowledge produced during implementation disappears once the engagement ends, forcing organizations to repeat the process during future changes.

AI, when it appears at all, is typically introduced late and framed as:

  • Autofill

  • Recommendations

  • Help text

rather than as a core collaborator in the process.

Line illustration of a human surrounded by fragmented documents and disconnected workflows, with an AI observing calmly, representing enterprise onboarding complexity.

Strategic Reframe

Tanglewood begins with a different premise:
Enterprise onboarding is not a setup task. It is a translation problem.

Organizations are attempting to translate:

  • Policies into structures

  • Language into data

  • Human intent into system behavior

Instead of starting with screens, Tanglewood starts with conversation and narrative.

The system’s first job is not to validate inputs—it is to help users:

  • Articulate what they mean

  • See assumptions made explicit

  • Understand consequences before committing

In this model:

  • AI acts as an orchestrator, not a shortcut

  • Users collaborate with the system instead of feeding it

  • Progress happens through dialogue, not clicks

Tanglewood reframes this dynamic by embedding those translation and sense-making functions directly into the system itself. Rather than relying on temporary external expertise, organizations collaborate with an AI agent that continuously performs the same interpretive work: documenting intent, surfacing assumptions, and maintaining alignment over time.

The result is not faster setup alone, but clearer, more resilient configuration.

Black-and-white illustration of a human and a robot jointly aligning a large abstract shape, symbolizing a shift from configuration to shared understanding.

High-Level Flow

At its highest level, Tanglewood operates as a continuous loop, not a one-time setup sequence.

The flow is designed to:

  • Establish shared understanding

  • Make assumptions visible

  • Preserve decision context over time

Rather than guiding users through a fixed wizard, Tanglewood cycles through three repeating states:

  • Express – users articulate intent in human terms

  • Interpret – the system translates intent into structured meaning

  • Align – humans and AI converge on a shared configuration

This loop may run multiple times before anything is finalized.
Nothing is committed by default.
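The three-state cycle can be pictured as a simple loop that repeats until both sides converge, with commitment kept as a separate, explicit step. The sketch below is illustrative only; every name (`Draft`, `run_loop`, `confirm_with_user`) is hypothetical and stands in for far richer interactions.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Express → Interpret → Align loop.
# Names are illustrative; this is not a real implementation.

@dataclass
class Draft:
    intent: str                      # human-stated intent (Express)
    interpretation: str = ""         # system's structured reading (Interpret)
    aligned: bool = False            # both sides have converged (Align)
    committed: bool = False          # nothing is committed by default

def run_loop(intents):
    """Cycle Express → Interpret → Align until alignment is reached."""
    draft = Draft(intent=intents[0])
    for intent in intents:
        draft.intent = intent                           # Express
        draft.interpretation = f"structured({intent})"  # Interpret
        draft.aligned = confirm_with_user(draft)        # Align
        if draft.aligned:
            break
    # Still uncommitted: committing is a deliberate step outside the loop.
    return draft

def confirm_with_user(draft):
    # Placeholder for human review; a real system gathers feedback here.
    return "final" in draft.intent
```

The point of the sketch is the exit condition: the loop ends on alignment, not on reaching the last screen, and `committed` stays false until someone explicitly decides otherwise.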

The Tanglewood Concept Model


1. Input

Users provide whatever materials already exist:

  • Documents

  • Spreadsheets

  • Written explanations

  • Partial data

Tanglewood assumes inputs are incomplete and contradictory.

That is treated as normal, not erroneous.

2. Synthesis

The AI agent:

  • Analyzes provided materials

  • Drafts an initial enterprise blueprint

  • Surfaces assumptions explicitly

  • Flags uncertainty and missing context

The system explains its reasoning in plain language.

3. Review

Users examine the draft blueprint and can:

  • Accept or reject assumptions

  • Edit outputs directly

  • Ask clarifying or hypothetical questions

This phase is optimized for sense-making, not speed.

4. Refinement

As feedback is incorporated:

  • The configuration evolves

  • Rationale is updated

  • A change history is maintained

Understanding deepens with each iteration.

5. Validation

Only after alignment is reached does the system move toward:

  • Formal validation

  • Deployment readiness

  • Ongoing maintenance

At this stage, decisions are explicit and ownership is clear.
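The five stages above can be read as an explicit gate: validation is only reachable once review and refinement have resolved every surfaced assumption. The following is a hypothetical illustration; the stage names mirror the model, but the logic is invented for clarity.

```python
# Hypothetical sketch of the concept model's gating logic.
# Assumptions stay open until a reviewer resolves them; validation
# is reachable only when none remain (alignment before commitment).

def synthesize(materials):
    """Synthesis: draft a blueprint and surface assumptions explicitly."""
    blueprint = {"source_materials": list(materials), "rules": []}
    assumptions = [f"assumed default policy for {m}" for m in materials]
    return blueprint, assumptions

def review(assumptions, decisions):
    """Review: return the assumptions not yet accepted or rejected."""
    return [a for a in assumptions if decisions.get(a) is None]

def ready_for_validation(open_assumptions):
    """Validation: reachable only after alignment, i.e. nothing open."""
    return len(open_assumptions) == 0
```

The design choice worth noticing is that incomplete input never blocks Synthesis; it simply produces more open assumptions, which the Review and Refinement stages exist to burn down.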

Minimal illustration of a human and an AI focused on a circular loop, representing continuous collaboration and iterative alignment rather than linear setup.

Artifact 1 of 4 – Business Narrative Document

The Business Narrative Document captures how the organization believes it operates—before that understanding is forced into system structures.

Rather than starting with predefined fields or schemas, this document accepts inputs in human terms: policies, goals, constraints, exceptions, and unresolved questions. It is intentionally permissive, allowing ambiguity and contradiction to exist long enough to be examined.

In Tanglewood, this narrative becomes the primary substrate for AI interpretation. The system does not treat it as static documentation, but as a living source of intent that can be revisited, challenged, and refined over time.

Why this matters:

  • Prevents premature formalization

  • Gives non-technical stakeholders a first-class voice

  • Replaces consultant-led discovery with system-owned understanding

  • Establishes a shared reference point before configuration begins

The Business Narrative Document is not an output to finalize—it is a foundation to reason from.
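As a data structure, the narrative might be little more than a permissive list of statements, each allowed to carry unresolved questions or contradictions rather than being rejected at intake. This is purely a sketch; the field names and the toy conflict check are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical shape of a Business Narrative Document.
# Ambiguity and contradiction are first-class data, not validation errors.

@dataclass
class NarrativeEntry:
    text: str                    # policy, goal, or constraint, in plain language
    author: str                  # the stakeholder who contributed it
    unresolved: bool = False     # an open question, kept visible
    contradicts: list = field(default_factory=list)  # conflicting entries

@dataclass
class BusinessNarrative:
    entries: list = field(default_factory=list)

    def add(self, entry):
        # Intake never rejects input; contradictions are recorded, not blocked.
        for other in self.entries:
            if conflicts(entry, other):
                entry.contradicts.append(other.text)
        self.entries.append(entry)

def conflicts(a, b):
    # Toy heuristic for illustration; in Tanglewood's framing, the AI
    # agent would perform this interpretive work.
    return ("no " + b.text) in a.text or ("no " + a.text) in b.text
```

Note that `add` always appends: contradictory statements coexist in the document long enough to be examined, which is exactly the permissiveness the artifact calls for.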

Artifact 2 of 4 – Configuration Summary

The Configuration Summary represents the system’s current understanding of how the organization intends to operate.

Unlike traditional setup screens or configuration wizards, this artifact is not manually assembled by the user. It is generated and continuously updated by the AI agent as it interprets narrative inputs, documents, and ongoing feedback.

The summary translates human intent into structured concepts—entities, rules, relationships, and constraints—while explicitly marking assumptions, uncertainties, and inferred decisions.

Why this matters:

  • Bridges narrative understanding and system structure

  • Makes AI interpretation visible and inspectable

  • Gives users a concrete artifact to react to, not abstract prompts

  • Replaces consultant-produced configuration outputs with system-owned synthesis

The Configuration Summary is not a final state.

It is a working draft that evolves as understanding deepens.
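A minimal sketch of such a summary might mark each structured element with its provenance, so inferred decisions are never silently merged with confirmed ones. All names here are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical Configuration Summary element: structured concepts
# (entities, rules, relationships, constraints) that carry their own
# provenance, keeping AI interpretation visible and inspectable.

@dataclass
class SummaryItem:
    kind: str        # "entity", "rule", "relationship", or "constraint"
    statement: str   # the structured interpretation, in plain terms
    status: str      # "confirmed", "assumed", or "uncertain"
    rationale: str   # why the system believes this

def open_questions(summary):
    """Everything a reviewer still needs to react to."""
    return [i for i in summary if i.status in ("assumed", "uncertain")]
```

Because every item carries a `status` and `rationale`, the summary doubles as a review queue: the user reacts to concrete claims rather than abstract prompts.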

Artifact 3 of 4 – Decision & Change Log

The Decision & Change Log captures how understanding evolves over time.

As assumptions are accepted, revised, or rejected, the system records not just what changed, but why the change occurred. Each entry preserves context, rationale, and ownership, creating a durable record of decision-making.

Unlike traditional audit logs or version histories, this artifact is designed for human comprehension. It explains shifts in reasoning, not just state transitions.

Why this matters:

  • Preserves institutional knowledge beyond implementation

  • Makes configuration decisions traceable and defensible

  • Reduces rework during organizational change

  • Replaces consultant-owned rationale with system-owned memory

The log is continuously available and grows alongside the system.
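Concretely, a log entry might pair the state change with its rationale, owner, and context, so the "why" survives alongside the "what". Field names are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical Decision & Change Log entry: records not just what
# changed, but why, under whose ownership, and in what context.

@dataclass(frozen=True)
class LogEntry:
    what_changed: str    # the state transition itself
    why: str             # the reasoning behind the change
    owner: str           # who is accountable for the decision
    context: str         # circumstances at decision time
    timestamp: str       # when the decision was recorded

def record(log, what_changed, why, owner, context):
    """Append-only: the log grows alongside the system; entries never mutate."""
    entry = LogEntry(what_changed, why, owner, context,
                     datetime.now(timezone.utc).isoformat())
    log.append(entry)
    return entry
```

Freezing the dataclass and appending rather than overwriting mirrors the artifact's intent: a durable, human-readable record of shifts in reasoning, not a mutable state snapshot.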

Artifact 4 of 4 – AI Narration / Reasoning Trail

The AI Narration / Reasoning Trail exposes what the system is considering as it works: its current interpretation, the assumptions it is weighing, and the questions it has not yet resolved.

Rather than operating as a black box, the agent narrates its reasoning in plain language, giving users a continuous window into the interpretive process and a natural opening to challenge or redirect it.

Why This Matters

Tanglewood is not a faster setup flow.

It is a shift in how enterprise systems come into being.

Most enterprise onboarding fails not because the software is incapable, but because the work of interpretation—translating human intent into system structure—is fragmented, expensive, and temporary.

Traditionally, this gap is filled by external consultants. They gather context, reconcile contradictions, document decisions, and translate policy into configuration. While effective, this approach is costly, difficult to scale, and prone to knowledge loss once the engagement ends.

Tanglewood internalizes that work.

By embedding sense-making, interpretation, and documentation directly into the system, Tanglewood enables organizations to:

  • Reduce dependence on long-running consulting engagements

  • Preserve institutional knowledge over time

  • Make configuration decisions explicit, inspectable, and revisitable

  • Adapt continuously as the organization evolves

From a UX perspective, this reframes onboarding as a collaborative process, not a hurdle to clear.

From a business perspective, it replaces episodic, high-cost intervention with durable, system-owned understanding.

Most importantly, it establishes a model where AI does not obscure decision-making—but strengthens it.

The result is not just a configured system, but a shared, living understanding of how the organization intends to operate.

Line drawing of a human and an AI observing a stable, balanced structure, symbolizing clarity, durability, and reduced dependency on external intervention.

Design Principles

Tanglewood is guided by a small set of deliberate design principles:

  • Start with meaning, not mechanics

  • Documents before dashboards

  • Visibility over automation

  • Conversation before configuration

  • Humans stay accountable; AI does the heavy lifting

These principles shape not just UI decisions, but the order in which the system reveals itself.

Rather than asking users to “fill out everything correctly,” Tanglewood helps them understand what the system believes to be true, and invites them to correct it.

Minimal illustration of a designer thoughtfully observing abstract symbols, representing reflection, intentionality, and design judgment.