Tanglewood
Defining the UX vision for AI-first enterprise onboarding — replacing consultant-driven implementation with document ingestion, natural language configuration, and human-in-the-loop design across two financial product suites.
2025 · Enterprise SaaS · AI-Driven Configuration · Oracle
The split-view interface: natural language conversation on the left, live configuration updating on the right. The highlighted row shows a change being made through conversation.
First designer on Oracle's most ambitious AI product initiative — I translated a year of product-team groundwork into agent architecture, interaction design, and a product specification for a system that would make enterprise implementation intelligent, inspectable, and trustworthy.
↪ Impact
✓ First designer on a strategic initiative sponsored by Oracle's EVP of financial products — five months as sole designer before the project's design team would grow
✓ Defined UX-specific requirements for AI behavior — interruptibility, transparency, confidence communication, audit trails — that the product team had not identified
✓ Designed the agent workflow architecture, end-to-end onboarding flows, and the hybrid conversation/configuration UI within Oracle's Redwood Design System
✓ Brought historically separate ERP and EPM product teams into collaborative alignment around a unified configuration experience
✓ Worked directly with the Redwood core team on AI interaction patterns the component library hadn't yet addressed
↪ The Problem
Finance app onboarding is lengthy and costly
Oracle's ERP and EPM financial suites were originally separate products, built by separate companies and acquired at different times. Each was implemented independently, often by different teams, with little consideration for the other. The result: misaligned dimensions, conflicting definitions, and brittle custom integrations holding everything together.
Customers buying both faced implementation timelines that could stretch well beyond a year, with consulting costs often reaching six or seven figures, not because the software couldn't be configured, but because translating organizational intent into valid system configuration required expensive human intermediaries to reconcile two products that had never been designed to work together.
Leadership wanted AI to change that equation. The EVP's vision was aggressive: compress months of consultant-driven implementation into weeks. Engineers and former implementation consultants on the team were more measured — a roughly 60% reduction in consultant dependency felt like a realistic target. Either way, the opportunity was enormous.
Product teams had spent a year defining the problem space. No one had yet designed what the solution would look like for the human at the other end of it.
That was my job.
The translation gap: customers know their business, the systems need structured configuration, and expensive consultants bridge the two. Tanglewood replaces that bridge with AI.
↪ My Approach
♠︎ Reframed Requirements
The product team handed me nine executive requirements — system-oriented directives like auto-generate dimension relationships, real-time integration, and Redwood UX compliance. I reinterpreted each through a design lens, then defined an entirely new layer of AI-specific UX requirements the team hadn't articulated: interruptibility, transparency, suggested responses, audit as a user-facing surface, learning cues, and collaboration hooks. These became the criteria I evaluated every design decision against.
♣︎ Designed AI
The system needed more than a single AI assistant. I defined four specialized agents — a Configuration Guide for direct user interaction, a Document Analyst for ingesting and parsing uploaded materials, a Validation Agent running continuously in the background to flag conflicts, and an Integration Agent to synchronize configuration across systems. The user primarily sees one agent; the others surface only when their work requires human attention.
♦︎ Mapped Flows
Built end-to-end onboarding workflows from document ingestion through AI interpretation, human review, and configuration deployment — covering both the initial setup and ongoing maintenance. Created before-and-after flow visualizations, personas, and ROI framing for executive presentations designed to survive the question every Oracle VP asks: "Is this just a coat of paint, or are we revolutionizing?"
♥︎ Designed UI
Explored interaction patterns for the hybrid conversation/configuration interface, working within — and occasionally pushing — Oracle's Redwood Design System. Collaborated directly with the Redwood core team on patterns the system didn't yet support for AI contexts: how agents communicate progress, how users interrupt and redirect, and how trust is built through visible reasoning.
End-to-end onboarding workflow from document ingestion through AI interpretation, human review, and configuration deployment — covering Analyze, Solve, and Act phases.
↪ The Design
1. Hybrid conversation / configuration interface
The core design question was where the conversation should live relative to the configuration it produces. We started with a hybrid model — conversation and structured output visible together — and explored two primary layouts: a top-mounted chat with configuration content below, and a side-by-side split view with conversation on the left and live configuration on the right.
The split view showed the most promise. Users could watch the configuration take shape as they talked — no black-box moments. When the AI inferred a change, the affected row highlighted. When the user disagreed, they could respond in conversation and see the configuration adjust. The pattern preserved the user's sense of what the AI was doing with their input, rather than producing an output and asking them to validate something they didn't fully understand.
Two layout explorations for the hybrid interface: top-aligned chat with configuration below (left), and side-by-side split view with live updating configuration (right). The split view showed the most promise — users could watch the configuration form as they talked.
2. Document-in, config-out
The system's entry point wasn't a blank form — it was a document upload. Customers could provide whatever they already had: org charts, financial reports, policy documents, legacy exports, even a simple written description of how their organization operates. The AI would parse these materials, extract configuration-relevant data, and produce a structured starting point. My research established what would be feasible.
I designed the template selection experience and the generated configuration output — the moment where uploaded documents become Accounting Calendars, Chart of Accounts segments, and enterprise structure recommendations. The user's first interaction with the system's intelligence is seeing their own documents reflected back as structured configuration, ready for review.
From template selection to generated configuration: the system ingests documents and produces structured enterprise structures — Accounting Calendars, Chart of Accounts, Business Units — ready for human review.
3. Trust as a design principle
The overarching design challenge wasn't layout or information architecture — it was trust. How does a financial operations team trust an AI to configure systems that govern how their organization handles money?
Every pattern I designed was governed by three properties: the AI's reasoning is always visible (not just its output), the human can always see what would happen before anything commits, and every decision is logged with rationale and ownership. These weren't features on a roadmap — they were principles applied to every interaction in the system. I pushed this framing with both the product teams and the Redwood design system team as foundational to how AI-driven enterprise products should behave.
Trust as interaction design: every pattern governed by three properties — reasoning visible, preview before commit, decisions logged with rationale.
4. Where the work was heading
Five months gave me enough time to establish the interaction foundation but not enough to explore every pattern the system would need. At the time of my departure, the design directions I had identified as critical next steps included: AI task progression with human review checkpoints, override patterns allowing users to reach back into the AI's work and modify earlier decisions while seeing downstream impact, and confidence indicators that communicate the AI's certainty at each step.
These represent the natural extension of the trust principles into more granular interaction territory — the patterns that would make the system not just transparent, but truly controllable.
↪ The Outcome
From 0→0.5
Over five months, I defined the foundational UX direction for one of Oracle's most ambitious AI product initiatives. The work included UX requirements the product team hadn't identified, a four-agent workflow architecture, a full product design specification, end-to-end onboarding flows, and the UI explorations that gave the product its first tangible form.
I brought historically separate product teams into alignment around a shared vision and worked with the Redwood core team on AI interaction patterns the component library hadn't yet addressed.
Departure
My involvement ended with Oracle's 2025 workforce reduction. I cannot speak to the project's trajectory after my departure. What I can speak to is the design thinking: the patterns I explored for human-AI collaboration in high-stakes configuration environments remain directly relevant to anyone building AI systems where the consequences of automation are real and the humans in the loop need to stay in control.
↪ The Takeaway
Enterprise AI isn't an automation problem.
It's a translation problem — and a trust problem.
Organizations aren't trying to remove humans from configuration. They're trying to make the system understand what humans mean, and to make that understanding visible enough that the humans stay in control.
The design when I left the project