1. Executive Summary
I led the design of an adaptive AI system built to support complex, multi‑step decision workflows inside a traditional web environment. The goal was to help people make high‑stakes decisions with clarity and confidence, without forcing them into a chat‑only experience or overwhelming them with unnecessary complexity.
This work spanned research, UI/UX, accessibility, service design, AI behaviour design, and cross‑functional leadership. The guiding belief behind the entire system was simple:
AI should adapt to people, context, and risk. People should not have to adapt to AI.
2. The Challenge
Most AI tools assume that conversation is the solution. In reality, many decisions require structure, context, and traceability. Users need to understand not only what the AI says, but why it says it.
We were designing for situations where rules change based on context, where accuracy matters, and where a wrong answer can have real consequences. The challenge was to create an AI experience that was flexible enough for exploration, structured enough for reliability, and transparent enough to earn trust.
How do we design an AI system that supports intricate, context‑sensitive decisions without overwhelming the user or compromising accuracy?
3. Understanding the Ecosystem
People
- Risk‑aware users, who need accuracy, traceability, and confidence before acting.
- Performance‑driven users, who want coaching, feedback, and improvement.
- Occasional users, who need clarity, low cognitive load, and guardrails.
The Service Context
The AI lived within a broader service ecosystem that included compliance teams, operational support, and human escalation pathways. Users often moved between browsing, reviewing documents, asking questions, and validating information. The AI needed to support this entire journey, not interrupt it.
Thematic Context (Abstracted)
The system was designed for multi‑step, context‑dependent decisions where the AI could not simply “guess.” It needed to ask, clarify, and adapt.
A single AI mode cannot serve all users or all contexts.
4. Research and Validation
- Missing context led to unreliable outputs.
- Cognitive overload made users distrust the system.
- Chat‑only tools broke real workflows.
- Users wanted transparency, not magic.
The breakthrough was recognizing that the AI should not rush to answer. It should guide the user toward better inputs.
The AI needed to collaborate, not simply respond.
5. Technical Feasibility and Backstage Orchestration
With Data Science
We defined model capabilities, limitations, and guardrails. We created risk‑tiered response patterns and prompting strategies that aligned with how the model behaved in different scenarios.
With Engineering
We mapped what could happen in real time and what required background processing. We clarified voice input constraints, document‑parsing rules, and cross‑device responsiveness.
System Architecture
- A UI layer for chat, structured UI, and navigation.
- An AI orchestration layer that managed context and prompting logic.
- A data and compliance layer that ensured accuracy and safety.
- A human escalation layer for support and operational workflows.
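The flow between these layers can be sketched in miniature. This is an illustrative sketch only, not the production architecture; every class and function name here is an assumption introduced for the example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four layers described above.
# All names (Request, compliance_check, orchestrate) are illustrative.

@dataclass
class Request:
    text: str
    context: dict = field(default_factory=dict)

def compliance_check(answer: str) -> bool:
    """Data and compliance layer: block unsafe or unverified output."""
    return "unverified" not in answer

def orchestrate(req: Request) -> str:
    """AI orchestration layer: gather context before prompting the model."""
    if "decision_type" not in req.context:
        # UI layer renders this as a clarifying question, not an answer.
        return "clarify: What kind of decision are you making?"
    answer = f"answer for {req.context['decision_type']}"
    if not compliance_check(answer):
        # Human escalation layer takes over when guardrails trigger.
        return "escalate: routing you to a human specialist"
    return answer
```

The point of the layering is that the orchestration step can refuse to answer, ask for context, or hand off to a human before any model output reaches the user.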
6. Design Principles
- Ask before answering.
- Adapt to cognitive load.
- Prioritise clarity over speed.
- Build trust through transparency.
- Offer multiple paths, not one.
- Design for failure, not perfection.
7. The Solution
A Hybrid Experience
The AI was embedded within a traditional web system. Users could navigate, browse, escalate, and return without losing context. The AI felt like part of the product, not a separate tool.
Flexible Interaction Modes
- Embedded chat
- A full‑screen immersive mode
- Voice or text input
Non‑Chat Paths
Not everyone wants to type. Not everyone wants to talk.
So we designed structured pathways using tiles, lists, and filters.
The AI enhanced workflows. It did not replace them.
8. Adaptive AI Interaction
Context‑Aware Interaction
The AI asked for missing information before answering. This reduced hallucinations and improved accuracy.
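The "ask before answering" behaviour can be sketched as a simple gate: until the required context is present, the system returns a question rather than an answer. The required fields here are assumptions for illustration, not the real schema.

```python
# Illustrative sketch of the ask-before-answer gate.
# The required context fields are assumed for this example.
REQUIRED_CONTEXT = ["goal", "constraints", "risk_level"]

def next_step(context: dict) -> str:
    """Return either a clarifying question or permission to answer."""
    missing = [field for field in REQUIRED_CONTEXT if field not in context]
    if missing:
        # Ask one focused question at a time to limit cognitive load.
        return f"ask:{missing[0]}"
    return "answer"
```

Asking one question at a time, rather than dumping a form on the user, keeps the interaction conversational while still collecting what the model needs.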
Iterative Refinement
Users could edit previous inputs, add context, or regenerate responses. The AI treated the conversation as a living system, not a one‑shot exchange.
Cognitive Load Modes
Three modes (Basic, Intermediate, and Advanced) adjusted tone, depth, and complexity. Users controlled how much information they wanted.
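One way to picture the modes is as a configuration table the response pipeline reads before rendering. The settings and limits below are invented for the sketch; the real system tuned many more dimensions.

```python
# Hypothetical mapping of cognitive load modes to response settings.
# The specific limits and flags are assumptions for illustration.
MODES = {
    "basic":        {"max_words": 80,  "show_reasoning": False},
    "intermediate": {"max_words": 200, "show_reasoning": True},
    "advanced":     {"max_words": 500, "show_reasoning": True},
}

def render(answer: str, mode: str) -> str:
    """Shape a raw answer to the user's chosen cognitive load mode."""
    cfg = MODES[mode]
    words = answer.split()
    if len(words) > cfg["max_words"]:
        # Sketch truncates; the real system would summarise instead.
        words = words[: cfg["max_words"]]
    return " ".join(words)
```

Treating depth as a user-controlled setting, rather than something the model guesses, is what lets the same system serve risk-aware experts and occasional users.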
Explainability
The AI explained why it asked for something, why it refused something, and what it knew or did not know. This transparency built trust.
9. Conversation as a System
Infrastructure
We designed persistent history, search within conversations, and long‑response navigation. Users could always find their way back.
Conversational Design
Tone mattered. Turn‑taking mattered. Repairing misunderstandings mattered.
The AI needed to feel helpful, not overwhelming.
Guided Interaction
Inside chat, users could select from lists, tiles, and structured options. This reduced typing and cognitive load, especially for occasional users.
10. Trust, Transparency, and Control
We added clear disclaimers, visible knowledge boundaries, and output controls such as copy, download, and reuse. Users could give feedback, and the AI could regenerate responses using that feedback.
The system made its reasoning visible without burdening the user.
11. Handling Real‑World Complexity
- Document upload
- Parsing and summarisation
- Structured outputs
- Exportable results
This ensured the AI fit into the real world, not an idealised one.
12. Accessibility as a Core System
Multimodal Interaction
The system supported voice and text, keyboard‑only flows, and screen readers.
Cognitive Support
Cognitive modes reduced overload and offered clear, digestible information.
Global Inclusion
The interface adapted to both left‑to‑right and right‑to‑left languages.
AI‑Specific Accessibility
We addressed voice bias, accent recognition, and safe handling of hallucinations.
Accessibility was achieved through adaptability.
13. Designing for Failure
- AI unavailability
- Server issues
- Network failures
- Guardrails triggering refusals
The system communicated clearly and recovered gracefully, with human escalation available when needed.
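Graceful failure is ultimately a mapping problem: every failure class gets a clear message and, where appropriate, a human escalation path. The table below is a minimal sketch with assumed failure names and copy, not the production logic.

```python
# Illustrative failure handling: each failure class maps to a clear
# message and an escalation flag. All names and copy are assumptions.
FALLBACKS = {
    "ai_unavailable": ("The assistant is temporarily unavailable.", True),
    "network_error":  ("We lost the connection. Your progress is saved.", False),
    "guardrail":      ("I can't help with that request.", True),
}

def handle_failure(kind: str) -> str:
    """Return user-facing copy for a failure, offering escalation if set."""
    message, offer_human = FALLBACKS.get(
        kind, ("Something went wrong. Please try again.", True)
    )
    if offer_human:
        message += " You can contact a specialist at any time."
    return message
```

The unknown-failure default matters most: an unanticipated error should still produce honest copy and a route to a human, never a dead end.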
14. Design System Integration
The AI experience aligned with the broader design system.
Components were scalable, branded, and consistent across chat and non‑chat interactions.
15. Key Tradeoffs
- Speed versus accuracy
- Freedom versus structure
- Simplicity versus adaptability
- Chat‑only versus hybrid
16. Delivery Strategy
MVP
We launched with core chat, context clarification, and trust features.
Post‑MVP
We expanded into cognitive modes, coaching systems, advanced conversation management, and deeper accessibility features.
This allowed us to deliver value early while building toward a long‑term vision.
17. Testing and Optimisation
We tested context flows, prompt editing, voice patterns, and guided interactions.
We ran A/B tests on prompt styles, response formats, and feedback placement.
We tracked drop‑offs, completion rates, feature usage, and feedback signals.
The system improved continuously through real‑world data.
18. Impact
- Higher confidence in AI outputs
- Fewer errors in complex decision‑making
- Improved usability across user types
- Increased engagement through adaptive interaction
Quantitative improvements are abstracted for confidentiality, but the behavioural shift was clear: users trusted the system more, and they used it more effectively.
19. Confidentiality Note
This case study synthesises patterns from multiple implementations.
Industries and workflows are abstracted to protect confidentiality.
The focus is on the design approach, not proprietary details.
20. Final Reflection
AI is not just conversational. It is adaptive, accountable, and human‑aware.
This project demonstrated that designing AI is not about prompts or chat bubbles.
It is about systems, behaviour, trust, and the humans who rely on them.