1. Executive Summary

I led the design of an adaptive AI system built to support complex, multi‑step decision workflows inside a traditional web environment. The goal was to help people make high‑stakes decisions with clarity and confidence, without forcing them into a chat‑only experience or overwhelming them with unnecessary complexity.

This work spanned research, UI/UX, accessibility, service design, AI behaviour design, and cross‑functional leadership. The guiding belief behind the entire system was simple:

AI should adapt to people, context, and risk. People should not have to adapt to AI.

2. The Challenge

Most AI tools assume that conversation is the solution. In reality, many decisions require structure, context, and traceability. Users need to understand not only what the AI says, but why it says it.

We were designing for situations where rules change based on context, where accuracy matters, and where a wrong answer can have real consequences. The challenge was to create an AI experience that was flexible enough for exploration, structured enough for reliability, and transparent enough to earn trust.

How do we design an AI system that supports intricate, context‑sensitive decisions without overwhelming the user or compromising accuracy?

3. Understanding the Ecosystem

People

  • Risk‑aware users, who need accuracy, traceability, and confidence before acting.
  • Performance‑driven users, who want coaching, feedback, and improvement.
  • Occasional users, who need clarity, low cognitive load, and guardrails.

The Service Context

The AI lived within a broader service ecosystem that included compliance teams, operational support, and human escalation pathways. Users often moved between browsing, reviewing documents, asking questions, and validating information. The AI needed to support this entire journey, not interrupt it.

Thematic Context (Abstracted)

The system was designed for multi‑step, context‑dependent decisions where the AI could not simply “guess.” It needed to ask, clarify, and adapt.

A single AI mode cannot serve all users or all contexts.

4. Research and Validation

Research surfaced four recurring problems:
  1. Missing context led to unreliable outputs.
  2. Cognitive overload made users distrust the system.
  3. Chat‑only tools broke real workflows.
  4. Users wanted transparency, not magic.

The breakthrough was recognizing that the AI should not rush to answer. It should guide the user toward better inputs.

The AI needed to collaborate, not simply respond.

5. Technical Feasibility and Backstage Orchestration

A small team gathers around a table in a dim, high‑tech workspace, reviewing papers and diagrams while transparent digital screens float around them displaying data, code, and system schematics. The scene conveys collaborative problem‑solving and the orchestration of complex technical systems.

With Data Science

We defined model capabilities, limitations, and guardrails. We created risk‑tiered response patterns and prompting strategies that aligned with how the model behaved in different scenarios.
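
As a rough illustration of what a risk-tiered response pattern can look like, here is a minimal sketch. The tier names, fields, and thresholds are assumptions for illustration, not the production configuration:

```python
# Hypothetical sketch of risk-tiered response patterns.
# Tier names and constraints are illustrative, not from the real system.

RISK_TIERS = {
    "low": {"style": "direct answer", "requires_source": False, "requires_confirmation": False},
    "medium": {"style": "answer with cited sources", "requires_source": True, "requires_confirmation": False},
    "high": {"style": "clarify, cite, and confirm before acting", "requires_source": True, "requires_confirmation": True},
}

def response_pattern(risk_tier: str) -> dict:
    """Return the response constraints the prompt layer must honour for a tier."""
    return RISK_TIERS[risk_tier]
```

The point of encoding tiers as data is that prompting strategies can be reviewed with data science and compliance in one place, rather than scattered across prompts.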

With Engineering

We mapped what could be real‑time and what required processing. We clarified voice input constraints, document‑parsing rules, and cross‑device responsiveness.

System Architecture

  • A UI layer for chat, structured UI, and navigation.
  • An AI orchestration layer that managed context and prompting logic.
  • A data and compliance layer that ensured accuracy and safety.
  • A human escalation layer for support and operational workflows.
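
The four layers above can be sketched as a single request flow. This is a simplified illustration, assuming hypothetical field names like `required_fields_complete` and `risk`; the real orchestration was considerably richer:

```python
# Illustrative sketch of the four-layer flow: UI -> orchestration ->
# data/compliance -> human escalation. All field names are assumptions.

def handle_request(user_input: str, context: dict) -> str:
    # UI layer: normalise input arriving from chat, structured UI, or navigation.
    request = {"text": user_input.strip(), "context": context}

    # AI orchestration layer: manage context and prompting logic.
    if not request["context"].get("required_fields_complete", False):
        return "Before I answer, I need a bit more context."

    # Data and compliance layer: enforce accuracy and safety constraints,
    # falling through to the human escalation layer when needed.
    if request["context"].get("risk") == "high" and not request["context"].get("verified"):
        return "This needs verification; routing to a human specialist."

    return f"Answer based on: {request['text']}"
```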

6. Design Principles

Six principles guided every decision:
  • Ask before answering.
  • Adapt to cognitive load.
  • Prioritise clarity over speed.
  • Build trust through transparency.
  • Offer multiple paths, not one.
  • Design for failure, not perfection.

7. The Solution

A Hybrid Experience

The AI was embedded within a traditional web system. Users could navigate, browse, escalate, and return without losing context. The AI felt like part of the product, not a separate tool.

Flexible Interaction Modes

  • Embedded chat
  • A full‑screen immersive mode
  • Voice or text input

Non‑Chat Paths

Not everyone wants to type. Not everyone wants to talk.
So we designed structured pathways using tiles, lists, and filters.

The AI enhanced workflows. It did not replace them.

8. Adaptive AI Interaction

Context‑Aware Interaction

The AI asked for missing information before answering. This prevented hallucinations and ensured accuracy.
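
A minimal sketch of this clarify-before-answer gate, assuming hypothetical required fields:

```python
# Minimal sketch of a clarify-before-answer gate.
# The required field names are hypothetical, for illustration only.

REQUIRED_CONTEXT = ["goal", "constraints", "deadline"]

def next_step(provided: dict) -> str:
    """Ask for the first missing piece of context; answer only when complete."""
    missing = [field for field in REQUIRED_CONTEXT if not provided.get(field)]
    if missing:
        return f"Before I answer, could you tell me your {missing[0]}?"
    return "All context present: generating answer."
```

Asking one question at a time, rather than dumping a form on the user, keeps the clarification step conversational rather than bureaucratic.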

Iterative Refinement

Users could edit previous inputs, add context, or regenerate responses. The AI treated the conversation as a living system, not a one‑shot exchange.
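
One way to picture the "living system" model: every turn stays editable, and regeneration always reads from the full, current history. A simplified sketch (the join is a stand-in for re-prompting the model):

```python
# Sketch of conversation-as-living-system: turns are editable, and the
# response regenerates from the revised history. Illustrative only.

class Conversation:
    def __init__(self):
        self.turns: list[str] = []

    def add(self, text: str) -> None:
        self.turns.append(text)

    def edit(self, index: int, new_text: str) -> None:
        """Revise an earlier input; later context now builds on the revision."""
        self.turns[index] = new_text

    def regenerate(self) -> str:
        """Recompute the response from the full, current history."""
        return " | ".join(self.turns)
```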

Cognitive Load Modes

Three modes (Basic, Intermediate, and Advanced) adjusted tone, depth, and complexity. Users controlled how much information they wanted.
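
Treating the modes as configuration, not separate designs, kept them maintainable. A sketch, with the exact parameters as assumptions:

```python
# Sketch of the three cognitive-load modes; the parameter values are
# assumptions for illustration, not the shipped configuration.

MODES = {
    "basic": {"tone": "plain", "max_points": 3, "show_caveats": False},
    "intermediate": {"tone": "balanced", "max_points": 5, "show_caveats": True},
    "advanced": {"tone": "technical", "max_points": 10, "show_caveats": True},
}

def shape_response(points: list[str], mode: str) -> list[str]:
    """Trim detail to match how much information the user asked for."""
    return points[: MODES[mode]["max_points"]]
```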

Explainability

The AI explained why it asked for something, why it refused something, and what it knew or did not know. This transparency built trust.

9. Conversation as a System

Infrastructure

We designed persistent history, search within conversations, and long‑response navigation. Users could always find their way back.
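
Search within conversations can be as simple as a case-insensitive scan over stored turns. A minimal sketch, assuming turns are stored as dictionaries with a `text` field:

```python
# Sketch of search within persistent conversation history.
# The turn structure ({"text": ...}) is an assumption for illustration.

def search_history(history: list[dict], query: str) -> list[dict]:
    """Return past turns whose text contains the query, case-insensitively."""
    q = query.lower()
    return [turn for turn in history if q in turn["text"].lower()]
```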

Conversational Design

Tone mattered. Turn‑taking mattered. Repairing misunderstandings mattered.
The AI needed to feel helpful, not overwhelming.

Guided Interaction

Inside chat, users could select from lists, tiles, and structured options. This reduced typing and cognitive load, especially for occasional users.
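
A sketch of what such a guided message might carry, with hypothetical field names; the key design choice is that typing stays available but is never required:

```python
# Sketch of a chat message carrying structured options (tiles or lists)
# so occasional users can tap instead of type. Field names are hypothetical.

def guided_prompt(question: str, options: list[str]) -> dict:
    return {
        "type": "guided",         # rendered as tiles or a list, not free text
        "text": question,
        "options": options,
        "allow_free_text": True,  # typing remains available, never required
    }
```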

10. Trust, Transparency, and Control

We added clear disclaimers, visible knowledge boundaries, and output controls such as copy, download, and reuse. Users could give feedback, and the AI could regenerate responses using that feedback.

The system made its reasoning visible without burdening the user.

11. Handling Real‑World Complexity

To fit document-heavy, real-world work, the system supported:
  • Document upload
  • Parsing and summarisation
  • Structured outputs
  • Exportable results

This ensured the AI fit into the real world, not an idealised one.
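
The parse-summarise-export chain above can be sketched end to end. The sentence splitting is a deliberate stand-in for the model-driven parsing the real system used:

```python
# Sketch of turning a parsed document into a structured, exportable result.
# The summary logic is a naive stand-in for model-driven summarisation.

import json

def summarise(document_text: str, max_sentences: int = 2) -> dict:
    sentences = [s.strip() for s in document_text.split(".") if s.strip()]
    return {
        "summary": ". ".join(sentences[:max_sentences]),
        "sentence_count": len(sentences),
    }

def export_result(result: dict) -> str:
    """Serialise the structured output so users can download and reuse it."""
    return json.dumps(result, indent=2)
```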

12. Accessibility as a Core System

Multimodal Interaction

The system supported voice and text, keyboard‑only flows, and screen readers.

Cognitive Support

Cognitive modes reduced overload and offered clear, digestible information.

Global Inclusion

The interface adapted to both left‑to‑right and right‑to‑left languages.

AI‑Specific Accessibility

We addressed voice bias, accent recognition, and safe handling of hallucinations.

Accessibility was achieved through adaptability.

13. Designing for Failure

We designed explicit states and recovery paths for:
  • AI unavailability
  • Server issues
  • Network failures
  • Guardrails triggering refusals

The system communicated clearly and recovered gracefully, with human escalation available when needed.
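
One way to make "recovered gracefully" concrete: every failure state maps to plain-language copy plus an escalation decision. A sketch in which the state names mirror the list above, while the exact copy and escalation rules are assumptions:

```python
# Sketch of graceful failure messaging. State names mirror the list above;
# the message copy and escalation rules are illustrative assumptions.

FAILURE_MESSAGES = {
    "ai_unavailable": "The assistant is temporarily unavailable. Your work is saved.",
    "server_error": "Something went wrong on our side. Please try again.",
    "network_failure": "You appear to be offline. We'll retry when you reconnect.",
    "guardrail_refusal": "I can't help with that here, but I can connect you with a specialist.",
}

def recover(state: str) -> dict:
    """Pair every failure with a clear message and a human escalation decision."""
    return {
        "message": FAILURE_MESSAGES.get(state, "Unexpected error."),
        "offer_human_escalation": state in ("guardrail_refusal", "ai_unavailable"),
    }
```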

14. Design System Integration

The AI experience aligned with the broader design system.
Components were scalable, branded, and consistent across chat and non‑chat interactions.

15. Key Tradeoffs

  • Speed versus accuracy: clarifying questions slowed the first response but prevented wrong answers.
  • Freedom versus structure: open chat invited exploration, while guided paths protected reliability.
  • Simplicity versus adaptability: every added mode gave users more control, and more to learn.
  • Chat‑only versus hybrid: embedding the AI in the existing product cost more effort but preserved real workflows.

16. Delivery Strategy

MVP

We launched with core chat, context clarification, and trust features.

Post‑MVP

We expanded into cognitive modes, coaching systems, advanced conversation management, and deeper accessibility features.

This allowed us to deliver value early while building toward a long‑term vision.

17. Testing and Optimisation

We tested context flows, prompt editing, voice patterns, and guided interactions.
We ran A/B tests on prompt styles, response formats, and feedback placement.
We tracked drop‑offs, completion rates, feature usage, and feedback signals.

The system improved continuously through real‑world data.

18. Impact

Across user types, the system delivered:
  • Higher confidence in AI outputs
  • Fewer errors in complex decision‑making
  • Improved usability across user types
  • Increased engagement through adaptive interaction

Quantitative improvements are abstracted for confidentiality, but the behavioural shift was clear: users trusted the system more, and they used it more effectively.

19. Confidentiality Note

This case study synthesises patterns from multiple implementations.
Industries and workflows are abstracted to protect confidentiality.
The focus is on the design approach, not proprietary details.

20. Final Reflection

AI is not just conversational. It is adaptive, accountable, and human‑aware.

This project demonstrated that designing AI is not about prompts or chat bubbles.
It is about systems, behaviour, trust, and the humans who rely on them.