Gen AI and the Importance of Explainability

Gen AI is good at complex problem-solving, but explainability is key. Integrating Human-Centered Design (HCD) principles ensures AI solutions are transparent, thus enhancing user trust.

AI and Human Centered Design

Gen AI and SAT (Boolean satisfiability) solvers like PySAT are powerful tools capable of solving complex problems across various domains. However, to truly harness their potential, it is crucial to provide users with clear explanations of how these solutions are derived. This aligns perfectly with human-centered design (HCD) principles, which focus on creating solutions that are empathetic, inclusive, intuitive, and transparent. In this blog, we'll explore how integrating HCD principles can enhance the user experience with Gen AI, using the example of generating a Fibonacci sequence.

While working on SAT solvers for youth sports scheduling, I saw firsthand the critical need for explainability and for managing response latency, especially when technologies involve human interaction. These lessons are invaluable when transitioning to Gen AI, where similar challenges arise. By addressing these issues, we can build more reliable and user-friendly AI systems.

The human needs to remain in the loop and able to actively carry the solution context back into their work if we are to effectively co-create with Gen AI.

Gen AI and SAT Solvers

Gen AI refers to artificial intelligence systems that can generate human-like text, code, art, and more. SAT Solvers are algorithms used to determine if a given Boolean formula can be satisfied, with applications in hardware verification, software testing, logistics and scheduling.
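To make the SAT idea concrete, here is a minimal brute-force satisfiability check in plain Python. This is an illustrative sketch only: production solvers such as PySAT use far more efficient techniques (e.g., conflict-driven clause learning), and the `is_satisfiable` helper name here is hypothetical.

```python
from itertools import product

def is_satisfiable(clauses, num_vars):
    """Brute-force SAT check. Each clause is a list of ints:
    positive i means variable i, negative i means NOT variable i."""
    for assignment in product([False, True], repeat=num_vars):
        def holds(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        # A formula is satisfied when every clause has at least one true literal.
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return assignment
    return None

# (x1 OR x2) AND (NOT x1 OR x2): satisfiable, e.g. with x2 = True.
print(is_satisfiable([[1, 2], [-1, 2]], 2))
# (x1) AND (NOT x1): unsatisfiable.
print(is_satisfiable([[1], [-1]], 1))
```

Brute force doubles in cost with every added variable, which is exactly why dedicated solvers, and clear explanations of what they decided, matter at real-world scale.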

The relationship between SAT solvers and Gen AI lies in their ability to handle complex problem-solving. Both require transparent decision-making processes and efficient handling of data. Lessons from SAT solvers, such as the importance of clear solution explanations and minimizing latency, are directly applicable to Gen AI. These principles help in making AI systems more understandable and efficient, ensuring better user experiences.

While these technologies are adept at providing solutions, the real value lies in their ability to explain the reasoning behind these solutions. This fosters trust, aids learning, and allows users to adapt the solutions to their specific contexts.

The Role of Human-Centered Design

Human-Centered Design (HCD) is a crucial approach in creating technology that genuinely serves its users. By focusing on the user experience, HCD ensures that products are not only functional but also empathetic, inclusive, intuitive, and transparent. This methodology emphasizes the importance of understanding user needs and contexts, making technology accessible to all users, ensuring ease of use, and providing clear, traceable information. Integrating HCD principles into the development of Gen AI can significantly enhance its usability, accessibility, and trustworthiness.

A Brief History of Human-Centered Design

Human-Centered Design (HCD) has a rich history rooted in the evolution of user experience (UX) design and the broader field of design thinking. Here’s a concise overview of its development:

Early Beginnings: Ergonomics and Human Factors

The origins of HCD can be traced back to the early 20th century, particularly during World War II, when the need for designing equipment that fit human capabilities became paramount. Ergonomics and human factors engineering emerged to optimize the interaction between people and machines, leading to more efficient and safer designs.

1960s-1970s: The Rise of Cognitive Psychology

In the 1960s and 1970s, the field of cognitive psychology began to influence design thinking. Researchers like Donald Norman, a key figure in HCD, emphasized understanding how people perceive, remember, and solve problems. This period saw the development of user-centered design (UCD), focusing on making systems more intuitive and user-friendly.

1980s: The Computer Revolution

The advent of personal computing in the 1980s brought HCD to the forefront of software development. Apple, with its Macintosh, was a pioneer in emphasizing user-friendly interfaces. The graphical user interface (GUI) became a standard, making computers accessible to a broader audience. This era solidified the importance of designing with the user in mind.

1990s-2000s: UX Design and Design Thinking

The 1990s and 2000s witnessed the formalization of UX design and the widespread adoption of design thinking. IDEO, a global design firm, popularized design thinking—a methodology centered on empathy, ideation, and experimentation. This approach emphasized understanding users’ needs and contexts, leading to more innovative and effective solutions.

2010s-Present: Integrating HCD with Emerging Technologies

In the 2010s, HCD principles began to integrate with emerging technologies such as artificial intelligence, virtual reality, and the Internet of Things (IoT). The focus shifted towards creating seamless, intuitive, and inclusive experiences across diverse platforms and devices. Today, HCD is a critical aspect of designing complex systems, ensuring that technology serves human needs effectively.

Core Principles of Human-Centered Design

Human-Centered Design (HCD) is about designing products with the user in mind. This involves:

  • Empathy: Understanding user needs and contexts.
  • Inclusivity: Making technology accessible to all users.
  • Intuitiveness: Ensuring ease of use and reducing the learning curve.
  • Transparency: Providing clear, understandable, and traceable information.

HCD ensures that products are not only functional but also usable and accessible. This approach reduces frustration, increases satisfaction, and promotes widespread adoption. Integrating these principles into Gen AI can significantly enhance its usability, accessibility, and trustworthiness. By providing clear and understandable explanations, users can validate and learn from the solutions, ensuring that technology serves their needs effectively.

It’s no longer just about generating a basket of choices; I can now trust the AI to refine and enhance ideas based on my input. This feels like a significant step forward in co-creating with Gen AI.

Applying HCD Principles: Fibonacci Sequence Example

Let's explore how these principles can be applied using the example of generating some code to create a Fibonacci sequence with ChatGPT.

Enhancing Explainability and Transparency

When I asked ChatGPT to create a Fibonacci sequence method in Python, it produced the following output:

Fibonacci sequence in Python without comments, followed by a detailed explanation

Here, the steps are not explained with inline comments; instead, a summary follows the code, making it easy for users to understand the logic without cluttering the code itself.
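Since the original screenshot isn't reproduced here, the sketch below shows what that style of output might look like: bare code, with the explanation carried in prose afterward. (This is my reconstruction, not ChatGPT's verbatim output.)

```python
def fibonacci(n):
    sequence = []
    a, b = 0, 1
    for _ in range(n):
        sequence.append(a)
        a, b = b, a + b
    return sequence

print(fibonacci(10))  # → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The accompanying summary would then walk through the logic: `a` and `b` hold consecutive terms, each loop iteration records the current term and advances the pair, and the function returns the first `n` terms of the sequence.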

The inverse approach works as well: no summary afterward, with explanatory comments embedded directly in the code.

Fibonacci sequence in Python with explanatory comments
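Again as a reconstruction rather than the original screenshot, the commented variant might look like this:

```python
def fibonacci(n):
    # a and b hold two consecutive Fibonacci numbers, starting at 0 and 1.
    a, b = 0, 1
    sequence = []
    for _ in range(n):
        # Record the current term.
        sequence.append(a)
        # Advance the pair: the next term is the sum of the previous two.
        a, b = b, a + b
    return sequence
```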

Both approaches explain the solution well, providing a high-trust experience and ensuring the user carries the context of the solution back into their work rather than copying and pasting it because the machine said so. It's all about collaboration, not just delegation.

The goal should be explainability in the solution achieved, be it code, images, audio, etc. The human needs to remain in the loop and able to actively carry the solution context back into their work if we are to effectively co-create with Gen AI. By incorporating interactive features, we can further enhance this understanding and engagement to allow for a deeper level of fidelity.

Interactive Features

Creating an interactive section where code can run and users can input different values of 'n' to see the sequence generated in real-time can enhance understanding. Providing explanations for each step as the sequence is built ensures that users can follow along and grasp the underlying logic. This interactivity makes learning more engaging and effective.
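A minimal sketch of such an interactive explainer, printing a step-by-step trace as the sequence is built (the function name here is illustrative, not from any particular tool):

```python
def fibonacci_with_steps(n):
    a, b = 0, 1
    sequence = []
    for i in range(n):
        sequence.append(a)
        # Narrate each step so the user can follow the logic in real time.
        print(f"Step {i + 1}: current term = {a}, next pair = ({b}, {a + b})")
        a, b = b, a + b
    return sequence

# In a REPL or notebook, the user could supply n themselves:
# n = int(input("How many terms? "))
fibonacci_with_steps(5)
```

Pairing each generated value with a short narration is a small change, but it turns a black-box answer into something the user can follow and question.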

I noticed the critical need for explainability and managing latency, especially when technologies involve human interaction.

Expanding on this concept, we can relate it to conversational artifacts. By producing well-documented interactive experiences as a result of dialogue, we advance our maturity in HCD. These structured content fragments, like code snippets and diagrams, turn AI interactions into productive collaborations. Conversational artifacts organize complex information, facilitate iterative development, and enhance collaboration by providing a shared reference point. They transform AI conversations into tangible and valuable outputs.

Claude AI's Artifacts feature is an excellent example of this. Artifacts in Claude are dedicated windows that display substantial, standalone content generated in response to a user’s request. This can include documents, code snippets, diagrams, and interactive components. By providing structure, persistence, and clarity to AI interactions, artifacts boost clarity and human-AI collaboration, transforming how we work, learn, and create in the AI age.

In the example below, I asked Claude to "make me a react app version of space invaders". It produced an interactive application that I could iterate on through chat; it also documented the code and provided detailed explanations of what it implemented, giving me a higher level of trust in my interaction with Gen AI.

Claude's Conversation Artifact: Space Invaders game running

ChatGPT offers similar functionality to conversational artifacts with images, allowing users to select an image for refinement or choose specific regions to modify. For example, I used Gen AI to create an image for a song my daughter and I made up about her cat Pink chasing a bird. I was able to select a region of the image and give specific directions to ChatGPT on what to do. This detailed level of interaction with Gen AI is only possible due to the interactive nature of the initial output request. It’s no longer just about generating a basket of choices; I can now trust the AI to refine and enhance ideas based on my input. This feels like a significant step forward in co-creating with Gen AI.

Editing an image in ChatGPT's conversational interface editor

Building Trust Through Explainability

Creating an environment of trust is fundamental when integrating AI into any system. Trust is the cornerstone that allows users to feel confident in the technology. One of the most effective ways to build this trust is through explainability.

Transparency in Decision-Making

Clearly outlining the decision-making process of the AI helps build trust. Showing intermediate steps and the logic used to arrive at the solution provides users with a deeper understanding and confidence in the AI's outputs. Transparency ensures that users can follow the AI's reasoning, making the technology more trustworthy.

Validation Mechanisms

Providing ways for users to validate and verify the solutions is essential. Allowing users to compare the generated sequence with known benchmarks ensures they can trust the accuracy of the AI's outputs. Validation mechanisms are crucial for establishing reliability and credibility.
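As a simple illustration of such a validation mechanism, a generated Fibonacci sequence can be checked against a trusted benchmark prefix. The benchmark values below are the standard sequence; the helper names are illustrative:

```python
def fibonacci(n):
    a, b = 0, 1
    sequence = []
    for _ in range(n):
        sequence.append(a)
        a, b = b, a + b
    return sequence

# Trusted benchmark: the first ten Fibonacci numbers (OEIS A000045).
BENCHMARK = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

def validate(generated, reference=BENCHMARK):
    """Check that the generated sequence agrees with the benchmark
    over their overlapping prefix."""
    overlap = min(len(generated), len(reference))
    return generated[:overlap] == reference[:overlap]

print(validate(fibonacci(10)))  # a mismatch would signal an untrustworthy output
```

Exposing a check like this alongside the generated answer lets users confirm the output for themselves instead of taking it on faith.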

Personalization

Allowing users to customize the level of detail in explanations based on their familiarity with the subject matter enhances the user experience. For beginners, providing more detailed explanations and visual aids is beneficial, while advanced users might prefer a brief overview. Personalization ensures that the technology meets the needs of a diverse user base.

Conclusion

As companies adopt Gen AI into their software and processes, integrating human-centered design principles can significantly enhance trustworthiness. Providing clear and understandable explanations not only helps users validate and learn from the solutions but also ensures that technology serves their needs effectively. The Fibonacci sequence example demonstrates how step-by-step breakdowns, visual aids, interactive features, and detailed annotations can transform complex solutions into comprehensible and valuable tools for users. The lessons learned from working with SAT solvers, such as the importance of clear solution paths and minimizing latency, can be directly applied to Gen AI, ensuring better user experiences and more effective collaboration between humans and AI.

Call to Action

Let's continue to advocate for and implement explainability in our Gen AI solutions. By doing so, we build trust with our users, fostering a collaborative environment where AI serves as a reliable and understandable partner in the creative process.