People expect a tool to give them the exact answer, with 100% certainty.
Traditional software met this expectation: answers stayed within preset boundaries, lacking dynamism but remaining predictable.
With AI in play, however, users must engage their own judgement alongside the generated outputs. Relying on the outputs fully is not advisable, yet the challenge lies in deciding which ones to trust and which to question. How much to trust or distrust each output adds another layer of complexity.
In developing AI products and experiences, it is essential to acknowledge certain realities about human behaviour:
- Most individuals desire certainty when using software.
- Many people dislike the effort of collecting diverse data and deriving insights, finding it time-consuming and mentally taxing; a significant portion prefers to avoid extensive thinking.
- Humans hold high expectations of machines; failure to meet these expectations often leads to trust issues.
- People prefer binary answers over ambiguous, grey responses. The mantra is, "Don't make me think."
- People expect AI to provide the same answer to the same question across different users, without discrepancies.
Therefore, the critical questions to address when building AI products are:
- How can AI outputs achieve greater certainty?
- What measures can be implemented to build and sustain users' trust in AI-generated outputs?
- How can AI minimise probabilistic outputs?
- How can AI learn and adapt to human behaviours, presenting answers in ways that better satisfy individual needs?
#AI #AIProduct #behaviour