The "waiting experience" sits in the middle of the overall experience cycle and plays an important role in voice interaction. However, the waiting experience in voice interaction has not been systematically studied in the industry and remains poorly defined.
1. Must the response time be as short as possible?
Dynatrace, a digital performance management platform, studied the behavior of users browsing the web and found that shaving 0.5 seconds off page load time can drive roughly a 10% increase in a site's key conversion metrics. Minimizing wait times has therefore become a relentless pursuit in web and app design.
Unlike visually driven interaction, voice interaction naturally carries emotional attributes. But the experience of emotion is complex, and it is not controlled by a single variable such as efficiency. In everyday conversation, an answer that comes too fast feels abrupt, as if the speaker were cutting in, while an answer that comes too slow feels sluggish and dull.
So what response time actually makes for the best experience in voice interaction? And how does the experience change as the response time varies?
2. What variables affect the waiting experience?
In the field of visual design, when designing a page's loading state, designers often ease the user's unease and reduce the bounce rate by showing a progress bar or by using playful, emotionally engaging design.
In voice interaction, however, speech is intangible and formless; there is often no interface at all on which to display a loading state. In this case, what variables affect the waiting experience, and how large is their impact?
In summary, although the waiting experience matters in voice interaction, it remains shrouded in fog. In view of this, we take smart speakers, currently the main carrier of voice interaction, as an example to conduct a focused study of the waiting experience in AI products.
Part 2: A study of the waiting experience of smart speakers
Current smart speakers mainly follow an interaction flow of waking the device by voice first and then issuing commands. With this in mind, we can divide the process of using a smart speaker into two main stages.
1) Wake-up stage: the user switches the speaker from an idle state to a ready state with the designated wake word; only after being woken up can the speaker receive voice commands.
2) Request and feedback stage: the user issues a voice command, and the smart speaker returns feedback that fulfils the user's request.
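The two stages above can be sketched as a minimal state machine. This is an illustrative model only, not the implementation of any real speaker; the state names, the wake word "hey speaker", and the `hear` method are all hypothetical:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()   # stage boundary: waiting for the wake word
    READY = auto()  # woken up, listening for a voice command

class Speaker:
    """Toy model of the two-stage smart-speaker flow (hypothetical API)."""
    WAKE_WORD = "hey speaker"  # placeholder wake word

    def __init__(self) -> None:
        self.state = State.IDLE

    def hear(self, utterance: str) -> str:
        if self.state is State.IDLE:
            # Stage 1 (wake-up): only the designated wake word
            # moves the speaker from idle to ready.
            if utterance == self.WAKE_WORD:
                self.state = State.READY
                return "<wake ack>"
            return ""  # anything else is ignored while idle
        # Stage 2 (request and feedback): answer the command,
        # then fall back to idle to await the next wake-up.
        self.state = State.IDLE
        return f"feedback for: {utterance}"

speaker = Speaker()
print(speaker.hear("play music"))   # ignored: speaker not woken yet
print(speaker.hear("hey speaker"))  # wake-up stage
print(speaker.hear("play music"))   # request and feedback stage
```

The point of the sketch is that waiting can occur at both stage boundaries: between the wake word and the acknowledgement, and between the command and the feedback.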