Alexa, What’s Going on Now? Digital User Assistance in the 21st Century
Machine learning, artificial intelligence, and user assistance are currently the most popular keywords when it comes to modern systems. This trend was also clearly noticeable at the tcworld conference in 2018. I see great potential in these tools and technologies for technical communication and documentation. Interestingly, technologies such as chatbots have been around for decades but have only now gained recognition for everyday use. Why? This mainly has to do with the expansion of rule-based chatbot systems to include the power of machine learning. The advantage of this combination is that the system can learn answers that were never pre-programmed before its exposure to users' problems. This gives assistance systems brand-new possibilities for interpreting and evaluating user queries.
Use Cases Before Technology
So far, chatbots have mainly been used in marketing, i.e. in "pre-sales". The few chatbots built for use as guidance systems often fail; too often, these one-hit wonders appear and then disappear. Successful chatbot instruction systems for electronic devices or complex machines, for example, are still nowhere to be found, even though a system of this kind could replace a quick guide quite simply and effectively. For the sustainable creation of such systems, it is not enough to simply pour existing instructions and quick guides into modern technologies. Quite the opposite: the content must be completely restructured, and its textual and graphic preparation addressed in a brand-new way. An important principle to keep in mind: use case before technology. This is too often overlooked in all industry sectors. In technical documentation, though, more than anywhere else, it is a requirement for acceptance by the end user and imperative to the successful use of modern technologies.
Digital User Assistance – the Modern Version of Electronic Documentation
Why put work into new use cases and refurbishment? Today, users are in the habit of asking colleagues, friends or Google. The questions are usually formulated in natural conversational language, regardless of whether the person (or search engine) on the receiving end understands or not. If the received answer does not fit, the user generally adjusts the query until the answers meet their requirements and help solve the problem – or until they lose their nerve.
Child: “Daddy, why doesn’t the thing work?”
Father: “What thing?”
Child: “The coffee machine?”
Father: “Which one?”
Child: “Well, the little one!”
Father: “Oh, you mean the Nestle machine with capsules?”
Child: “What else would I mean?”
Father: “I unplugged it – I needed the power socket. Just plug it in again…”
Child: “Huh? Plug in what?”
Father: “The Schuko plug.”
Child: “What’s a Schuko plug?”
Father: “The power plug!”
Child: “Daaad, why didn’t you just say so?!”
Alexa could conduct a similar dialogue, given properly programmed skills. With the help of knowledge databases (ontology and terminology databases, etc.), user questions can be interpreted by a system without specific answers ever being preselected for the query. These knowledge databases and collections of metadata, together with the emerging technologies of artificial intelligence, form the basis for modern forms of documentation.
For example, even if the word “power plug” is not a common (technical) term, a system can use a terminology database and machine learning to search all synonyms and contextual connections and find an appropriate answer.
At first glance, the step from digital documentation to digital user assistance seems quite small, but there are a lot of hurdles. In contrast to conventional documentation, digital user assistance does not focus on a single product or manufacturer. The basic idea is to fully adapt to the user and provide information tailored to them and the situation they’re in. Digital user assistance is designed accordingly, as follows:
- Individualized information (product-related)
- Personalized information (user-related)
- Information 4.0 & smart information
- Context-sensitive information (product and process status)
- Multimodal information (different sensory channels)
- Multimedia information (technology/formats)
- Intuitiveness and constant availability
- Central access
- Creation of new information through the use of knowledge databases
- Dynamic and self-learning
- Consideration of safety aspects
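Several of the characteristics above boil down to metadata on information modules. The following is a minimal sketch under assumed names (there is no standard `InfoModule` type; the fields and error codes are invented) of how product, user, and context data could drive the selection of the right module:

```python
# Sketch (all names/fields are assumptions, not a standard) of an
# information module carrying the metadata listed above, so an
# assistance system can filter by product, user, and context.
from dataclasses import dataclass, field

@dataclass
class InfoModule:
    topic: str
    product_id: str            # individualized (product-related)
    audience: str              # personalized (user-related), e.g. "novice"
    context: dict = field(default_factory=dict)   # e.g. device error state
    media: list = field(default_factory=list)     # multimodal/multimedia

def matches(module: InfoModule, product_id: str, error_code: str) -> bool:
    """Context-sensitive selection: right product, right error state."""
    return (module.product_id == product_id
            and module.context.get("error_code") == error_code)

unclog = InfoModule("Clean the water nozzle", "cm-100", "novice",
                    context={"error_code": "E42"}, media=["video", "text"])
print(matches(unclog, "cm-100", "E42"))  # True
```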
Alexa, What’s Going on Now?
Systems that communicate with the user on the basis of natural language – in a genuine dialogue – are ideally suited for digital user assistance. Chatbots and language assistants like Alexa and Google Assistant are exactly this type of system.
With properly prepared information modules and a correctly analyzed use case (e.g. a quick guide), a language assistant like Alexa can quickly lead the user to a solution for a problem with a technical product (e.g. resolving an error condition).
With the simple question “What’s going on now?”, the user signals to the system that one of the devices registered to their account is not working properly and that they want information leading to a solution. Over a network connection, the assistance system reads out the status of the corresponding machine, retrieves possible solutions, and prepares them using learned statements (machine learning) so that the user can apply the solution step by step.
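The status-check flow just described can be sketched as follows. Note that the device API, the error code "E42", and the solution steps are all hypothetical; a real assistant would make an actual network call and pull solution content from the prepared information modules.

```python
# Hedged sketch of the flow above: query the device's status, map the
# error code to solution steps, and hand them to the user one by one.
# The status call and error codes are invented for illustration.

SOLUTIONS = {
    "E42": ["Switch the machine off.",
            "Remove and rinse the water nozzle.",
            "Reattach the nozzle and switch the machine on."],
}

def fetch_status(device_id: str) -> dict:
    # Stand-in for a real network call to the registered device.
    return {"device": device_id, "error_code": "E42"}

def assist(device_id: str) -> list[str]:
    status = fetch_status(device_id)
    code = status.get("error_code")
    if code is None:
        return ["Everything looks fine."]
    return SOLUTIONS.get(code, [f"Unknown error {code}; please contact support."])

for step in assist("coffee-machine-1"):
    print(step)
```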
A simple example is a coffee machine with WLAN connectivity that can therefore be controlled via an app. Let’s assume that one of these machines is in our home and all of a sudden doesn’t want to make any more coffee. Until now, you would have dug out the operating instructions – or, more likely, grabbed the laptop and googled. In the best case, you found a solution after at least 15 minutes and hopefully applied it correctly.
Now, let’s ask our assistance system the question:
User: “Alexa, what’s going on now?”
Alexa: “Just a moment, I’ll check for you … Ok, it looks like there’s a problem with your coffee machine. According to the error code, the water nozzle is blocked. Should I help you find a solution?”
User: “Yes, but hurry up!”
Alexa: “All right. I’ve sent a video link to your mobile phone. If you have any questions, just ask me. When you’re done, say ‘done’ and I’ll check whether the problem has been fixed.”
Can’t be Done, Doesn’t Exist!
For chatbots and language assistants, there are convenient frameworks and platforms that make it possible to create test systems relatively easily and with minimal programming. Such test cases are especially important today. Users are ready for these modern systems, but there are still no exemplary products or standardized creation processes on the market to orient such efforts. Of course, there are certainly information providers claiming the opposite. Nevertheless, I am convinced that in these cases, the use case has often not been fully worked out.
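Even without one of those platforms, the core of such a test system can be prototyped in a few lines. The sketch below is a deliberately naive rule-based intent matcher (the intents and phrasings are invented); real platforms layer statistical NLU and machine learning on top of exactly this kind of mapping.

```python
# Framework-free sketch of a minimal rule-based intent matcher, the
# kernel of a chatbot test system. Intents and patterns are
# illustrative only; real systems add ML-based language understanding.
import re

INTENTS = [
    (re.compile(r"what'?s going on", re.I), "check_status"),
    (re.compile(r"\b(help|how do i)\b", re.I), "get_instructions"),
]

def classify(utterance: str) -> str:
    for pattern, intent in INTENTS:
        if pattern.search(utterance):
            return intent
    return "fallback"

print(classify("Alexa, what's going on now?"))  # -> check_status
```

A rule set like this is also where the limits show: every phrasing must be anticipated by hand, which is precisely the gap that the machine-learning extension discussed earlier is meant to close.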
Imagine if you asked Alexa a question after the first start-up and she read you the complete safety chapter with all safety instructions. You would quickly deactivate this assistant. Another consideration is whether pure audio is sufficiently helpful as a lone medium for these services.
So, new technology alone does not automatically mean better usability of digital documentation. I think the big advantage for the future lies in the combination of voice control and multimodal information processing (video, text, image, audio, augmented reality, etc.). In other words: seeing, speaking and hearing are part of being human and this is how we work most efficiently. It is time to bring our mechanized help up to speed!
Alexa, finish blog post!