Interview with Information Energy Keynote Speaker Dr. Stephan Sigg
While researching Dr. Stephan Sigg, I was impressed by his work in the field of computer communications and future technologies: he has worked at institutions in three countries and published extensively.
And yet I had to do additional research just to understand my own findings. Why?
Stephan Sigg deals with what happens behind the chatbot conversation, the smartwatch, or the personalized recommendations we get on Netflix – a wider, more complex world. It is the world of mathematical patterns and sensor technologies that enables machines to interact in a context-sensitive and natural way.
So, to answer our questions about the future – what is possible, and when will we be living in a world of ubiquitous computing? – we have to ask those who work closest to these future technologies, like Stephan Sigg. Let's dive deeper, take the opportunity to talk with him, and try to get closer to these questions:
1. First, could you explain to those who are not yet into information technologies but want to understand: how do we get from the light sensor on my lamp to individual information from a home robot, like “Hello Mr. Sigg – your lightbulb is not working well anymore. Shall I order a new one from Amazon for you? You still have the Christmas voucher from your sister.”
First of all, thank you for the nice and on-the-spot introduction and for the opportunity to discuss and share ideas with the experts, theoreticians and practitioners attending the Information Energy 2018 conference. I am very much looking forward to this event.
In your question, you are comparing two systems that can be perceived as services adapting to changing environmental conditions. The lamp exploits a simple actuator to establish a perception of intelligent behavior: in low light conditions, the light sensor controls the current flowing through the circuit, determining how much current passes based on the amount of light it detects.
Despite the factitious intelligent behavior of such a system, it is indeed passive and triggered by environmental input to the actuator. Similar to a mechanical light switch that is triggered by the intensity of physical force, the light sensor acts as a switch that is triggered by the intensity of light.
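To make the contrast concrete, the reactive lamp can be reduced to little more than a threshold rule. The sketch below is a hypothetical simplification (not any particular product's firmware): the only “decision” is a fixed mapping from sensed brightness to drive current.

```python
# Toy model of the light-sensor-controlled lamp: purely reactive behavior.
# The "intelligence" is nothing more than a fixed mapping from sensed
# brightness to drive current; there is no memory, model, or anticipation.

def lamp_current(ambient_lux: float, threshold_lux: float = 50.0,
                 max_current_ma: float = 350.0) -> float:
    """Return the drive current for the bulb given the ambient light level."""
    # Below the threshold the lamp switches on at full current, otherwise off.
    return max_current_ma if ambient_lux < threshold_lux else 0.0

if __name__ == "__main__":
    for lux in (5.0, 45.0, 80.0, 500.0):
        print(f"{lux:6.1f} lux -> {lamp_current(lux):5.1f} mA")
```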
The robot, on the other hand, is a much more advanced system in many respects. It interacts using natural human language; it knows my name and my social network. It detects an anomaly – a deviation from normal operation – and makes suggestions based on knowledge about my usual shopping preferences, the existing private inventory of the product in demand, and different payment options.
The way we get to agents with such a level of intelligence is data – huge amounts of data: from profile-type data such as names, ages, and addresses of individuals, to social network groups, interactions, and resources (items in the fridge, light bulbs in the cupboard, …), to streams of continuous sensor readings from all kinds of environmental and on-body sensors used to detect such anomalies.(3)
To detect an anomaly, the system must first establish a perception of normal behavior. For this, feature values are extracted from sensed data streams. For instance, reduced light intensity, squealing sounds emitted from the bulb, or increased flickering. In a multi-dimensional feature space, an anomaly is spotted by feature values that fall far off the typical feature samples. The reaction of the system to such an anomaly might then follow a number of rules based on the mentioned profile type, social network and inventory data.
So, data is the key to enabling such seemingly intelligent behavior.
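As a minimal illustration of the anomaly-detection idea described above – feature values falling far from the typical samples in a multi-dimensional feature space – here is a hypothetical Python sketch. The features (light intensity, noise level, flicker rate) and the distance threshold are illustrative assumptions, not details of an actual system.

```python
import numpy as np

# Hypothetical feature vector per observation: [light_intensity, noise_level, flicker_rate]
normal_samples = np.array([
    [0.95, 0.02, 0.01],
    [0.97, 0.03, 0.02],
    [0.94, 0.01, 0.01],
    [0.96, 0.02, 0.03],
    [0.93, 0.02, 0.02],
])

# Establish a perception of "normal behavior" from the sensed data stream.
mean = normal_samples.mean(axis=0)
cov = np.cov(normal_samples, rowvar=False) + 1e-6 * np.eye(3)  # regularized covariance
cov_inv = np.linalg.inv(cov)

def mahalanobis(x: np.ndarray) -> float:
    """Distance of a new observation from the cloud of typical feature samples."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

THRESHOLD = 3.0  # readings farther than this are treated as anomalies

new_reading = np.array([0.60, 0.15, 0.20])  # dimmer, noisier, flickering bulb
if mahalanobis(new_reading) > THRESHOLD:
    print("Anomaly detected: bulb behavior deviates from normal operation.")
else:
    print("Reading looks normal.")
```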
2. Can you name three examples of future technologies that will change our daily life?
If I had to pick three, it would be Artificial Intelligence, 5G, and the bundle of security/privacy/authentication.
First: Artificial Intelligence is slightly hyped nowadays, due largely to the impressive advances in deep learning methods in various domains. With the help of huge amounts of data and enormous computational power, the accuracy of deep learning classifiers has advanced significantly. In addition, transfer learning and zero-shot learning paradigms have significant potential to further improve the perceived performance of classification algorithms.
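As a hedged illustration of the transfer-learning paradigm mentioned here (a generic PyTorch/torchvision sketch with an assumed five-class task and dummy data, not a method discussed in the interview): a network pretrained on a large dataset is reused, and only a small task-specific head is retrained.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning sketch: reuse a network pretrained on a large image corpus
# and adapt only the final layer to a new, smaller task (here: 5 classes).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in backbone.parameters():      # freeze the pretrained feature extractor
    param.requires_grad = False

backbone.fc = nn.Linear(backbone.fc.in_features, 5)  # new task-specific head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy data standing in for a real data loader.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))

optimizer.zero_grad()
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(f"dummy training step done, loss = {loss.item():.3f}")
```

Because only the small replacement head is trained, far less labeled data is needed for the new task, which is the main appeal of transfer learning.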
A prominent example of the high potential of AI applications is autonomous driving, which already shows impressive results in current realizations. Another good example, exploiting speech processing, is the live translation of spoken input, and there are many further recent adaptations of AI in various domains that hold great potential. These technologies will significantly change our way of life, how we use our time, and with whom we interact.
Second: As a second example of future technologies that will change our daily life, I believe that 5G and, more generally, the proliferation of the IoT will have a disruptive impact. One promise of 5G is a unified standard (though still different for device classes and distinct frequency ranges) ranging from tiny IoT devices to vehicular communication.
Finally: any service in the IoT and among connected personal devices can only be successful if it establishes trust in how data is managed, especially as regards confidentiality and availability.
For instance, common solutions for providing user-friendly authentication on mobile devices make significant compromises in security. Pattern-based inputs, for instance, are easily overcome via shoulder surfing or smudge attacks, while biometrics cannot withstand targeted attacks, as the biometric tokens used (e.g. fingerprints, iris, gait, …) are continuously observable with contemporary image and video technology.
Usable security schemes that exploit, for instance, fuzzy cryptography and multiple implicit feature patterns might, on the other hand, also be able to provide seamless context-based authentication among interface-less devices.(2)
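As a purely illustrative toy in this spirit (this is not the BANDANA protocol cited in (2), just a hypothetical sketch): two devices worn on the same body derive bit fingerprints from their local gait measurements and accept each other only if the fingerprints agree closely.

```python
# Toy illustration of implicit, feature-based device pairing (NOT the BANDANA
# protocol from reference (2)): two devices worn on the same body derive bit
# fingerprints from local gait measurements and accept each other only if the
# fingerprints are similar enough.

def binarize(gait_samples: list[float]) -> list[int]:
    """Quantize accelerometer-derived gait values around their mean."""
    mean = sum(gait_samples) / len(gait_samples)
    return [1 if s > mean else 0 for s in gait_samples]

def hamming(a: list[int], b: list[int]) -> int:
    """Count the positions where two fingerprints disagree."""
    return sum(x != y for x, y in zip(a, b))

device_a = binarize([0.9, 1.2, 0.7, 1.4, 0.8, 1.3, 0.6, 1.1])  # wrist-worn device
device_b = binarize([0.8, 1.1, 0.8, 1.3, 0.7, 1.2, 0.7, 1.0])  # same wearer, hip-worn

MAX_MISMATCH = 2  # tolerance for sensor noise between body positions
print("pairing accepted" if hamming(device_a, device_b) <= MAX_MISMATCH
      else "pairing rejected")
```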
Together with ubiquitous 5G instrumentation, the IoT has the potential to turn virtually every electronic device into a connected environmental sensor, able to track presence, activities, gestures and more.
3. What are the next steps you and your colleagues are working on?
I am expecting tremendous advances in AI methods and applied AI research in the next couple of years. These might go together with new approaches to process Big Data and to filter relevant information.
In particular, transfer learning will receive increased attention in the next few years. In addition, advances in deep learning models, for instance, towards applying deep learning classifiers on devices with restricted resources, will find their way into applied research in various fields.
Our group focuses on ambient intelligence and, in particular, on realizing machine learning for environmental perception on battery-less devices. Notably, we are not aiming for close-to-perfect recognition accuracy by exploiting tremendous processing and storage resources, as is common, for instance, in applied deep learning research. Instead, we aim to provide “good enough” accuracy at a minimum resource cost (power, CPU, storage).
It is sometimes most efficient, for example, to distribute processing and storage demands, and aggregation tasks can even be carried out during the simultaneous transmission of data on a wireless channel. We try to be as efficient as possible and also to involve, for instance, autonomous backscatter nodes that can draw part or all of their energy parasitically from the surrounding environment. Over time, this might lead to maintenance-free autonomous nodes that provide environmental perception.
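A hypothetical sketch of the “aggregation during simultaneous transmission” idea (the readings and noise model are made up for illustration): each resource-poor node transmits only its own value, and the aggregate emerges from the superposition on the channel, so no single node needs to collect or store all the data.

```python
import random

# Toy sketch of computing an aggregate "on the channel": each resource-poor node
# transmits only its own reading, and the wireless channel superimposes the
# simultaneous transmissions, so no single node must collect or store all data.

node_readings = [21.4, 22.1, 20.9, 21.7]  # e.g. local temperature estimates

def channel_superposition(transmissions: list[float]) -> float:
    """Model the channel adding simultaneously transmitted analog signals."""
    noise = random.gauss(0.0, 0.05)  # small additive channel noise
    return sum(transmissions) + noise

received = channel_superposition(node_readings)
estimated_mean = received / len(node_readings)  # receiver rescales once
print(f"aggregate estimate: {estimated_mean:.2f} (good enough at minimal per-node cost)")
```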
4. Reading the comments on documentaries about future scenarios, you find positive as well as critical voices. To sum up, the big critical point seems to be the issue of safety in an increasingly connected world. How far along is research on safety models to deal with this challenge?
Yes, safety, security and privacy are very important and pressing challenges. This is also reflected in the large recent interest in blockchains. The concept still leaves a number of challenges to be addressed, especially when it comes to scalability and latency.
However, recently, good solutions have been proposed, such as the OmniLedger and ByzCoin concepts brought forward by the group around Bryan Ford.(1) I am really excited about these developments.
5. Talking on a measurable scale: suppose ubiquitous computing being part of our everyday life – we use pens and tablets instead of mouse and keyboard, and our sweatshirt is a wearable user interface – is a 10 on our scale, while the static website content we are familiar with from the early 90s is a 1. Where are we now?
This is a difficult question.
I am not sure if static webpages would score a 1 on my scale for ubiquitous computing or whether it is actually possible to identify a single fixed starting point for any such dynamic and gradual development.
In my perception, we are not that far from integrating user interfaces into garments in actual end-user products. Prototypes of such clothing are regularly displayed at Ubicomp and other leading conferences, especially in HCI.
Also, garments as intelligent user interfaces could surely be advanced further by getting rid of the necessity for an interface at all. For instance, imagine implicit interaction and control triggered by your actions, your context and the way you behave, anticipating your input to the interface rather than reacting to it.(5)
To answer your question, I see us at the beginning of an exciting journey with many exciting advances still to be experienced.
6. So, how long will it be until we experience true pervasive computing?
Curiosity comes from being excited about learning things you do not yet know.
I feel curious about many directions in pervasive and ubiquitous computing, and I hope and expect that this feeling might prevail for a fairly long time.
7. Last question – fictional scenario: The time traveler Dr. Emmett Brown from the film series Back to the Future answers two questions from the future. What would you ask him?
-> Is the total transparency of all actors a viable solution to resolve privacy concerns?
-> What kind of experiment were you performing on October 26, 1985 that made all your watches run 25 minutes late?
We thank you for the many insights you gave us into the technologies behind our future. It is interesting to see the logical patterns behind these developments. Like you, we are curious to see how things will continue. We wish you all the best for the future.
Text Sources:
[1] Kokoris-Kogias, Eleftherios, et al. “OmniLedger: A Secure, Scale-Out, Decentralized Ledger.” IACR Cryptology ePrint Archive 2017 (2017): 406.
[2] D. Schürmann, A. Brüsch, S. Sigg and L. Wolf, “BANDANA — Body area network device-to-device authentication using natural gAit,” 2017 IEEE International Conference on Pervasive Computing and Communications (PerCom), 2017.
[3] Andreas Bulling, Ulf Blanke, and Bernt Schiele. 2014. A tutorial on human activity recognition using body-worn inertial sensors. ACM Comput. Surv. 46, 3, Article 33 (January 2014), 33 pages.
[4] S. Savazzi, S. Sigg, M. Nicoli, V. Rampa, S. Kianoush and U. Spagnolini, “Device-Free Radio Vision for Assisted Living: Leveraging wireless channel quality information for human sensing,” in IEEE Signal Processing Magazine, vol. 33, no. 2, pp. 45-58, March 2016.
[5] M. Elhamshary, A. Basalmah and M. Youssef, “A Fine-Grained Indoor Location-Based Social Network,” in IEEE Transactions on Mobile Computing, vol. 16, no. 5, pp. 1203-1217, May 2017.