Assoc. Prof. Mohammed A.-M. Salem
Faculty of Media Engineering and Technology, German University in Cairo, GUC
Computer Vision, Machine Learning
Speech title: Computer Vision and IoT for Safety and Security in Smart Cities
Safety and security are the most fundamental needs of any sustainable community. Advances in information and communication technologies (ICT) have enabled new solutions that save cost, time, and effort while ensuring high levels of safety and security.
ICT is pushed to its limits in smart cities. These are private housing compounds or governmental public cities that use smart solutions for city management and offer smart services to their inhabitants. They use IoT technology to install sensors throughout the city, collecting data to manage assets and resources efficiently. They also develop simple, integrated applications that allow inhabitants to access and interact with the system directly yet remotely. More and more compounds and small gated settlements promise a high level of security and safety, along with smart-city capabilities, to attract new homeowners.
Smart-camera technology is moving toward truly intelligent camera units, in which advanced surveillance cameras are integrated with computing and communication boards. Advanced video surveillance algorithms based on deep learning are implemented on these computing boards to deliver accurate and fast scene understanding.
In this talk we address advances in deep learning models and their application to video surveillance for smart cities. Conventional and deep-learning video processing techniques are available for problems such as people and vehicle counting, tracking, identification, and abnormal-action detection. However, there is a strong need for models that are more accurate, robust, and general, so that they fit different scene conditions and handle a wide variety of cases. Implementing such models on embedded devices with low computing power, to enable real-time processing, is a further challenge addressed in this talk.
On the backend, a deeper level of analysis is needed to integrate the results from the distributed cameras and the different algorithms, producing a complete picture of the scene and the security status of the city.
Assoc. Prof. Dr. Wang Qixin
Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China
Real-time/Embedded Systems, Cyber-physical Systems
Speech title: Software Engineering Issues in Cyber-Physical Systems
Cyber-Physical Systems (CPS) result from the inevitable merging of computers with other domains of physical-world practice. A key challenge for CPS is software engineering: how to coordinate experts of disparate backgrounds to build dependable, maintainable, and efficient complex CPS. This challenge is exacerbated as each cyber and physical domain technology advances, resulting in combinatorial growth of the cross-domain interaction and interference complexity.
The CPS software engineering challenge implies a huge problem space spanning multiple dimensions. In this talk, we aim to introduce this problem space to a broad audience, to reveal its abundant academic and practical value, and to inspire new solutions.
Specifically, in this talk, we shall explore the CPS software engineering problem space with several concrete examples. Through these examples, we shall demonstrate how the CPS software engineering problems differ from traditional software engineering problems, how the problems are formulated, and how cross-domain thinking can help address these problems.
Assoc. Prof. Attila Fazekas
Faculty of Informatics, University of Debrecen, Hungary
Image Processing, Pattern Recognition, Multi-modal Human-Computer Interaction
Speech title: Face detection, facial gesture recognition and their role in multi-modal human-computer interaction
In human communication, the recognition and interpretation of facial gestures is of great importance; the human face can be regarded as a communication interface. This gives the face a serious role in human-computer interaction as well.
During multi-modal human-computer interaction, the emotional state, gender, age, and gaze direction of the user can be determined from the face. This information can be used to customize the communication, making it much more human-like. From the point of view of digital image processing, several tasks can be identified: face detection, recognition of facial gestures, and determination of gaze direction.

Specifically, in this talk we give a general overview of the problems of face detection and facial gesture recognition. We summarize state-of-the-art results and demonstrate how these algorithms can be used in multi-modal human-computer interaction.