With the prevalence of low-cost, low-power computing devices and their application in a wide range of real-world scenarios, it is becoming increasingly evident that we will work and live in smart environments in the future. A smart environment is a physical space in which miniature computing devices perceive, monitor, and interact with human users. A compelling real-world example of such an environment is a “Smart Home” – an augmented home environment within which the daily activities of its inhabitants, usually the elderly or disabled, are monitored and analysed so that personalised context-aware assistance can be provided. Other examples include intelligent meeting rooms, conference centres, research environments, hospitals and intelligent cars, to name but a few.
At present, many of the underpinning technologies required to realise smart environments, such as sensor networks and communication platforms, have become financially affordable and technically mature, and a number of lab-based smart environments and real-world intelligent physical spaces have been developed. As such, monitoring an environment and the behaviours of the individuals situated within it, and collecting large volumes of sensor data, are no longer a major research focus. Research has instead shifted towards high-level data/information/knowledge processing, e.g., data fusion, context and activity recognition, and decision-making support for situated individuals, in order to deliver rich user experiences in smart environments. While many approaches, methods and algorithms have been proposed for this purpose, most have been evaluated only in rudimentary scenarios, e.g., an environment containing a single inhabitant performing a single activity. Many real-world scenarios, e.g., interleaved, concurrent and overlapped activities, as well as multi-user activities, have not been addressed.
This project aims to develop novel yet pragmatic approaches and methods for multi-user activity recognition, i.e., activity recognition in multiple-occupancy scenarios. The ultimate purpose is to enable smart environments to support real-world use cases and advanced features of assistive living. The research will build upon existing research results but go beyond current approaches and methods by exploiting the latest information-processing principles, methodologies and technologies, e.g., domain knowledge-driven data/information processing. Special emphasis will be placed on the modelling and representation of multi-user activities, and on the corresponding recognition algorithms.
The research will be conducted within the smart environment located in the School, and will be able to avail of a wide range of resources to support experimentation. The context of the work will relate to ambient assisted living or intelligent research/meeting environments.
1. Chen L., Nugent C.D., Wang H., “A Knowledge-Driven Approach to Activity Recognition in Smart Homes”, IEEE Transactions on Knowledge and Data Engineering, IEEE Computer Society, ISSN 1041-4347, doi:10.1109/TKDE.2011.51, 2011
2. Philipose M., Fishkin K.P., Perkowitz M., Patterson D.J., Hahnel D., Fox D., Kautz H., “Inferring Activities from Interactions with Objects”, IEEE Pervasive Computing: Mobile and Ubiquitous Systems, Vol. 3, No. 4, pp. 50–57, 2004
3. Gu T., Wu Z., Tao X., Pung H.K., Lu J., “epSICAR: An Emerging Patterns based Approach to Sequential, Interleaved and Concurrent Activity Recognition”, in Proceedings of the 2009 IEEE International Conference on Pervasive Computing and Communications, 2009
First Supervisor: Chen, L Dr
Second Supervisor: Nugent, CD Professor
Collaboration: This project does not involve collaboration with another establishment