Staff profile
Overview
Mr Yunzhan Zhou
Postgraduate Student
Affiliation: Postgraduate Student in the Department of Computer Science
Biography
He has built a 3D eye-tracking dataset collected while users navigated a VR museum. Based on this dataset, he proposed several machine learning and deep learning methods for predicting subsequent visual attention. He is now working on improving these prediction methods and on understanding the visual attention mechanism in VR.
The dataset can be found here: https://github.com/YunzhanZHOU/EDVAM
Research interests
- Deep Learning
- Eye Tracking
- Human-Computer Interaction
- User Interface
- Virtual Reality
- Visual Attention
Publications
Chapter in book
- Wang, J., Ivrissimtzis, I., Li, Z., Zhou, Y., & Shi, L. (2023). User-Defined Hand Gesture Interface to Improve User Experience of Learning American Sign Language. In C. Frasson, P. Mylonas, & C. Troussas (Eds.), Augmented Intelligence and Intelligent Tutoring Systems: 19th International Conference, ITS 2023, Corfu, Greece, June 2-5, 2023, Proceedings (479-490). Springer Verlag. https://doi.org/10.1007/978-3-031-32883-1_43
- Li, Z., Shi, L., Cristea, A., Zhou, Y., Xiao, C., & Pan, Z. (2022). SimStu-Transformer: A Transformer-Based Approach to Simulating Student Behaviour. In Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners’ and Doctoral Consortium (348-351). Springer, Cham. https://doi.org/10.1007/978-3-031-11647-6_67
- Zhou, Y. (2019). Towards personalized virtual reality touring through cross-object user interfaces. In Personalized Human-Computer Interaction. https://doi.org/10.1515/9783110552485-008
- Li, X., Chen, W., Zhou, Y., Athalye, S., Chin, W. K. D., Goh Wei Kit, R., …Hansen, P. (2019). Mobile Phone-Based Device for Personalised Tutorials of 3D Printer Assembly. In Human-Computer Interaction. Recognition and Interaction Technologies. https://doi.org/10.1007/978-3-030-22643-5_4
Conference Paper
- Wang, J., Ivrissimtzis, I., Li, Z., Zhou, Y., & Shi, L. (2023). Exploring the Potential of Immersive Virtual Environments for Learning American Sign Language. In Responsive and Sustainable Educational Futures: 18th European Conference on Technology Enhanced Learning, EC-TEL 2023, Aveiro, Portugal, September 4–8, 2023, Proceedings (459-474). https://doi.org/10.1007/978-3-031-42682-7_31
- Wang, J., Ivrissimtzis, I., Li, Z., Zhou, Y., & Shi, L. (2023). Developing and Evaluating a Novel Gamified Virtual Learning Environment for ASL. In Human-Computer Interaction – INTERACT 2023: 19th IFIP TC13 International Conference, York, UK, August 28 – September 1, 2023, Proceedings, Part I (459-468). https://doi.org/10.1007/978-3-031-42280-5_29
- Li, Z., Shi, L., Cristea, A. I., & Zhou, Y. (2021). A Survey of Collaborative Reinforcement Learning: Interactive Methods and Design Patterns. https://doi.org/10.1145/3461778.3462135
- Zhou, Y., Feng, T., Shuai, S., Li, X., Sun, L., & Duh, H. B. (2019). An Eye-Tracking Dataset for Visual Attention Modelling in a Virtual Museum Context. https://doi.org/10.1145/3359997.3365738
Journal Article
- Li, Z., Shi, L., Wang, J., Cristea, A. I., & Zhou, Y. (2023). Sim-GAIL: A generative adversarial imitation learning approach of student modelling for intelligent tutoring systems. Neural Computing and Applications, 35(34), 24369-24388. https://doi.org/10.1007/s00521-023-08989-w
- Zhou, Y., Feng, T., Shuai, S., Li, X., Sun, L., & Duh, H. B. (2022). EDVAM: a 3D eye-tracking dataset for visual attention modeling in a virtual museum. Frontiers of Information Technology & Electronic Engineering, 23(1), 101-112. https://doi.org/10.1631/fitee.2000318
- Sun, L., Zhou, Y., Hansen, P., Geng, W., & Li, X. (2018). Cross-objects user interfaces for video interaction in virtual reality museum context. Multimedia Tools and Applications, 77(21). https://doi.org/10.1007/s11042-018-6091-5