Speaker
Mr Cheng-Yen Lin (National Taipei University of Technology)
Description
People are the primary users of cities, and urban development exists mainly to meet their service needs and to create different user experiences. Understanding how users behave, move, orient themselves, and aggregate in urban space, and observing and collecting data on these patterns, has therefore always been an important issue for developing the smart cities of the future. With advances in technology, the observation of people in cities has shifted from manual on-site investigation to technological applications: deep learning can detect and interpret the interaction between people and space from surveillance systems or aerial drone imagery, and machine learning can infer the movement of large numbers of people through space from a small amount of locally collected data. Although these technologies have gradually matured, they apply only in certain specific settings, such as indoor spaces and low forests, and the shooting characteristics of the images they rely on impose many limitations and leave many features undetectable. This study therefore proposes using 360-degree cameras to capture footage of human-space interaction in different spaces, integrating pre-processing, image optimization, and deep-learning-based image recognition to analyze people's positions in space, read basic attributes, interpret movement lines and directions, and identify aggregation patterns. Through the collection and analysis of large amounts of 360-degree panoramic imagery, we hope to develop an application system for near-real-time, semi-automatic recognition of users' characteristics and behavior patterns in space.
The research process is divided into three parts: shooting and processing the 360-degree panoramic video, identifying and tracking user features in the video, and outputting, analyzing, and interpreting user paths. For image data collection, this study uses a camera with a 360-degree panoramic view. For user detection, YOLOv4 is integrated as the detection algorithm. To give each identified pedestrian a consistent label for analysis, the DeepSORT algorithm is also used to track the behavior of specific individuals; the coordinates obtained from the 360-degree video are then converted to a top view, and the pedestrians' movement tracks are superimposed on a plan map of the research field to judge the relationship between users and space in the field.
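A minimal sketch of this detection-and-tracking stage is given below. It assumes OpenCV's DNN module for loading YOLOv4 and the deep-sort-realtime package for DeepSORT; the file names (yolov4.cfg, yolov4.weights, panorama.mp4), the camera mounting height, and the flat-ground projection used to convert equirectangular pixels to top-view coordinates are illustrative assumptions, not the authors' exact implementation.

# A minimal sketch of the detection-and-tracking stage described above,
# assuming OpenCV's DNN module for YOLOv4 and the deep-sort-realtime
# package for DeepSORT. File names, the camera mounting height, and the
# flat-ground projection are illustrative placeholders.
import math

import cv2
from deep_sort_realtime.deepsort_tracker import DeepSort

CAMERA_HEIGHT_M = 2.5  # assumed mounting height of the 360-degree camera

# Load YOLOv4 (Darknet cfg/weights assumed to be available locally).
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

tracker = DeepSort(max_age=30)  # keep an ID alive for 30 missed frames


def equirect_to_ground(u, v, width, height, cam_h=CAMERA_HEIGHT_M):
    """Project an equirectangular pixel onto the ground plane (top view).

    Simplified model: flat ground, level camera, visible foot point.
    Azimuth spans 0..2*pi across the image width; the polar angle spans
    0..pi down the image height, with the horizon at pi/2.
    """
    theta = 2.0 * math.pi * u / width
    phi = math.pi * v / height
    if phi <= math.pi / 2 + 1e-3:  # at or above the horizon: no ground hit
        return None
    d = cam_h / math.tan(phi - math.pi / 2)  # radial ground distance
    return d * math.cos(theta), d * math.sin(theta)


cap = cv2.VideoCapture("panorama.mp4")  # stitched 360-degree footage
paths = {}  # track_id -> list of (x, y) ground-plane points

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]

    # Detect pedestrians (COCO class 0 is "person").
    classes, scores, boxes = model.detect(
        frame, confThreshold=0.5, nmsThreshold=0.4)
    detections = [(list(box), float(score), "person")
                  for cls, score, box in zip(classes, scores, boxes)
                  if int(cls) == 0]

    # Associate detections across frames so each pedestrian keeps one ID.
    for track in tracker.update_tracks(detections, frame=frame):
        if not track.is_confirmed():
            continue
        x1, y1, x2, y2 = track.to_ltrb()
        foot_u, foot_v = (x1 + x2) / 2.0, y2  # bottom centre = foot point
        ground = equirect_to_ground(foot_u, foot_v, w, h)
        if ground is not None:
            paths.setdefault(track.track_id, []).append(ground)

cap.release()

Taking the bottom centre of each bounding box as the foot point is what lets a single flat-ground projection recover a plausible top-view position from one camera.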
The final output will include pedestrian path records, pedestrian aggregation patterns, and space-use probabilities. These data can be applied to simulate pedestrian behavior patterns in space and thus to predict behavior during particular events. Combining the output data with this analysis allows predictions that more closely reflect actual space use, improving their reliability and availability. Finally, a clear, standardized data format will be formulated for coordinate output, so that future coordinate data can be extended to more existing spatial-analysis systems.
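As an illustration of what such a standardized coordinate output might look like, the hypothetical exporter below writes one CSV row per track point; the column set is an assumption for illustration, not the format the authors will formulate.

# A hypothetical example of the standardized coordinate output mentioned
# above: one CSV row per track point, so the data can be passed on to
# existing spatial-analysis systems. The column set is an assumption for
# illustration, not the authors' published schema.
import csv


def export_tracks(paths, fps, out_path="tracks.csv"):
    """Write {track_id: [(x, y), ...]} to CSV.

    `point` is the index of the sample within its track; `time_s` is the
    elapsed time since that track first appeared, derived from the frame
    rate.
    """
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["track_id", "point", "time_s", "x_m", "y_m"])
        for track_id, points in paths.items():
            for i, (x, y) in enumerate(points):
                writer.writerow(
                    [track_id, i, round(i / fps, 3), round(x, 3), round(y, 3)])


# Usage with the `paths` dictionary built by the tracking loop above:
# export_tracks(paths, fps=30.0)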
Primary author
Mr Cheng-Yen Lin (National Taipei University of Technology)
Co-author
Prof. Sheng-Ming Wang (National Taipei University of Technology)