Proceedings of APSIPA Annual Summit and Conference 2017

12 - 15 December 2017, Malaysia

A Markerless Visual-motor Tracking System for Behavior Monitoring in DCD Assessment

Shengyan LI¹, Bin LI¹,², Shixiong ZHANG¹, Hong FU¹,*, Wai-Lun LO¹, Jie YU³, Cindy H. P. SIT³ and Ruimin LI¹,²

¹ Department of Computer Science, Chu Hai College of Higher Education, Hong Kong
² Xi'an Institute of Optics and Precision Mechanics of CAS, University of Chinese Academy of Sciences, Beijing, China
³ Department of Sports Science and Physical Education, The Chinese University of Hong Kong, Shatin, Hong Kong
* Corresponding Author, E-mail: [email protected], Tel: +852-29727250

Abstract—Developmental coordination disorder (DCD) is a disorder of visual-motor integration in children, so studying the coordination of vision and motion is essential for understanding the mechanism of DCD. Standardized assessments with visual-motor tasks are often used to detect impairments in eye-motion coordination, which is the most important component in the diagnosis of DCD. However, these assessments emphasize the motor performance of children while ignoring the detailed process of eye-motion coordination during the tasks. There is therefore a need for a digital visual-motor system that detects eye movement and body motion simultaneously during these tasks, in order to study the mechanism of eye-motion coordination in DCD. Existing eye-motion systems rely on markers to detect the joints and gaze points for integration; such systems are very expensive, and the markers may affect children's behavior during DCD assessment tasks. In this study, an integrated, markerless eye-motion system is developed to monitor children's behavior in DCD assessment. In this system, a multi-Kinect setup monitors body motion and extracts the skeletal joints, while the gaze point is detected by a head-mounted eye tracker and integrated with the body motion data using image processing methods. With the proposed system, the skeletal joints and gaze points can be obtained simultaneously for further digital DCD assessment.

I. INTRODUCTION

Developmental coordination disorder (DCD), also referred to as developmental dyspraxia, is a motor disorder affecting around 5%-6% of school-aged children. DCD affects fine and gross motor coordination in children and adults, leading to difficulty in learning, organizing and moderating motor skills, and it can have significant long-term effects on academic, psychosocial and vocational outcomes. To assess the motor impairments of children, tools such as the Movement Assessment Battery for Children - Second Edition (Movement ABC-2) [1], [2] and the Bruininks-Oseretsky Test of Motor Proficiency - Second Edition (BOT-2) [3] are used to assess general motor competence. However, DCD is also a disorder of visual-motor integration, so studying eye and motion coordination is essential for understanding the mechanism of DCD; it enables systematic and objective assessment for timely and proper intervention.

Current standardized assessment systems focus on the final performance of children on visual-motor tasks, while ignoring the detailed eye-motion coordination in performing these tasks. During the assessment, besides recording the results and monitoring the performance errors of the subject, the examiner is also required to qualitatively observe the examinee's gaze and posture during the tasks. In practice, it is hard to monitor so many items within a brief trial by human observation alone, and some phenomena in the tasks may be missed. Hence, digital recording and analysis are necessary for thorough observation and examination in DCD assessment.

To date, little work has been done on studying eye-motion coordination with digital technologies. Some manufacturers of motion capture systems and eye trackers offer commercial integrations that combine eye tracking and motion capture: for example, the SMI eye tracking glasses can be integrated into Qualisys Track Manager, and the Ergoneers Dikablis eye tracker can be integrated into the VICON motion capture system. With such integration, the coordination of eye and body motion can be tracked and analyzed. Essig et al. [4] implemented the VICON-SMI EyeTracking Visualizer system to compute 3D gaze together with hand motion tracking. In addition, a vision-in-action software package named Quiet Eye Solutions, consisting of an ASL eye tracker and a camera located 3 m from the subject, was used to analyze throwing and catching tasks in children with DCD [5]. In these systems, a set of markers must be fixed on the human body, and a group of cameras captures body movement by detecting these markers. The fixed markers may influence movement performance in certain tests, such as DCD assessment. Moreover, current eye-motion systems are very expensive, for both the eye tracker and the motion capture system. There is therefore a need for a digital eye-motion tracking system, markerless and affordable, with which to study the mechanism of eye-body motion coordination and to perform further digital and objective assessment.

In this study, a markerless digital system was developed to measure both eye movement and body motion for an in-depth, joint eye-motion analysis. Two sub-systems for evaluating gross and fine motor skills were implemented with a head-mounted eye tracker and multiple Kinects. A fusion algorithm for eye tracking and motion capture was developed to integrate eye and body motion information, and the patterns and correlations of eye-motion tracking are then analyzed for visual-motor behavior in DCD assessment.

In the remainder of this paper, the developed markerless eye-motion system designed for DCD assessment with Movement ABC-2 is briefly introduced in Section II. Section III then illustrates, through two tasks, how the developed system is used to assess fine and gross motor skills. In Section IV, a task from the DCD assessment is digitally analyzed as an example. Finally, conclusions are given in Section V.


II. SYSTEM DESIGN

This section describes the markerless visual-motor tracking system designed to monitor gross and fine motor skills in the tasks of DCD assessment. The system consists of a binocular eye tracker (Pupil Labs) for eye tracking and Microsoft Kinect V2 sensors for motion capture.

A. Eye Tracking with Pupil Labs Eye Tracker

The head-mounted eye tracker used in the proposed system is a binocular eye tracker produced by Pupil Labs (Berlin, Germany), configured with one world camera and two eye cameras (one for each eye). It can estimate where the subject is looking in 3D (vergence) and provides robust binocular eye movement data. The eye and world cameras can be operated from high-resolution capture at 30 Hz to low-latency (5.7 ms) capture at 120 Hz. The gaze accuracy of this eye tracker is 0.60 ± 0.08 degrees.
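For readers who wish to log this gaze stream programmatically, the sketch below shows one plausible route: Pupil Capture publishes gaze data over a local ZeroMQ/msgpack interface. The port number (50020) and the field names norm_pos and confidence reflect that interface's documented defaults; treat them as assumptions to be checked against the installed Pupil software.

```python
import zmq
import msgpack

# Connect to Pupil Capture's remote-control socket (default port 50020)
# and ask for the port of its data-publishing socket.
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Subscribe to all gaze topics on the data socket.
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

while True:
    # Each message is a topic frame plus a msgpack-encoded payload.
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload, raw=False)
    # norm_pos is the gaze point in normalized world-camera coordinates
    # (assumed field name); confidence is the pupil-detection confidence.
    x, y = gaze["norm_pos"]
    print(f"{topic.decode()}: gaze=({x:.3f}, {y:.3f}) conf={gaze['confidence']:.2f}")
```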

B. Motion Capture with Microsoft Kinect for Windows V2

A Microsoft Kinect for Windows V2 (Kinect V2) set consists of an Xbox Kinect sensor and a Kinect adapter for Windows. Kinect V2 can track up to six people and 25 skeletal joints per person simultaneously at a range of 0.5-4.5 m, and no markers are needed for skeleton detection. Both a depth image with a resolution of 512×424 pixels and a 1080p color image can be obtained at a sampling rate of 30 Hz. The field of view of the depth camera is 70×60 degrees.
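The paper does not state how the skeleton stream is read out in software; one option is the community pykinect2 Python bindings, assumed here (the official SDK targets C++/C#). This minimal sketch polls body frames and reads the camera-space position of one of the joints used later for the angles θ1 to θ3.

```python
from pykinect2 import PyKinectV2, PyKinectRuntime

# Open the sensor with body (skeleton) tracking enabled.
kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Body)

while True:
    if not kinect.has_new_body_frame():
        continue
    bodies = kinect.get_last_body_frame()
    if bodies is None:
        continue
    for i in range(kinect.max_body_count):
        body = bodies.bodies[i]
        if not body.is_tracked:
            continue
        # Camera-space position (meters) of a joint used for the angles
        # theta_1..theta_3 defined in Fig. 1, e.g. the right elbow.
        elbow = body.joints[PyKinectV2.JointType_ElbowRight].Position
        print(f"ElbowRight: ({elbow.x:.3f}, {elbow.y:.3f}, {elbow.z:.3f})")
```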

C. Eye-motion Tracking System

The Movement ABC-2 is employed in this study for DCD assessment; it includes eight tasks for each of three age bands. Currently, the system is designed for age band 2 (7-10 years), with the tasks of manual dexterity (placing pegs, threading lace and drawing trail), aiming & catching (catching with two hands, throwing a beanbag onto a mat), and balance (one-board balance, walking heel-to-toe forwards and hopping on mats). Fig. 1 shows the proposed eye-motion tracking system for DCD assessment with Movement ABC-2, comprising the fine movement system shown in Fig. 1(a) and the gross movement system shown in Fig. 1(b).

The fine movement system in Fig. 1(a) is designed for the manual dexterity tasks. It consists of one head-mounted eye tracker and one Kinect V2 placed in front of the subject, and it monitors the gaze position and the skeletal joints of the upper body while the subject sits at a table and performs Task 1 to Task 3.

The gross movement system in Fig. 1(b) is designed for the aiming & catching and balance tasks. It consists of a head-mounted eye tracker and three Kinect V2 sensors, and it tracks the subject's gaze and whole-body motion from three different views in Task 4 to Task 8. In these tasks, some or all of the Kinects are used for tracking and analysis. As shown in Fig. 1(b), Kinect set 1 and set 2 are placed near a wall, against which a ball is thrown in Task 4. A yellow line parallel to the wall is used in Task 4 and Task 7; the sample images in Fig. 1(b) show that Task 7 is conducted along this line. Except in Task 7, the subject faces the wall while performing the other gross motor tasks. In these settings, Kinect sets 1 and 2 usually capture more useful motion information than set 3.
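The paper does not specify how the three views are combined when one Kinect loses sight of a joint. One simple, hypothetical per-joint fusion rule is sketched below: once all skeletons are expressed in a common coordinate frame (see the registration step later in this section), each joint is kept from whichever sensor reports it with the highest tracking state.

```python
import numpy as np

# Kinect V2 joint tracking states (per the Kinect SDK).
NOT_TRACKED, INFERRED, TRACKED = 0, 1, 2

def fuse_joints(positions, states):
    """Fuse skeletons from several Kinects into one 25-joint skeleton.

    positions: (n_kinects, 25, 3) joint positions in a common frame.
    states:    (n_kinects, 25) tracking state per sensor and joint.
    Returns a (25, 3) array where each joint comes from the sensor
    that observed it with the highest tracking state.
    """
    best = np.argmax(states, axis=0)        # best sensor per joint
    return positions[best, np.arange(25)]   # pick its 3D estimate

# Example: if set 3 sees the right wrist only as "inferred" while
# set 1 tracks it fully, the fused skeleton keeps set 1's estimate.
```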

Fig. 1 The proposed eye-motion tracking system. (a) Fine movement system. Task 1: placing pegs, Task 2: threading lace, Task 3: drawing trail. (b) Gross movement system. Task 4: catching with two hands, Task 5: throwing a beanbag onto a mat, Task 6: one-board balance, Task 7: walking heel-to-toe forwards, Task 8: hopping on mats. The angles θ1 to θ3 illustrated here are used in Section IV: θ1 is the angle defined by the joints SpineMid, SpineBase and HipRight; θ2 by SpineShoulder, ShoulderRight and ElbowRight; and θ3 by ShoulderRight, ElbowRight and WristRight (in each case the vertex is the middle joint).

To analyze the visual and motor behavior of the subject, the proposed system needs to integrate the images and data from one to three Kinects and one eye tracker through image registration. In the proposed system, all data are usually transformed into the coordinate system of one of the Kinect sets.
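The transform between Kinect coordinate systems is not detailed in the paper; a standard choice, sketched here as an assumption, is a least-squares rigid transform estimated from corresponding 3D points (for example, the same skeletal joints seen simultaneously by two sensors) using the Kabsch/SVD method.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) such that dst ≈ R @ src + t.

    src, dst: (n, 3) arrays of corresponding 3D points, e.g. the same
    skeletal joints observed simultaneously by Kinect set 2 and set 1.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Calibrate once from joints visible to both sensors, then express
# every set-2 joint in set 1's coordinate system:
#   R, t = rigid_transform(joints_set2, joints_set1)
#   joints_in_set1 = joints_set2 @ R.T + t
```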

III. TASKS FOR DCD ASSESSMENT

In this section, two tasks of the Movement ABC-2, Task 1 (placing pegs) and Task 5 (throwing a beanbag onto a mat), are used as examples to illustrate how the fine and gross movement systems work. A group of images obtained by the eye tracker and Kinect during a trial of Task 1 illustrates the fine movement system, and one trial of Task 5 is used to introduce how the gross movement system is applied in DCD assessment.



A. Example for Fine Motor Skills

The eye-motion tracking system for fine motor skill assessment consists of an eye tracker, which captures the world image and monitors the gaze point, and a Kinect, which tracks the upper-body motion. Fig. 2 shows a pair of images from a trial of Task 1, captured by the world camera of the eye tracker and the color camera of the Kinect. In Fig. 2, the subject is placing pegs with the right hand: at this moment, the subject has picked up a yellow peg and aims to insert it into the middle hole in the second row of the blue pegboard. From Fig. 2(a), it is clear that the gaze point of the subject (green circle) lies exactly on the target hole, which also indicates that the eye tracker works well. At the same time, the Kinect image in Fig. 2(b) captures the whole scene of the task, including the field of view of the eye tracker's world camera and the motion of the subject. Besides the skeleton information obtained by the Kinect, the pose estimation algorithm developed by Cao et al. [6] is also employed to extract the skeletal joints of the upper body. With these skeletal joints, posture and body control, such as sitting posture and head behavior, can be digitally extracted and further analyzed.

Fig. 2 A frame from the fine movement system used in Task 1. (a) Image captured by the eye tracker, with the gaze point shown as a green circle. (b) Image captured by the Kinect, integrated with the skeletal joints and the gaze point.

In image registration, the gaze point in the eye-tracking image can be transformed into the Kinect image using an image matching method. Fig. 2(b) shows the result of this eye-motion integration, in which the skeletal joints (red) and the gaze point (green) are digitally integrated and recorded in the same coordinate system. Hence, the eye and body movements can be analyzed simultaneously and their relationship examined further: we can not only qualitatively observe whether the subject looks at the board while inserting pegs, but also quantitatively measure the locations of the gaze and the skeletal joints. The visual-motor integration clearly shows that the eye movement always acts before the body movement. In Task 1, subjects first perform a visual search for the target hole and fixate on it before inserting the peg; once the peg contacts the hole, they look at the box to search for the next peg.
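The image matching method is not named in the paper. A plausible OpenCV-based implementation, sketched below as an assumption, matches ORB features between the eye tracker's world image and the Kinect color image, estimates a homography with RANSAC, and maps the gaze point through it.

```python
import cv2
import numpy as np

def map_gaze_to_kinect(world_img, kinect_img, gaze_xy):
    """Map a gaze point (pixels in the eye tracker's world image)
    into the Kinect color image via feature matching + homography."""
    g1 = cv2.cvtColor(world_img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(kinect_img, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)

    # Brute-force Hamming matching with cross-check for ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Transform the gaze point into Kinect pixel coordinates.
    pt = np.float32([[gaze_xy]])              # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]
```

A homography is only exact when the two views see a roughly planar scene (here, the tabletop), which is why this fits the fine movement setup; for the gross movement system a full 3D registration would be more appropriate.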

B. Example for Gross Motor Skills

In Task 5, the subject stands on a yellow mat and throws a red beanbag onto the red-circle target of a blue mat. The head-mounted eye tracker and Kinect set 1 and set 2 shown in Fig. 1(b) are used to track the gross motor skill in this task. Fig. 3 shows a group of images taken while the beanbag is in flight during a trial of Task 5. Fig. 3(a) is the world image with the gaze point obtained by the eye tracker; the green circle marks the gaze point of the subject. The color images obtained by Kinect set 2 and set 1 are shown in Fig. 3(b) and (c); they monitor the whole process of the task, including the body behavior and the view of the eye tracker's world camera. In this study, Kinect set 1 serves as the reference, and all data are transformed into its coordinate system. The resulting image, integrating the body skeletal joints and the gaze point, is shown in Fig. 3(c).

Kinect set 1

Beanbag

Target mat

Hand of subject Gaze point

(a) World image and gaze data (green circle) by eye tracker

(a) Image captured by eye tracker with gaze point Beanbag

Skeletal joints (b) Information by Kinect 2 Gaze point (b) Image captured by Kinect integrated with skeletal joints and gaze point Fig. 2 A frame of fine movement system used in Task 1.


Fig. 3 A frame from the gross movement system used in Task 5. (a) World image and gaze point (green circle) captured by the eye tracker. (b) Color image and skeletal joints from Kinect set 2. (c) Image from Kinect set 1, integrating the skeletal joints and the eye gaze point.

In Task 5, gaze behavior differs considerably across subjects. Some always focus on the target (red circle) before throwing and track the beanbag after throwing; their scores in this assessment tend to be higher. Others spend only a very short time on the target and the beanbag, and their scores are relatively lower.

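The paper reports this difference qualitatively. One simple quantitative proxy (an illustrative assumption, not the authors' metric) is the dwell time of the gaze inside the target circle, computed from the per-frame gaze points in the integrated coordinate system.

```python
import numpy as np

def dwell_time(gaze_xy, center, radius, fps=30.0):
    """Total time (seconds) the gaze spends inside a circular target ROI.

    gaze_xy: (n_frames, 2) gaze points in Kinect-image pixels.
    center, radius: target circle in the same pixel coordinates.
    fps: frame rate of the integrated recording (30 Hz for Kinect V2).
    """
    d = np.linalg.norm(gaze_xy - np.asarray(center), axis=1)
    return float(np.count_nonzero(d <= radius)) / fps

# Example: gaze inside the target for 12 of 16 frames at 30 Hz -> 0.4 s.
```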


IV. DATA ANALYSIS FOR TASK 5

In this section, the whole process of one trial of Task 5, spanning 16 frames, is used to demonstrate data analysis based on the digital information obtained by the proposed system. The skeletal joints extracted for the subject in this trial are shown in Fig. 4. Some joints from a single Kinect set may be wrong due to occlusion; the advantage of the multi-Kinect setup is that it captures body motion from different views, which avoids such occlusion-induced mistakes.

Fig. 4 Skeletal joints extracted in a trial of Task 5.

To analyze the body behavior, joint parameters such as the displacement, velocity and acceleration of each joint, and the angles among joints, should be extracted and calculated to evaluate human performance in DCD assessment. From the image record, the main gestures of the subject in Task 5 are arm waving and leaning the body forward. Hence, the three angles θ1 to θ3 shown in Fig. 1(b) are extracted from the digital skeletal joints to express these behaviors: θ1 measures the gesture of leaning forward, while θ2 and θ3 measure the waving of the upper arm and lower arm, respectively. Fig. 5 shows the resulting angles for the trial of Task 5 in Fig. 4.

Fig. 5 Results of the angles θ1 to θ3 (in degrees) over the 16 frames of a trial of Task 5.

The angle θ2 shows that the subject completes one upper-arm wave in each trial. From the image recording, the beanbag leaves the hand at Frame 7; the angle θ3 is largest in this frame, and the angle θ1 shows that the subject leans forward before throwing the beanbag.
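As a sketch of how θ1 to θ3 can be computed from the extracted 3D joints (assuming, as Fig. 1 indicates, that each angle's vertex is its middle joint), the function below returns the angle at joint b between the segments b→a and b→c; per-joint speed follows from finite differences of the joint track.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by segments b->a and b->c.

    a, b, c: 3D joint positions, e.g. SpineShoulder, ShoulderRight
    and ElbowRight for theta_2 as defined in Fig. 1(b).
    """
    u = np.asarray(a) - np.asarray(b)
    v = np.asarray(c) - np.asarray(b)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def joint_speed(track, fps=30.0):
    """Per-frame speed (m/s) of one joint from an (n_frames, 3) track."""
    return np.linalg.norm(np.diff(track, axis=0), axis=1) * fps

# Example for theta_3 at the right elbow in one frame:
#   theta3 = joint_angle(shoulder_right, elbow_right, wrist_right)
```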

V. CONCLUSIONS

In this paper, a markerless eye-motion tracking system was developed for monitoring eye and body movements in DCD assessment using the Movement ABC-2. It consists of three Kinect V2 sensors and one eye tracker for the gross movement assessment, and one Kinect V2 and one eye tracker for the fine movement assessment. In this system, the world image and gaze point are obtained by the eye tracker, while the color images of the whole process and the skeletal joints of the subject are obtained by the Kinects from different views. Using image processing methods, the gaze point and body motion are integrated into the same coordinate system, so that the whole process can be analyzed easily. A trial of Task 1 (placing pegs) and a trial of Task 5 (throwing a beanbag onto a mat) were used to illustrate the fine and gross movement systems, respectively. The digital data for the gaze and skeletal joints can be extracted for monitoring the whole trial and further analyzing the subject's behavior. The next step of the project is to improve the accuracy of motion detection and to further analyze subject behavior in different tasks. The proposed system could also serve as a valuable reference for other disorders related to eye-motion integration.

ACKNOWLEDGMENT

The work described in this paper was fully supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Reference No.: UGC/FDS13/E02/16).



REFERENCES

[1] S. E. Henderson, D. A. Sugden and A. L. Barnett, Movement Assessment Battery for Children, 2nd ed. (Movement ABC-2). London, UK: The Psychological Corporation, 2007.
[2] J. Yu, C. H. Sit, C. M. Capio, A. Burnett, A. S. Ha and W. Y. Huang, "Fundamental movement skills proficiency in children with developmental coordination disorder: does physical self-concept matter?" Disability and Rehabilitation, vol. 38, no. 1, pp. 45-51, 2016.
[3] R. Bruininks and B. Bruininks, Bruininks-Oseretsky Test of Motor Proficiency, 2nd ed. Minneapolis, MN: NCS Pearson, 2005.
[4] K. Essig, D. Dornbusch, D. Prinzhorn, H. Ritter, J. Maycock and T. Schack, "Automatic analysis of 3D gaze coordinates on scene objects using data from eye-tracking and motion-capture systems," in Proceedings of the Symposium on Eye Tracking Research and Applications, ACM, March 2012, pp. 37-44.
[5] C. A. L. Miles, G. Wood, S. J. Vine, J. N. Vickers and M. R. Wilson, "Quiet eye training facilitates visuomotor coordination in children with developmental coordination disorder," Research in Developmental Disabilities, vol. 40, pp. 31-41, 2015.
[6] Z. Cao, T. Simon, S.-E. Wei and Y. Sheikh, "Realtime multi-person 2D pose estimation using part affinity fields," arXiv preprint arXiv:1611.08050, 2016.

