Research Article

Projection Analysis Optimization for Human Transition Motion Estimation

Wanyi Li, Feifei Zhang, Qiang Chen, and Qian Zhang

Department of Computer Science, Guangdong University of Education, Guangzhou, Guangdong 510303, China

Correspondence should be addressed to Wanyi Li; [email protected]

Received 7 November 2018; Revised 18 March 2019; Accepted 9 April 2019; Published 2 June 2019

Academic Editor: Jintao Wang

Copyright © 2019 Wanyi Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

International Journal of Digital Multimedia Broadcasting, vol. 2019, Article ID 6816453, 9 pages, https://doi.org/10.1155/2019/6816453

It is a difficult task to estimate the human transition motion without specialized software. 3-dimensional (3D) human motion animation is widely used in video games, movies, and so on. When making such animation, human transition motion is necessary. If there is a method that can generate the transition motion, production time is reduced and working efficiency improved. Thus a new method called latent space optimization based on projection analysis (LSOPA) is proposed to estimate the human transition motion. LSOPA is carried out with the assistance of Gaussian process dynamical models (GPDM); it builds an objective function to optimize the data in the low dimensional (LD) space, and the optimized LD data are used to generate the human transition motion. LSOPA enables the GPDM to learn the high dimensional (HD) data and estimate the needed transition motion. The performance of LSOPA is evaluated in the experiments.

1. Introduction

3-dimensional (3D) human motion animation is applied in many fields, such as video games and movies. It is necessary to estimate the human transition motion for making all kinds of 3D animations [1-4]. Estimating the human transition motion is crucial to making smooth 3D human motion animation; it is a branch of human motion estimation. Among recent technologies of human motion estimation there are several advanced methods, such as multiview image segmentation [5], sparse representation [6], and a convolutional neural network (CNN) coupled with a geometric prior [7]. These methods focus on reconstructing the 3D human motion from a 2-dimensional (2D) image sequence. The needed data are the high dimensional data of the 3D human motion model, and a mapping is built between the model and the 2D image for each frame. However, it is difficult to build this mapping without overcomplete prior information, as a result of the data complexity during the optimization. Some human poses contain limb ambiguity; for example, it is hard to determine from a silhouette which thigh is in front, so the right thigh and the left thigh cannot be distinguished. If we have enough prior information to resolve the ambiguity, the reconstruction can be achieved easily. Thus a generative model of 3D human motion is necessary; samples from the model can be used to construct the prior information. Besides, the generative model can also produce 3D human motion for making 3D character movies. Such a generative model can be built through unsupervised learning, which is a necessary supplement to the advanced methods above. In this paper, how to generate the human transition motion is discussed in the following sections.

If there is a method that can estimate a valid human transition motion, animation making takes less time and the work gets easier. Thus a new method called latent space optimization based on projection analysis (LSOPA) is proposed to estimate the human transition motion. LSOPA is combined with Gaussian process dynamical models (GPDM) [8] to process the low dimensional (LD) data. GPDM is derived from several dimension reduction models [9-13]; it can provide predictions of the LD data. After dimension reduction, the LD data are optimized by LSOPA so that a valid human transition motion can be generated. The human motion is described by high dimensional (HD) data. If an HD data sample is searched for directly in its own high dimensional space, invalid data will be generated;




it means the generated 3D human motion will be abnormal [14]. GPDM is an unsupervised learning model; it can learn the high dimensional (HD) data samples and estimate new ones, but it needs to process the LD data in the LD space. In the LD space, the LD data can be searched, and the corresponding valid HD data can be generated by the mapping from LD data to HD data. Some methods [15-17] can process the LD data, but the LD data they generate are unreliable and undetermined during the optimization, as a result of the randomization of these methods. LSOPA can do this work better and ensure that a valid transition motion is generated. The performance of LSOPA is tested in the experiments in the corresponding section.

The human motion is described by a 3D human motion model. The model has markers that show the limbs of the human motion; these form the HD data. When we make a 3D human motion animation and only have samples of two irrelevant human motions, it is necessary to use a transition motion to connect the two motions, so that a complete and smooth 3D human motion is constructed. Meanwhile, the transition motion consists of many poses, and the poses are all HD data samples of the 3D human model; thus estimating a valid transition motion is a challenging task. However, LSOPA can take advantage of the GPDM to generate a valid transition motion and avoid generating invalid poses. The proposed method is discussed in the following sections.

2. Dimension Reduction

When we have the sequence of HD data samples $\mathbf{Y} = [\mathbf{y}_1, \ldots, \mathbf{y}_i, \ldots, \mathbf{y}_N]^T \in \mathbb{R}^{N \times D}$, $\mathbf{y}_i \in \mathbb{R}^{D}$, the corresponding LD data $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_i, \ldots, \mathbf{x}_N]^T \in \mathbb{R}^{N \times q}$, $\mathbf{x}_i \in \mathbb{R}^{q}$, can be obtained from the following equations of GPDM [8]:

$$p(\mathbf{Y} \mid \mathbf{X}, \bar{\beta}, \mathbf{W}) = \frac{|\mathbf{W}|^N}{\sqrt{(2\pi)^{ND}\,|\mathbf{K}_Y|^{D}}} \exp\left(-\frac{1}{2}\operatorname{tr}\left(\mathbf{K}_Y^{-1}\mathbf{Y}\mathbf{W}^2\mathbf{Y}^T\right)\right) \quad (1)$$

$$p(\mathbf{X} \mid \bar{\alpha}) = \frac{p(\mathbf{x}_1)}{\sqrt{(2\pi)^{(N-1)q}\,|\mathbf{K}_X|^{q}}} \exp\left(-\frac{1}{2}\operatorname{tr}\left(\mathbf{K}_X^{-1}\mathbf{X}_{2:N}\mathbf{X}_{2:N}^T\right)\right) \quad (2)$$

$$(\mathbf{K}_Y)_{ij} = k_Y(\mathbf{x}_i, \mathbf{x}_j) = \exp\left(-\frac{\beta_1}{2}\,\|\mathbf{x}_i - \mathbf{x}_j\|^2\right) + \beta_2^{-1}\,\delta_{\mathbf{x}_i,\mathbf{x}_j} \quad (3)$$

$$(\mathbf{K}_X)_{ij} = k_X(\mathbf{x}_i, \mathbf{x}_j) = \alpha_1 \exp\left(-\frac{\alpha_2}{2}\,\|\mathbf{x}_i - \mathbf{x}_j\|^2\right) + \alpha_3\,\mathbf{x}_i^T\mathbf{x}_j + \alpha_4^{-1}\,\delta_{\mathbf{x}_i,\mathbf{x}_j} \quad (4)$$

$$p(\mathbf{W}) = \prod_{m=1}^{D} \frac{2}{\kappa\sqrt{2\pi}} \exp\left(-\frac{w_m^2}{2\kappa^2}\right) \quad (5)$$

$$p(\bar{\alpha}) \propto \prod_i \alpha_i^{-1} \quad (6)$$

$$p(\bar{\beta}) \propto \prod_i \beta_i^{-1} \quad (7)$$

$$\{\mathbf{X}, \bar{\alpha}, \bar{\beta}, \mathbf{W}\} = \arg\min_{\mathbf{X}, \bar{\alpha}, \bar{\beta}, \mathbf{W}} \left(-\ln p(\mathbf{X}, \bar{\alpha}, \bar{\beta}, \mathbf{W} \mid \mathbf{Y})\right) = \arg\min_{\mathbf{X}, \bar{\alpha}, \bar{\beta}, \mathbf{W}} \left(-\ln\left(p(\mathbf{Y} \mid \mathbf{X}, \bar{\beta}, \mathbf{W})\, p(\mathbf{X} \mid \bar{\alpha})\, p(\bar{\alpha})\, p(\bar{\beta})\, p(\mathbf{W})\right)\right) \quad (8)$$

Equations (1)-(7) are used to compute (8); then $\mathbf{X}$ and the other parameters can be obtained. In (1) and (2) we have the sequences $\mathbf{X} \in \mathbb{R}^{N \times q}$ and $\mathbf{Y} \in \mathbb{R}^{N \times D}$, respectively; $\mathbf{Y}$ denotes the HD data samples of the 3D human motion, and $\mathbf{X}$ denotes the LD data of $\mathbf{Y}$ in the LD space after the dimension reduction. In (3) and (4), $\mathbf{K}_Y \in \mathbb{R}^{N \times N}$ and $\mathbf{K}_X \in \mathbb{R}^{(N-1) \times (N-1)}$ are kernel matrices; $\bar{\beta} = [\beta_1, \beta_2]$ and $\bar{\alpha} = [\alpha_1, \alpha_2, \alpha_3, \alpha_4]$ are their corresponding kernel parameters, and they satisfy the priors (6) and (7), respectively. Equations (3) and (4) show how the two kernel matrices $\mathbf{K}_Y$ and $\mathbf{K}_X$ are computed. $\mathbf{W} = \operatorname{diag}(w_1, \ldots, w_m, \ldots, w_D) \in \mathbb{R}^{D \times D}$ is a scale diagonal matrix with the preset parameter $\kappa$; $\mathbf{x}_1$ follows a $q$-dimensional Gaussian distribution, and $\mathbf{X}_{2:N}$ is the sequence $\mathbf{X}_{2:N} = [\mathbf{x}_2, \mathbf{x}_3, \ldots, \mathbf{x}_N]^T \in \mathbb{R}^{(N-1) \times q}$. The mapping from LD data to HD data can be built as follows:

$$\mathbf{y}^* = f(\mathbf{x}^*) = \mu_Y(\mathbf{x}^*) = \mathbf{Y}^T\mathbf{K}_Y^{-1}\left[k_Y(\mathbf{x}_1, \mathbf{x}^*), k_Y(\mathbf{x}_2, \mathbf{x}^*), \ldots, k_Y(\mathbf{x}_N, \mathbf{x}^*)\right]^T = \mathbf{Y}^T\mathbf{K}_Y^{-1}\mathbf{k}_Y(\mathbf{x}^*) \quad (9)$$

The GPDM has a dynamic process. It can predict the LD data in the latent space (also called the LD space) and generate the needed HD data of the human motion, so it has better performance than other dimension reduction models. Thus GPDM is selected to learn the samples of the two different human motions; then the LD space can be built to find the needed LD data of the transition motion, so that the corresponding poses can be generated through the mapping of (9).
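To make the machinery above concrete, the kernels of (3) and (4) and the mean mapping of (9) can be sketched in Python with NumPy. This is a minimal illustration, not the paper's implementation: the latent coordinates X and observations Y below are random stand-ins rather than values learned by minimizing (8), and the scale matrix W is omitted (taken as the identity):

```python
import numpy as np

def k_Y(xi, xj, beta):
    # Observation kernel of (3): RBF term plus beta_2^{-1} * delta
    delta = 1.0 if np.array_equal(xi, xj) else 0.0
    return np.exp(-0.5 * beta[0] * np.sum((xi - xj) ** 2)) + delta / beta[1]

def k_X(xi, xj, alpha):
    # Dynamics kernel of (4): RBF plus linear term plus alpha_4^{-1} * delta
    delta = 1.0 if np.array_equal(xi, xj) else 0.0
    return (alpha[0] * np.exp(-0.5 * alpha[1] * np.sum((xi - xj) ** 2))
            + alpha[2] * (xi @ xj) + delta / alpha[3])

def gram(X, kernel, params):
    # Kernel (Gram) matrix over the rows of X
    N = X.shape[0]
    return np.array([[kernel(X[i], X[j], params) for j in range(N)]
                     for i in range(N)])

def gp_mean(x_star, X, Y, beta):
    # Mean mapping of (9): y* = Y^T K_Y^{-1} k_Y(x*)
    K = gram(X, k_Y, beta)
    k_star = np.array([k_Y(x, x_star, beta) for x in X])
    return Y.T @ np.linalg.solve(K, k_star)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))   # N = 10 latent points, q = 2 (stand-ins)
Y = rng.normal(size=(10, 5))   # D = 5 observed dimensions (stand-ins)
beta = np.array([1.0, 100.0])  # beta_1 (RBF width), beta_2 (inverse noise)
y_star = gp_mean(X[0], X, Y, beta)  # predict at a training input
```

Because the delta term of (3) is included consistently in both the Gram matrix and the test vector, predicting at a training input returns the corresponding training output; at a new x* the prediction is a kernel-weighted blend of the rows of Y.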


Figure 1: The LD data of the two irrelevant human motions.

3. Latent Space Optimization Based on Projection Analysis

3.1. The LD Data of the Transition Motion in the LD Space. After the dimension reduction, the HD data samples of the two irrelevant human motions can be seen in the LD space. The LD data of the two different sequences denote the corresponding poses of the two irrelevant human motions. There is an obvious distance between the two sets of LD data, as Figure 1 shows. From the two sets of LD data, we know the needed transition motion can be generated through the LD space, but an appropriate curve must be constructed to connect the two sets of LD data in the LD space. Thus the optimization work is started.

3.2. Optimization in the LD Space. Assume that $\mathbf{X}^1 = [\mathbf{x}^1_1, \mathbf{x}^1_2, \ldots, \mathbf{x}^1_{N_1}]^T \in \mathbb{R}^{N_1 \times q}$ and $\mathbf{X}^2 = [\mathbf{x}^2_1, \mathbf{x}^2_2, \ldots, \mathbf{x}^2_{N_2}]^T \in \mathbb{R}^{N_2 \times q}$ are, respectively, the LD data of the two irrelevant human motions after dimension reduction; the first motion has $N_1$ frames and the second motion has $N_2$ frames. The positions of the corresponding LD data can be seen in Figure 2, from which the distance $L$ can be derived as follows:

$$L = \left\| (\mathbf{x}'_i - \mathbf{x}^1_{N_1}) - \frac{(\mathbf{x}'_i - \mathbf{x}^1_{N_1})^T(\mathbf{x}^2_1 - \mathbf{x}^1_{N_1})}{\|\mathbf{x}^2_1 - \mathbf{x}^1_{N_1}\|^2}\,(\mathbf{x}^2_1 - \mathbf{x}^1_{N_1}) \right\| \quad (10)$$

$$A = \frac{(\mathbf{x}'_i - \mathbf{x}^1_{N_1})^T(\mathbf{x}^2_1 - \mathbf{x}^1_{N_1})}{\|\mathbf{x}^2_1 - \mathbf{x}^1_{N_1}\|} \quad (11)$$
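In code, the distance L of (10) and the scalar projection A of (11) reduce to a few lines of linear algebra. The vectors below are illustrative stand-ins for LD points in a q = 2 latent space:

```python
import numpy as np

def projection_distance(x_prime, x1_N1, x2_1):
    """L of (10): distance from x'_i to its orthogonal projection on
    the line through x1_N1 with direction (x2_1 - x1_N1)."""
    v = x_prime - x1_N1            # vector from the line's base point
    d = x2_1 - x1_N1               # direction connecting the two motions
    proj = (v @ d) / (d @ d) * d   # orthogonal projection of v onto d
    return np.linalg.norm(v - proj)

def scalar_projection(x_prime, x1_N1, x2_1):
    """A of (11): length of the projection of (x'_i - x1_N1) along
    the unit vector of (x2_1 - x1_N1)."""
    v = x_prime - x1_N1
    d = x2_1 - x1_N1
    return (v @ d) / np.linalg.norm(d)

# Illustrative LD points in q = 2 dimensions
x1_N1 = np.array([0.0, 0.0])    # last LD point of the first motion
x2_1 = np.array([4.0, 0.0])     # first LD point of the second motion
x_prime = np.array([1.0, 2.0])  # a candidate transition point
L = projection_distance(x_prime, x1_N1, x2_1)  # -> 2.0
A = scalar_projection(x_prime, x1_N1, x2_1)    # -> 1.0
```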

From Figure 2 we can see that $L$ denotes the distance from the LD data sample $\mathbf{x}'_i \in \mathbb{R}^{q}$ to its projection $A$ in (10), and the vector $(\mathbf{B} - \mathbf{x}^1_{N_1})$ is the projection of the vector $(\mathbf{x}'_i - \mathbf{x}^1_{N_1})$ in the plane $\omega$. Assume the set of bases $\mathbf{B}^* = [\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_q] \in \mathbb{R}^{q \times q}$ in the plane $\omega$, where $\mathbf{b}_i \in \mathbb{R}^{q}$, $i = 1, 2, \ldots, q$, is a basis vector of the plane $\omega$; then we can get

$$\operatorname{Proj}_{\omega}(\mathbf{x}'_i - \mathbf{x}^1_{N_1}) = (\mathbf{B} - \mathbf{x}^1_{N_1}) \quad (12)$$

$$(\mathbf{x}'_i - \mathbf{x}^1_{N_1}) - \operatorname{Proj}_{\omega}(\mathbf{x}'_i - \mathbf{x}^1_{N_1}) = \operatorname{Proj}^c_{\omega}(\mathbf{x}'_i - \mathbf{x}^1_{N_1}) \quad (13)$$

$\operatorname{Proj}_H$ denotes the projective operation, which means obtaining the projection of a vector onto $H$; $\operatorname{Proj}^c_H$ denotes the complement of the projective operation $\operatorname{Proj}_H$, which means obtaining the component perpendicular to the projection given by $\operatorname{Proj}_H$. In (12) and (13), the complement of the projection vector is perpendicular to the plane $\omega$; thus the following equation can be obtained from (13):

$$\mathbf{B}^{*T}\left((\mathbf{x}'_i - \mathbf{x}^1_{N_1}) - \operatorname{Proj}_{\omega}(\mathbf{x}'_i - \mathbf{x}^1_{N_1})\right) = \mathbf{B}^{*T}\operatorname{Proj}^c_{\omega}(\mathbf{x}'_i - \mathbf{x}^1_{N_1}) = \mathbf{0} \quad (14)$$

In (14), $\mathbf{B}^* \in \mathbb{R}^{q \times q}$; $\operatorname{Proj}_{\omega}(\mathbf{x}'_i - \mathbf{x}^1_{N_1})$ is a vector in the plane $\omega$, and (15) can be deduced from this:

$$\operatorname{Proj}_{\omega}(\mathbf{x}'_i - \mathbf{x}^1_{N_1}) = \mathbf{B}^*\mathbf{y} = y_1\mathbf{b}_1 + y_2\mathbf{b}_2 + \cdots + y_q\mathbf{b}_q \quad (15)$$

In (15), $\operatorname{Proj}_{\omega}(\mathbf{x}'_i - \mathbf{x}^1_{N_1}) \in \omega$ is denoted by the bases $\mathbf{b}_i \in \mathbb{R}^{q}$, $i = 1, 2, \ldots, q$, and the coefficients $\mathbf{y} = [y_1, y_2, \ldots, y_q]^T \in \mathbb{R}^{q}$, with $\mathbf{B}^* = [\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_q] \in \mathbb{R}^{q \times q}$. Substituting (15) into (14), the following equation can be deduced:

$$\mathbf{B}^{*T}\left((\mathbf{x}'_i - \mathbf{x}^1_{N_1}) - \mathbf{B}^*\mathbf{y}\right) = \mathbf{0} \quad (16)$$

Simplifying (16) and computing the least squares estimator of $\mathbf{y}$ gives

$$\mathbf{y} = (\mathbf{B}^{*T}\mathbf{B}^*)^{-1}\mathbf{B}^{*T}(\mathbf{x}'_i - \mathbf{x}^1_{N_1}) \quad (17)$$

Substituting (12) and (17) into (15), the following equation is obtained:

$$\operatorname{Proj}_{\omega}(\mathbf{x}'_i - \mathbf{x}^1_{N_1}) = (\mathbf{B} - \mathbf{x}^1_{N_1}) = \mathbf{B}^*(\mathbf{B}^{*T}\mathbf{B}^*)^{-1}\mathbf{B}^{*T}(\mathbf{x}'_i - \mathbf{x}^1_{N_1}) \quad (18)$$

From (15) to (18) we have $\mathbf{y} \in \mathbb{R}^{q}$, and the bases $\mathbf{b}_i \in \mathbb{R}^{q}$, $i = 1, 2, \ldots, q$, are linearly independent.

Our task is to find the optimal LD data sequence of the transition motion between $\mathbf{x}^1_{N_1}$ and $\mathbf{x}^2_1$; thus we build constraint functions to describe the positions of the needed LD data sequence $\mathbf{X}' = [\mathbf{x}'_1, \mathbf{x}'_2, \ldots, \mathbf{x}'_{N'}]^T \in \mathbb{R}^{N' \times q}$ as follows:
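The least-squares projection of (17) and (18) is the familiar projection matrix B*(B*^T B*)^{-1}B*^T. A sketch with an illustrative, non-orthogonal basis; here the plane is embedded in R^3 for readability, while in the paper the ambient space is R^q:

```python
import numpy as np

def proj_onto_plane(v, B_star):
    """Proj_omega(v) = B* (B*^T B*)^{-1} B*^T v, as in (18);
    the columns of B_star are the basis vectors b_1 ... b_q."""
    y = np.linalg.solve(B_star.T @ B_star, B_star.T @ v)  # coefficients, (17)
    return B_star @ y                                     # B* y, as in (15)

# A plane in R^3 spanned by two non-orthogonal basis vectors
B_star = np.array([[1.0, 1.0],
                   [0.0, 1.0],
                   [0.0, 0.0]])
v = np.array([2.0, 3.0, 5.0])
p = proj_onto_plane(v, B_star)   # in-plane component of v
r = v - p                        # residual, perpendicular to the plane, as in (13)
```

The residual r satisfies B*^T r = 0, which is exactly condition (14).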


Figure 2: The analysis of the optimized models. (The sketch shows the LD sequences $\mathbf{X}^1$ and $\mathbf{X}^2$, the points $\mathbf{x}^1_{N_1}$ and $\mathbf{x}^2_1$, the projection point $A$, the point $\mathbf{B}$, the reference vector $\mathbf{C}$, and the distance $L$.)

$$\frac{(\mathbf{x}'_i - \mathbf{x}^1_{N_1})^T(\mathbf{x}^2_1 - \mathbf{x}^1_{N_1})}{\|\mathbf{x}^2_1 - \mathbf{x}^1_{N_1}\|} = \frac{i\,\|\mathbf{x}^2_1 - \mathbf{x}^1_{N_1}\|}{N' + 1}, \quad i = 1, 2, \ldots, N' \quad (19)$$

Equation (19) fixes the projection distance along the vector $(\mathbf{x}^2_1 - \mathbf{x}^1_{N_1})$ for each $\mathbf{x}'_i \in \mathbb{R}^{q}$, $i = 1, 2, \ldots, N'$; then we have

$$\left\| (\mathbf{x}'_i - \mathbf{x}^1_{N_1}) - \frac{(\mathbf{x}'_i - \mathbf{x}^1_{N_1})^T(\mathbf{x}^2_1 - \mathbf{x}^1_{N_1})}{\|\mathbf{x}^2_1 - \mathbf{x}^1_{N_1}\|^2}\,(\mathbf{x}^2_1 - \mathbf{x}^1_{N_1}) \right\| = \frac{l(i)\,\|\mathbf{x}^2_1 - \mathbf{x}^1_{N_1}\|}{N' + 1} = L, \quad i = 1, 2, \ldots, N' \quad (20)$$

Equation (20) fixes the distance between $\mathbf{x}'_i$ and its projection $\operatorname{Proj}_{(\mathbf{x}^2_1 - \mathbf{x}^1_{N_1})}\mathbf{x}'_i$; meanwhile, we can get

$$\arccos \frac{\mathbf{B}'^{T}\mathbf{C}}{\|\mathbf{B}'\|\,\|\mathbf{C}\|} < \theta(i), \quad i = 1, 2, \ldots, N' \quad (21)$$

In (21), $\mathbf{B}' = (\mathbf{B} - \mathbf{x}^1_{N_1})$ is the projection vector $\operatorname{Proj}_{\omega}(\mathbf{x}'_i - \mathbf{x}^1_{N_1})$ in the plane $\omega$; $\mathbf{C}$ is a preset vector in the plane $\omega$ that serves as the reference for computing the angle of (21). Through (19)-(21), the LD data of the transition motion are described in the LD space: $l(i)$ is a preset value adjusting the value of $L$, and $\theta(i)$ is a preset angle describing the deviation of the LD data position. The projections and corresponding vectors are used to build these constraint functions, so that an objective function with constraints is obtained as follows:

$$\arg\min_{\mathbf{x}'_i} \|\mathbf{y}'_i - \mathbf{y}'_{i-1}\| = \arg\min_{\mathbf{x}'_i} \|f(\mathbf{x}'_i) - f(\mathbf{x}'_{i-1})\|, \quad \mathbf{x}'_0 = \mathbf{x}^1_{N_1}, \; i = 1, 2, \ldots, N'$$

$$\text{s.t.} \quad \frac{(\mathbf{x}'_i - \mathbf{x}^1_{N_1})^T(\mathbf{x}^2_1 - \mathbf{x}^1_{N_1})}{\|\mathbf{x}^2_1 - \mathbf{x}^1_{N_1}\|} = \frac{i\,\|\mathbf{x}^2_1 - \mathbf{x}^1_{N_1}\|}{N' + 1},$$

$$\left\| (\mathbf{x}'_i - \mathbf{x}^1_{N_1}) - \frac{(\mathbf{x}'_i - \mathbf{x}^1_{N_1})^T(\mathbf{x}^2_1 - \mathbf{x}^1_{N_1})}{\|\mathbf{x}^2_1 - \mathbf{x}^1_{N_1}\|^2}\,(\mathbf{x}^2_1 - \mathbf{x}^1_{N_1}) \right\| = \frac{l(i)\,\|\mathbf{x}^2_1 - \mathbf{x}^1_{N_1}\|}{N' + 1},$$

$$\arccos \frac{\mathbf{B}'^{T}\mathbf{C}}{\|\mathbf{B}'\|\,\|\mathbf{C}\|} < \theta(i), \quad i = 1, 2, \ldots, N' \quad (22)$$

The objective function of (22) ensures that the frames of the transition motion vary regularly, so that a visually valid transition motion is obtained. The solution of (22) can be obtained by the method in [18].
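As an illustration of the structure of (22), the sketch below builds a toy transition path in a q = 2 latent space. It is only a sketch under stated assumptions: a random linear map F stands in for the GPDM mean mapping f of (9), l(i) is a constant, the angle constraint (21) is omitted, and, because constraints (19) and (20) leave exactly two feasible points per frame in two dimensions, the minimization is done by direct enumeration rather than by the solver of [18]:

```python
import numpy as np

rng = np.random.default_rng(1)
q, D = 2, 6
F = rng.normal(size=(D, q))              # toy linear stand-in for the mapping f of (9)
f = lambda x: F @ x

x1_N1 = np.array([0.0, 0.0])             # last LD point of the first motion
x2_1 = np.array([3.0, 0.0])              # first LD point of the second motion
d = x2_1 - x1_N1
d_hat = d / np.linalg.norm(d)            # unit vector along the connecting line
n_hat = np.array([-d_hat[1], d_hat[0]])  # unit normal to d in the q = 2 plane
N_prime, l = 5, 0.5                      # number of transition frames, constant l(i)

x_prev, path = x1_N1, []
for i in range(1, N_prime + 1):
    t_i = i * np.linalg.norm(d) / (N_prime + 1)   # projection length, constraint (19)
    L_i = l * np.linalg.norm(d) / (N_prime + 1)   # offset from the line, constraint (20)
    # In two dimensions the two equality constraints leave exactly two
    # feasible points; pick the one minimizing the objective of (22).
    candidates = [x1_N1 + t_i * d_hat + s * L_i * n_hat for s in (1.0, -1.0)]
    x_i = min(candidates, key=lambda x: np.linalg.norm(f(x) - f(x_prev)))
    path.append(x_i)
    x_prev = x_i
```

Each point of the resulting path advances an equal fraction of the way along the connecting vector while keeping the prescribed perpendicular offset, which is the geometric content of constraints (19) and (20).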

3.3. The Procedure of the LSOPA. When the two irrelevant human motion samples are obtained, the two samples are denoted by $\mathbf{Y}^1 = [\mathbf{y}^1_1, \mathbf{y}^1_2, \ldots, \mathbf{y}^1_{N_1}]^T \in \mathbb{R}^{N_1 \times D}$ and $\mathbf{Y}^2 = [\mathbf{y}^2_1, \mathbf{y}^2_2, \ldots, \mathbf{y}^2_{N_2}]^T \in \mathbb{R}^{N_2 \times D}$. Then we have the following LSOPA procedure:

(1) Let $\mathbf{Y} = [\mathbf{Y}^{1T}, \mathbf{Y}^{2T}]^T$; use principal component analysis (PCA) to initialize from $\mathbf{Y}$ and get the initial LD data $\mathbf{X} = [\mathbf{X}^{1T}, \mathbf{X}^{2T}]^T$. Preset the values of $\theta(i)$, $l(i)$, and $\mathbf{C}$.

(2) Preset $\bar{\alpha}$, $\bar{\beta}$, $\mathbf{W}$. Then compute $\{\mathbf{X}, \bar{\alpha}, \bar{\beta}, \mathbf{W}\} = \arg\min_{\mathbf{X}, \bar{\alpha}, \bar{\beta}, \mathbf{W}}(-\ln p(\mathbf{X}, \bar{\alpha}, \bar{\beta}, \mathbf{W} \mid \mathbf{Y}))$; update $\mathbf{X}$ and get $\bar{\alpha}$, $\bar{\beta}$, $\mathbf{W}$.

(3) Compute $\arg\min_{\mathbf{x}'_i} \|\mathbf{y}'_i - \mathbf{y}'_{i-1}\| = \arg\min_{\mathbf{x}'_i} \|f(\mathbf{x}'_i) - f(\mathbf{x}'_{i-1})\|$, $\mathbf{x}'_0 = \mathbf{x}^1_{N_1}$, $i = 1, 2, \ldots, N'$, with the corresponding constraint functions in (22), and get the LD data $\mathbf{X}' = [\mathbf{x}'_1, \mathbf{x}'_2, \ldots, \mathbf{x}'_{N'}]^T \in \mathbb{R}^{N' \times q}$ of the transition motion.

(4) Compute the transition motion $\mathbf{y}'_i = f(\mathbf{x}'_i)$, $i = 1, 2, \ldots, N'$; let $\mathbf{Y}' = [\mathbf{y}'_1, \mathbf{y}'_2, \ldots, \mathbf{y}'_{N'}]^T$ and output the 3D transition motion model. Finally, let $\mathbf{Y}_{new} = [\mathbf{Y}^{1T}, \mathbf{Y}'^{T}, \mathbf{Y}^{2T}]^T$ and $\mathbf{X}_{new} = [\mathbf{X}^{1T}, \mathbf{X}'^{T}, \mathbf{X}^{2T}]^T$; compute $\{\mathbf{X}_u, \bar{\alpha}_u, \bar{\beta}_u, \mathbf{W}_u\} = \arg\min_{\mathbf{X}, \bar{\alpha}, \bar{\beta}, \mathbf{W}}(-\ln p(\mathbf{X}_{new}, \bar{\alpha}, \bar{\beta}, \mathbf{W} \mid \mathbf{Y}_{new}))$ and update $\mathbf{X}_u$, $\bar{\alpha}_u$, $\bar{\beta}_u$, $\mathbf{W}_u$. End the whole procedure.

Algorithm 1: The brief procedure of LSOPA.

    Input: Y^1, Y^2.  Output: Y_new, X_u, alpha_u, beta_u, W_u.
    Preset theta(i), l(i), i = 1, 2, ..., N', C, alpha, beta, W.
    Y = [Y^{1T}, Y^{2T}]^T
    Use PCA to process Y and get X = [X^{1T}, X^{2T}]^T
    {X, alpha, beta, W} = argmin_{X, alpha, beta, W} (-ln p(X, alpha, beta, W | Y))
    For i = 1 to N'
        Solve argmin_{x'_i} ||y'_i - y'_{i-1}|| = argmin_{x'_i} ||f(x'_i) - f(x'_{i-1})||,
            x'_0 = x^1_{N_1}, s.t. the constraints of (19)-(21)
        y'_i = f(x'_i)
    End For
    X' = [x'_1, x'_2, ..., x'_{N'}]^T;  Y' = [y'_1, y'_2, ..., y'_{N'}]^T
    X_new = [X^{1T}, X'^T, X^{2T}]^T;  Y_new = [Y^{1T}, Y'^T, Y^{2T}]^T
    {X_u, alpha_u, beta_u, W_u} = argmin_{X, alpha, beta, W} (-ln p(X_new, alpha, beta, W | Y_new))
    End the procedure

The method LSOPA can be written as the pseudocode in Algorithm 1 for the transition motion estimation.
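Step (1) of the procedure, the PCA initialization of the LD data, can be sketched as follows. The motion matrices Y1 and Y2 here are random stand-ins for real HD samples, and q = 3 is an illustrative latent dimension:

```python
import numpy as np

def pca_init(Y, q):
    """Project the HD samples Y (N x D) onto the top-q principal
    directions, giving the initial LD data X of step (1)."""
    Y_centered = Y - Y.mean(axis=0)
    # Rows of Vt are the principal directions (right singular vectors)
    _, _, Vt = np.linalg.svd(Y_centered, full_matrices=False)
    return Y_centered @ Vt[:q].T           # N x q latent coordinates

rng = np.random.default_rng(2)
Y1 = rng.normal(size=(40, 30))             # stand-in first motion, N1 x D
Y2 = rng.normal(size=(50, 30))             # stand-in second motion, N2 x D
Y = np.vstack([Y1, Y2])                    # Y = [Y1^T, Y2^T]^T
X = pca_init(Y, q=3)                       # initial LD data
X1, X2 = X[:40], X[40:]                    # split back into the two motions
```

The resulting latent coordinates are centered and mutually uncorrelated across components, which gives the GPDM optimization of step (2) a sensible starting point.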

4. Experiment and Evaluation

We test the proposed LSOPA method visually and with an error test. We choose line initialization (LI) [8] and random initialization (RI) [17] as the compared methods. LI means that the LD data of the transition motion are obtained linearly between the LD data of the two different motions; RI means that they are obtained randomly between the LD data of the two different motions. The LD data samples (green ones) in the LD space, denoting the transition motion, can be seen in Figures 3(a)-3(c). The tested transition motion has a total of 15 frames, and the tested database is the Carnegie Mellon University (CMU) motion capture database, abbreviated as the CMU database.

The three methods can estimate the corresponding transition motions, but the transition motions are obviously different. Consider the visual effect in Figures 4(a)-4(c). In the estimation of the transition motion between walking and playing golf, the transition motion generated by LSOPA is smooth and natural, while the transition motions generated by the other two methods are invalid and messy; they are obviously abnormal human transition motions. In a word, the transition motions of LI and RI do not look like playing golf. This demonstrates that LSOPA has the best visual performance among the three methods.


Figure 3: The LD data samples of the LSOPA, LI, and RI. (a) LSOPA; (b) LI; (c) RI.

Then the error and mean error of estimating the transition motion are tested. How to compute the error can be seen in [19], but the scale of the markers in the 3D model must be considered. The error is computed as follows:

$$E_r = \frac{1}{M_{ar}} \sum_{ma=1}^{M_{ar}} \|\mathbf{Bx}_{ma} - \mathbf{Bx}'_{ma}\| \quad (23)$$

In (23), $\mathbf{Bx}_{ma}, \mathbf{Bx}'_{ma} \in \mathbb{R}^{3}$ are, respectively, the true position and the estimated position of a marker in the 3D model; both are joint markers in the 3D model. $M_{ar}$ is the number of markers in the 3D model.
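The per-frame error of (23) is just a mean Euclidean distance over the markers of a frame; a sketch with illustrative marker coordinates (the positions in mm are made up for the example):

```python
import numpy as np

def frame_error(true_markers, est_markers):
    """E_r of (23): mean Euclidean distance between the true and
    estimated 3D marker positions over the Mar markers of one frame."""
    diffs = true_markers - est_markers            # (Mar, 3) per-marker offsets
    return np.mean(np.linalg.norm(diffs, axis=1))

# One illustrative frame with Mar = 3 markers (coordinates in mm)
true_markers = np.array([[0.0, 0.0, 0.0],
                         [10.0, 0.0, 0.0],
                         [0.0, 10.0, 0.0]])
est_markers = np.array([[3.0, 4.0, 0.0],
                        [10.0, 0.0, 0.0],
                        [0.0, 10.0, 12.0]])
err = frame_error(true_markers, est_markers)      # (5 + 0 + 12) / 3 mm
```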

Four transition motions are tested: walking to playing golf, walking to dancing, walking to swimming, and walking to playing football. The contrast between the true data and the estimated data can be seen in Figures 5(a)-5(d). The error of each frame in the transition motion generated by LSOPA is the lowest among the three methods, and so is the mean error. Generally speaking, the errors of some frames will be close, but the


Figure 4: The visual evaluation of the methods, frames 63-80. (a) The transition motion of LSOPA; (b) the transition motion of LI; (c) the transition motion of RI.

error of LSOPA is kept low and stable on the whole. The test results also reveal that LSOPA has the best performance in the error test.

5. Conclusion

LSOPA is proposed to solve the problem of estimating the human transition motion. LSOPA is tested in visual and error experiments, and we find that its performance is the best among the three methods. However, LSOPA still cannot process some complex human transition motions well, and its initialization of the optimization cannot be random; thus LSOPA needs to be improved in future research. An improved method could carry out the unsupervised learning of two complex irrelevant human motions and estimate their smooth and valid transition motion. The 3D human motion model will also be replaced by other exquisite models [20-22] in future research.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work is supported by the University Young Creative Talent Project of Guangdong Province (no. 2016KQNCX111), the Science and Technology Plan Project of Guangzhou (no. 201804010280), the National Natural Science Foundation of China (no. 61772140), the Science and Technology Planning Project of Guangdong Province (no. 2017A010101021), the Appropriative Researching Fund for Professors and Doctors, Guangdong University of Education (no. 2015ARF17, no. 2015ARF25), the Key Disciplines of Network Engineering of Guangdong University of Education (no. ZD2017004), and the Computer Practice Teaching Demonstration Center of Guangdong University of Education (no. 2018sfzx01).

Figure 5: The error of several transition motions (error in mm versus frame, frames 1-15). (a) From walking to playing golf: LSOPA mean error = 309007 mm, LI mean error = 3366073 mm, RI mean error = 284808 mm. (b) From walking to dancing: LSOPA mean error = 44449 mm, LI mean error = 225848 mm, RI mean error = 674034 mm. (c) From walking to swimming: LSOPA mean error = 68818 mm, LI mean error = 352607 mm, RI mean error = 1912774 mm. (d) From walking to playing football: LSOPA mean error = 112612 mm, LI mean error = 616732 mm, RI mean error = 2228574 mm.

References

[1] Z. Liu, L. Zhou, H. Leung et al., "High-quality compatible triangulations and their application in interactive animation," Computers and Graphics, vol. 76, pp. 60-72, 2018.

[2] A. Sujar, J. J. Casafranca, A. Serrurier et al., "Real-time animation of human characters' anatomy," Computers and Graphics, vol. 74, pp. 268-277, 2018.

[3] B. Walther-Franks and R. Malaka, "An interaction approach to computer animation," Entertainment Computing, vol. 5, no. 4, pp. 271-283, 2014.

[4] D. Archambault and H. C. Purchase, "Can animation support the visualisation of dynamic graphs?" Information Sciences, vol. 330, pp. 495-509, 2016.

[5] L. Yebin, G. Juergen, S. Carsten et al., "Markerless motion capture of multiple characters using multiview image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 11, pp. 2720-2735, 2013.

[6] X. Zhou, M. Zhu, S. Leonardos, and K. Daniilidis, "Sparse representation for 3D shape estimation: a convex relaxation approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 8, pp. 1648-1661, 2017.

[7] X. Zhou, M. Zhu, G. Pavlakos, S. Leonardos, K. G. Derpanis, and K. Daniilidis, "MonoCap: monocular human motion capture using a CNN coupled with a geometric prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 4, pp. 901-914, 2019.

[8] J. M. Wang, D. J. Fleet, and A. Hertzmann, "Gaussian process dynamical models for human motion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 283-298, 2008.

[9] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323-2326, 2000.

[10] N. Lawrence, "Probabilistic non-linear principal component analysis with Gaussian process latent variable models," Journal of Machine Learning Research (JMLR), vol. 6, pp. 1783-1816, 2005.

[11] C. H. Ek, Shared Gaussian Process Latent Variable Models, Oxford Brookes University, Oxford, UK, 2009.

[12] J. Xinwei, G. Junbin, W. Tianjiang et al., "Supervised latent linear Gaussian process latent variable model for dimensionality reduction," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 6, pp. 1620-1632, 2012.

[13] J. B. Tenenbaum, V. de Silva, and J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, no. 5500, pp. 2319-2323, 2000.

[14] W.-Y. Li and J.-F. Sun, "Human motion estimation based on Gaussian incremental dimension reduction and manifold Boltzmann optimization," Acta Electronica Sinica, vol. 45, no. 12, pp. 3060-3069, 2017.

[15] W. Li, J. Sun, X. Zhang, and Y. Wu, "Spatial constraints-based maximum likelihood estimation for human motions," in Proceedings of the 2013 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC 2013), pp. 1-6, IEEE, KunMing, China, 2013.

[16] W Li J Sun and Z Song ldquoManifold latent probabilisticoptimization for human motion fitting based on orthogonalsubspace searchingrdquo Journal of Information and ComputationalScience vol 11 no 15 pp 5357ndash5365 2014

[17] W Li J Sun and X Zhang ldquoLearning for human transitionmotions estimation based on feature similarity optimizationrdquoJournal of Computational Information Systems vol 10 no 5 pp2127ndash2136 2014

[18] RH ByrdM EHribar and J Nocedal ldquoAn interior point algo-rithm for large-scale nonlinear programmingrdquo SIAM Journal onOptimization vol 9 no 4 pp 877ndash900 1999

[19] L Sigal A O Balan andM J Black ldquoHumanEva synchronizedvideo and motion capture dataset and baseline algorithm forevaluation of articulated human motionrdquo International Journalof Computer Vision vol 87 no 1-2 pp 4ndash27 2010

[20] M Sandau H Koblauch T B Moeslund H Aanaeligs T Alkjaeligrand E B Simonsen ldquoMarkerless motion capture can providereliable 3D gait kinematics in the sagittal and frontal planerdquoMedical Engineeringamp Physics vol 36 no 9 pp 1168ndash1175 2014

[21] E Yeguas-Bolivar R Munoz-Salinas R Medina-Carnicer andA Carmona-Poyato ldquoComparing evolutionary algorithms andparticle filters for markerless human motion capturerdquo AppliedSoft Computing vol 17 pp 153ndash166 2014

[22] J Gall B Rosenhahn T Brox et al ldquoOptimization and filteringfor human motion capture - a multi-layer frameworkrdquo Inter-national Journal of Computer Vision vol 87 no 1-2 pp 75ndash922010

International Journal of

AerospaceEngineeringHindawiwwwhindawicom Volume 2018

RoboticsJournal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Active and Passive Electronic Components

VLSI Design

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Shock and Vibration

Hindawiwwwhindawicom Volume 2018

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawiwwwhindawicom

Volume 2018

Hindawi Publishing Corporation httpwwwhindawicom Volume 2013Hindawiwwwhindawicom

The Scientific World Journal

Volume 2018

Control Scienceand Engineering

Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom

Journal ofEngineeringVolume 2018

SensorsJournal of

Hindawiwwwhindawicom Volume 2018

International Journal of

RotatingMachinery

Hindawiwwwhindawicom Volume 2018

Modelling ampSimulationin EngineeringHindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawiwwwhindawicom Volume 2018

Hindawiwwwhindawicom Volume 2018

Navigation and Observation

International Journal of

Hindawi

wwwhindawicom Volume 2018

Advances in

Multimedia

Submit your manuscripts atwwwhindawicom

2 International Journal of Digital Multimedia Broadcasting

it means the generated human motion in 3D will be abnormal [14]. GPDM is an unsupervised learning model: it can learn the high dimensional (HD) data samples and estimate new ones, but it needs to process the LD data in the LD space. In the LD space the LD data can be searched, and the corresponding valid HD data can be generated through the mapping from LD data to HD data. Some methods [15-17] can process the LD data, but the LD data they generate are unreliable and undetermined during the optimization, as a result of the randomization of these methods. The LSOPA can do this work, processing the LD data better and ensuring that a valid transition motion can be generated. The excellent performance of LSOPA will be tested by the experiments in the corresponding section.

The human motion is described by a 3D human motion model. The model has markers to show the limbs of the human motion; these markers constitute the HD data. When we make the 3D human motion animation and only have the samples of two irrelevant human motions, it is necessary to use a transition motion to connect the two irrelevant human motions, so that a complete and smooth 3D human motion is constructed. Meanwhile, the transition motion consists of many poses, and the poses are all HD data samples of the 3D human model; thus estimating a valid transition motion is a challenging task. However, the LSOPA can take advantage of the GPDM to generate the valid transition motion and avoid generating invalid poses of the transition motion. The proposed method will be discussed in the following sections.

2. Dimension Reduction

When we have the sequence of HD data samples $\mathbf{Y} = [\mathbf{y}_1, \ldots, \mathbf{y}_i, \ldots, \mathbf{y}_N]^T \in \mathbb{R}^{N \times D}$, $\mathbf{y}_i \in \mathbb{R}^D$, the corresponding LD data $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_i, \ldots, \mathbf{x}_N]^T \in \mathbb{R}^{N \times q}$, $\mathbf{x}_i \in \mathbb{R}^q$, can be obtained from the following equations of GPDM [8]:

$$p(\mathbf{Y} \mid \mathbf{X}, \boldsymbol{\beta}, \mathbf{W}) = \frac{|\mathbf{W}|^{N}}{\sqrt{(2\pi)^{ND}\,|\mathbf{K}_{Y}|^{D}}} \exp\!\left(-\frac{1}{2}\operatorname{tr}\!\left(\mathbf{K}_{Y}^{-1}\mathbf{Y}\mathbf{W}^{2}\mathbf{Y}^{T}\right)\right) \tag{1}$$

$$p(\mathbf{X} \mid \boldsymbol{\alpha}) = \frac{p(\mathbf{x}_{1})}{\sqrt{(2\pi)^{(N-1)q}\,|\mathbf{K}_{X}|^{q}}} \exp\!\left(-\frac{1}{2}\operatorname{tr}\!\left(\mathbf{K}_{X}^{-1}\mathbf{X}_{2:N}\mathbf{X}_{2:N}^{T}\right)\right) \tag{2}$$

$$(\mathbf{K}_{Y})_{ij} = k_{Y}(\mathbf{x}_{i}, \mathbf{x}_{j}) = \exp\!\left(-\frac{\beta_{1}}{2}\left\|\mathbf{x}_{i}-\mathbf{x}_{j}\right\|^{2}\right) + \beta_{2}^{-1}\delta_{\mathbf{x}_{i},\mathbf{x}_{j}} \tag{3}$$

$$(\mathbf{K}_{X})_{ij} = k_{X}(\mathbf{x}_{i}, \mathbf{x}_{j}) = \alpha_{1}\exp\!\left(-\frac{\alpha_{2}}{2}\left\|\mathbf{x}_{i}-\mathbf{x}_{j}\right\|^{2}\right) + \alpha_{3}\mathbf{x}_{i}^{T}\mathbf{x}_{j} + \alpha_{4}^{-1}\delta_{\mathbf{x}_{i},\mathbf{x}_{j}} \tag{4}$$

$$p(\mathbf{W}) = \prod_{m=1}^{D} \frac{2}{\kappa\sqrt{2\pi}} \exp\!\left(-\frac{w_{m}^{2}}{2\kappa^{2}}\right) \tag{5}$$

$$p(\boldsymbol{\alpha}) \propto \prod_{i} \alpha_{i}^{-1} \tag{6}$$

$$p(\boldsymbol{\beta}) \propto \prod_{i} \beta_{i}^{-1} \tag{7}$$

$$\{\mathbf{X}, \boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{W}\} = \mathop{\arg\min}_{\mathbf{X}, \boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{W}} \left(-\ln p(\mathbf{X}, \boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{W} \mid \mathbf{Y})\right) = \mathop{\arg\min}_{\mathbf{X}, \boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{W}} \left(-\ln\left(p(\mathbf{Y} \mid \mathbf{X}, \boldsymbol{\beta}, \mathbf{W})\, p(\mathbf{X} \mid \boldsymbol{\alpha})\, p(\boldsymbol{\alpha})\, p(\boldsymbol{\beta})\, p(\mathbf{W})\right)\right) \tag{8}$$
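The kernels of (3) and (4) are straightforward to implement. The following minimal numpy sketch assembles the kernel matrices $\mathbf{K}_Y$ and $\mathbf{K}_X$ for a small toy latent sequence; the parameter values and the random data are purely illustrative, not the values learned in the paper.

```python
import numpy as np

def k_Y(xi, xj, beta):
    # RBF kernel of (3): exp(-(beta1/2)||xi - xj||^2) + beta2^{-1} * delta
    rbf = np.exp(-0.5 * beta[0] * np.sum((xi - xj) ** 2))
    noise = (1.0 / beta[1]) * float(np.array_equal(xi, xj))
    return rbf + noise

def k_X(xi, xj, alpha):
    # RBF-plus-linear kernel of (4)
    rbf = alpha[0] * np.exp(-0.5 * alpha[1] * np.sum((xi - xj) ** 2))
    lin = alpha[2] * float(xi @ xj)
    noise = (1.0 / alpha[3]) * float(np.array_equal(xi, xj))
    return rbf + lin + noise

def kernel_matrix(X, kfun, params):
    # assemble the Gram matrix over the rows of X
    N = X.shape[0]
    return np.array([[kfun(X[i], X[j], params) for j in range(N)] for i in range(N)])

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))            # toy LD sequence, N = 5, q = 3
beta = (1.0, 100.0)                    # (beta1, beta2), illustrative values
alpha = (1.0, 1.0, 0.5, 100.0)         # (alpha1, ..., alpha4), illustrative values
K_Y = kernel_matrix(X, k_Y, beta)
K_X = kernel_matrix(X, k_X, alpha)
```

Both Gram matrices come out symmetric, and the $\delta$ terms add $\beta_2^{-1}$ and $\alpha_4^{-1}$ to the diagonals, which keeps them well conditioned for the inversions required by (1), (2), and (9).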

Equations (1)-(7) are used to compute (8); then $\mathbf{X}$ and the other parameters can be obtained. In (1) and (2) we have the sequences $\mathbf{X} \in \mathbb{R}^{N \times q}$ and $\mathbf{Y} \in \mathbb{R}^{N \times D}$, respectively; $\mathbf{Y}$ denotes the HD data samples of the 3D human motion, and $\mathbf{X}$ denotes the LD data of $\mathbf{Y}$ in the LD space after the dimension reduction. In (3) and (4), $\mathbf{K}_Y \in \mathbb{R}^{N \times N}$ and $\mathbf{K}_X \in \mathbb{R}^{(N-1) \times (N-1)}$ are kernel matrices; $\boldsymbol{\beta} = [\beta_1, \beta_2]$ and $\boldsymbol{\alpha} = [\alpha_1, \alpha_2, \alpha_3, \alpha_4]$ are their corresponding kernel parameters, which satisfy the priors (6) and (7), respectively. Equations (3) and (4) show how the two kernel matrices $\mathbf{K}_Y$ and $\mathbf{K}_X$ are computed. $\mathbf{W} = \operatorname{diag}(w_1, \ldots, w_m, \ldots, w_D) \in \mathbb{R}^{D \times D}$ is a diagonal scale matrix with the preset parameter $\kappa$, $\mathbf{x}_1$ follows a Gaussian distribution of $q$ dimensions, and $\mathbf{X}_{2:N} = [\mathbf{x}_2, \mathbf{x}_3, \ldots, \mathbf{x}_N]^T \in \mathbb{R}^{(N-1) \times q}$. The mapping from LD data to HD data can be built as follows:

$$\mathbf{y}^{*} = f(\mathbf{x}^{*}) = \mu_{Y}(\mathbf{x}^{*}) = \mathbf{Y}^{T}\mathbf{K}_{Y}^{-1}\left[k_{Y}(\mathbf{x}_{1},\mathbf{x}^{*}), k_{Y}(\mathbf{x}_{2},\mathbf{x}^{*}), \ldots, k_{Y}(\mathbf{x}_{N},\mathbf{x}^{*})\right]^{T} = \mathbf{Y}^{T}\mathbf{K}_{Y}^{-1}\mathbf{k}_{Y}(\mathbf{x}^{*}) \tag{9}$$

The GPDM has a dynamic process: it can predict the LD data in the latent space (also called the LD space) and then generate the needed HD data of human motion, so it performs better than other dimension reduction models for motion sequences. Thus GPDM is selected to learn the samples of the two different human motions; the LD space is then built to find the needed LD data of the transition motion, so that the corresponding poses can be generated through the mapping of (9).
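The mean mapping (9) reduces to one linear solve against $\mathbf{K}_Y$. A small sketch with random stand-in data (not real motion capture) shows that at a training input the reconstruction nearly reproduces the stored HD sample, up to the small $\beta_2^{-1}$ jitter on the diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
N, q, D = 20, 3, 30
X = rng.normal(size=(N, q))     # stand-in LD data
Y = rng.normal(size=(N, D))     # stand-in HD motion samples
beta1, beta2 = 2.0, 1e4         # beta2^{-1} acts as a small diagonal jitter

# K_Y from (3)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_Y = np.exp(-0.5 * beta1 * sq) + np.eye(N) / beta2

def f(xstar):
    # mean mapping of (9): y* = Y^T K_Y^{-1} k_Y(x*)
    k = np.exp(-0.5 * beta1 * ((X - xstar) ** 2).sum(-1))
    return Y.T @ np.linalg.solve(K_Y, k)

residual = np.max(np.abs(f(X[0]) - Y[0]))   # close to zero at a training input
```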


Figure 1: The LD data of the two irrelevant human motions.

3. Latent Space Optimization Based on Projection Analysis

3.1. The LD Data of the Transition Motion in the LD Space. After the dimension reduction, the HD data samples of the two irrelevant human motions can be seen in the LD space. The LD data of the two different sequences denote the corresponding poses of the two irrelevant human motions. It can be found that there is an obvious distance between the two sets of LD data, as Figure 1 shows. From the two sets of LD data we know that the needed transition motion can be generated through the LD space, but an appropriate curve must be constructed to connect the two sets of LD data in the LD space. Thus the optimization work starts.

3.2. Optimization in the LD Space. Assume that $\mathbf{X}_1 = [\mathbf{x}^1_1, \mathbf{x}^1_2, \ldots, \mathbf{x}^1_{N_1}]^T \in \mathbb{R}^{N_1 \times q}$ and $\mathbf{X}_2 = [\mathbf{x}^2_1, \mathbf{x}^2_2, \ldots, \mathbf{x}^2_{N_2}]^T \in \mathbb{R}^{N_2 \times q}$ are, respectively, the LD data of the two irrelevant human motions after dimension reduction; the first motion has $N_1$ frames and the second motion has $N_2$ frames. The positions of the corresponding LD data can be seen in Figure 2, from which the derivations of $L$ can be obtained as follows:

$$L = \left\| (\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1}) - \frac{(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1})^{T}(\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1})}{\left\|\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1}\right\|^{2}}\, (\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1}) \right\| \tag{10}$$

$$A = \frac{(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1})^{T}(\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1})}{\left\|\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1}\right\|} \tag{11}$$

From Figure 2 we can see that $L$ denotes the distance from the LD data sample $\mathbf{x}'_i \in \mathbb{R}^q$ to its projection $A$ in (10), and the vector $(\mathbf{B} - \mathbf{x}^1_{N_1})$ is the projection of the vector $(\mathbf{x}'_i - \mathbf{x}^1_{N_1})$ in the plane $\omega$. Assume a set of bases $\mathbf{B}^* = [\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_q] \in \mathbb{R}^{q \times q}$ in the plane $\omega$, where each $\mathbf{b}_i \in \mathbb{R}^q$, $i = 1, 2, \ldots, q$, is a vector from the plane $\omega$; then we can get

$$\operatorname{Proj}_{\omega}(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1}) = (\mathbf{B} - \mathbf{x}^{1}_{N_1}) \tag{12}$$

$$(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1}) - \operatorname{Proj}_{\omega}(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1}) = \operatorname{Proj}^{c}_{\omega}(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1}) \tag{13}$$

$\operatorname{Proj}_{H}$ denotes the projection operation, which obtains the projective vector of $H$; $\operatorname{Proj}^{c}_{H}$ denotes the complement of the projection operation $\operatorname{Proj}_{H}$, which obtains the component perpendicular to the projective vector. In (12) and (13) the complement of the projection vector is perpendicular to the plane $\omega$; thus the following equation can be obtained from (13):

$$\mathbf{B}^{*T}\left((\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1}) - \operatorname{Proj}_{\omega}(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1})\right) = \mathbf{B}^{*T}\operatorname{Proj}^{c}_{\omega}(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1}) = \mathbf{0} \tag{14}$$

In (14), $\mathbf{B}^* \in \mathbb{R}^{q \times q}$; since $\operatorname{Proj}_{\omega}(\mathbf{x}'_i - \mathbf{x}^1_{N_1})$ is a vector in the plane $\omega$, (15) can be deduced from this:

$$\operatorname{Proj}_{\omega}(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1}) = \mathbf{B}^{*}\mathbf{y} = y_{1}\mathbf{b}_{1} + y_{2}\mathbf{b}_{2} + \cdots + y_{q}\mathbf{b}_{q} \tag{15}$$

In (15), $\operatorname{Proj}_{\omega}(\mathbf{x}'_i - \mathbf{x}^1_{N_1}) \in \omega$ can be denoted by the bases $\mathbf{b}_i \in \mathbb{R}^q$, $i = 1, 2, \ldots, q$, and the coefficients $\mathbf{y} = [y_1, y_2, \ldots, y_q]^T \in \mathbb{R}^q$, with $\mathbf{B}^* = [\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_q] \in \mathbb{R}^{q \times q}$. Substituting (15) into (14), the following equation can be deduced:

$$\mathbf{B}^{*T}\left((\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1}) - \mathbf{B}^{*}\mathbf{y}\right) = \mathbf{0} \tag{16}$$

Simplifying (16) and computing the least squares estimator of $\mathbf{y}$ gives

$$\mathbf{y} = \left(\mathbf{B}^{*T}\mathbf{B}^{*}\right)^{-1}\mathbf{B}^{*T}(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1}) \tag{17}$$

Substituting (12) and (17) into (15), the following equation can be obtained:

$$\operatorname{Proj}_{\omega}(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1}) = (\mathbf{B} - \mathbf{x}^{1}_{N_1}) = \mathbf{B}^{*}\left(\mathbf{B}^{*T}\mathbf{B}^{*}\right)^{-1}\mathbf{B}^{*T}(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1}) \tag{18}$$

From (15) to (18) we have $\mathbf{y} \in \mathbb{R}^q$, and the bases $\mathbf{b}_i \in \mathbb{R}^q$, $i = 1, 2, \ldots, q$, are linearly independent. Our task is to find the optimal LD data sequence of the transition motion between $\mathbf{x}^1_{N_1}$ and $\mathbf{x}^2_1$; thus we build constraint functions to describe the position of the needed LD data sequence $\mathbf{X}' = [\mathbf{x}'_1, \mathbf{x}'_2, \ldots, \mathbf{x}'_{N'}]^T \in \mathbb{R}^{N' \times q}$ as follows.


Figure 2: The analysis of the optimized models, showing the latent sequences $\mathbf{X}_1$ and $\mathbf{X}_2$, the points $\mathbf{x}^1_{N_1}$, $\mathbf{x}^2_1$, $A$, $\mathbf{B}$, and $\mathbf{C}$, and the distance $L$.

$$\frac{(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1})^{T}(\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1})}{\left\|\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1}\right\|} = \frac{i\left\|\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1}\right\|}{N' + 1}, \quad i = 1, 2, \ldots, N' \tag{19}$$

Equation (19) fixes the projection distance along the vector $(\mathbf{x}^2_1 - \mathbf{x}^1_{N_1})$ for each $\mathbf{x}'_i \in \mathbb{R}^q$, $i = 1, 2, \ldots, N'$; then we have

$$\left\| (\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1}) - \frac{(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1})^{T}(\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1})}{\left\|\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1}\right\|^{2}}\, (\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1}) \right\| = \frac{l(i)\left\|\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1}\right\|}{N' + 1} = L, \quad i = 1, 2, \ldots, N' \tag{20}$$

Equation (20) fixes the distance between $\mathbf{x}'_i$ and its projection onto the vector $(\mathbf{x}^2_1 - \mathbf{x}^1_{N_1})$; meanwhile we can get

$$\arccos\frac{\mathbf{B}'^{T}\mathbf{C}}{\left\|\mathbf{B}'\right\|\left\|\mathbf{C}\right\|} < \theta(i), \quad i = 1, 2, \ldots, N' \tag{21}$$

In (21), $\mathbf{B}' = (\mathbf{B} - \mathbf{x}^1_{N_1})$ is the projection vector $\operatorname{Proj}_{\omega}(\mathbf{x}'_i - \mathbf{x}^1_{N_1})$, which lies in the plane $\omega$, and $\mathbf{C}$ is a preset vector in the plane $\omega$ serving as the reference for computing the angle in (21). In (19)-(21) the LD data of the transition motion are described in the LD space: $l(i)$ is a preset value adjusting $L$, and $\theta(i)$ is a preset angle describing the deviation of the LD data position. The projections and corresponding vectors are then used to build these constraint functions, so that an objective function with constraints can be obtained as follows:

$$\begin{aligned} \mathop{\arg\min}_{\mathbf{x}'_{i}} \left\|\mathbf{y}'_{i} - \mathbf{y}'_{i-1}\right\| &= \mathop{\arg\min}_{\mathbf{x}'_{i}} \left\|f(\mathbf{x}'_{i}) - f(\mathbf{x}'_{i-1})\right\|, \quad \mathbf{x}'_{0} = \mathbf{x}^{1}_{N_1}, \; i = 1, 2, \ldots, N' \\ \text{s.t.} \quad & \frac{(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1})^{T}(\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1})}{\left\|\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1}\right\|} = \frac{i\left\|\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1}\right\|}{N' + 1} \\ & \left\| (\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1}) - \frac{(\mathbf{x}'_{i} - \mathbf{x}^{1}_{N_1})^{T}(\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1})}{\left\|\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1}\right\|^{2}}\, (\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1}) \right\| = \frac{l(i)\left\|\mathbf{x}^{2}_{1} - \mathbf{x}^{1}_{N_1}\right\|}{N' + 1} \\ & \arccos\frac{\mathbf{B}'^{T}\mathbf{C}}{\left\|\mathbf{B}'\right\|\left\|\mathbf{C}\right\|} < \theta(i) \end{aligned} \tag{22}$$

The objective function of (22) ensures that the frames of the transition motion vary regularly, so that a valid transition motion is seen in the vision. The solution of (22) can be obtained by the method in [18].
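To see what the equality constraints of (22) pin down, note that (19) places the projection of $\mathbf{x}'_i$ onto the segment from $\mathbf{x}^1_{N_1}$ to $\mathbf{x}^2_1$ at the fraction $i/(N'+1)$ of its length, while (20) fixes the perpendicular offset to $l(i)\|\mathbf{x}^2_1 - \mathbf{x}^1_{N_1}\|/(N'+1)$. The sketch below uses random endpoints and a constant $l(i)$, purely for illustration, and constructs feasible points directly rather than running the interior-point solver of [18]; it then verifies both constraints numerically.

```python
import numpy as np

rng = np.random.default_rng(2)
q, Nprime = 3, 15
xa = rng.normal(size=q)             # x^1_{N1}, last latent point of the first motion
xb = rng.normal(size=q)             # x^2_1, first latent point of the second motion
d = xb - xa
u = d / np.linalg.norm(d)           # unit vector along the connecting segment

# pick any unit vector perpendicular to u as the offset direction
w = rng.normal(size=q)
w -= (w @ u) * u
w /= np.linalg.norm(w)

l = np.ones(Nprime)                 # preset l(i); constant here for illustration
step = np.linalg.norm(d) / (Nprime + 1)

# frame i sits at projection length i*step along u, offset l(i)*step along w
Xp = np.array([xa + i * step * u + l[i - 1] * step * w
               for i in range(1, Nprime + 1)])

# verify the equality constraints (19) and (20) for every frame
for i in range(1, Nprime + 1):
    x = Xp[i - 1]
    along = (x - xa) @ d / np.linalg.norm(d)                        # lhs of (19)
    perp = np.linalg.norm((x - xa) - ((x - xa) @ d) / (d @ d) * d)  # lhs of (20)
    assert np.isclose(along, i * step)
    assert np.isclose(perp, l[i - 1] * step)
```

The actual LSOPA additionally minimizes the frame-to-frame change of the mapped HD poses over the feasible set, which is what requires the nonlinear solver.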

3.3. The Procedure of the LSOPA. When the two irrelevant human motion samples are obtained, the two samples are denoted by $\mathbf{Y}_1 = [\mathbf{y}^1_1, \mathbf{y}^1_2, \ldots, \mathbf{y}^1_{N_1}]^T \in \mathbb{R}^{N_1 \times D}$ and $\mathbf{Y}_2 = [\mathbf{y}^2_1, \mathbf{y}^2_2, \ldots, \mathbf{y}^2_{N_2}]^T \in \mathbb{R}^{N_2 \times D}$. Then the LSOPA proceeds as follows.

(1) Let $\mathbf{Y} = [\mathbf{Y}_1^T, \mathbf{Y}_2^T]^T$; use principal component analysis (PCA) to initialize from $\mathbf{Y}$ and get the initial LD data $\mathbf{X} = [\mathbf{X}_1^T, \mathbf{X}_2^T]^T$. Preset the values of $\theta(i)$, $l(i)$, and $\mathbf{C}$.

(2) Preset $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$, $\mathbf{W}$. Then compute $\{\mathbf{X}, \boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{W}\} = \arg\min_{\mathbf{X}, \boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{W}}(-\ln p(\mathbf{X}, \boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{W} \mid \mathbf{Y}))$; update $\mathbf{X}$ and get $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$, $\mathbf{W}$.

(3) Compute $\arg\min_{\mathbf{x}'_i} \|\mathbf{y}'_i - \mathbf{y}'_{i-1}\| = \arg\min_{\mathbf{x}'_i} \|f(\mathbf{x}'_i) - f(\mathbf{x}'_{i-1})\|$, $\mathbf{x}'_0 = \mathbf{x}^1_{N_1}$, $i = 1, 2, \ldots, N'$, with the corresponding constraint functions in (22), and get the LD data $\mathbf{X}' = [\mathbf{x}'_1, \mathbf{x}'_2, \ldots, \mathbf{x}'_{N'}]^T \in \mathbb{R}^{N' \times q}$ of the transition motion.

(4) Compute the transition motion $\mathbf{y}'_i = f(\mathbf{x}'_i)$, $i = 1, 2, \ldots, N'$; let $\mathbf{Y}' = [\mathbf{y}'_1, \mathbf{y}'_2, \ldots, \mathbf{y}'_{N'}]^T$ and output the 3D transition motion model. Finally, let $\mathbf{Y}_{new} = [\mathbf{Y}_1^T, \mathbf{Y}'^T, \mathbf{Y}_2^T]^T$ and $\mathbf{X}_{new} = [\mathbf{X}_1^T, \mathbf{X}'^T, \mathbf{X}_2^T]^T$; compute $\{\mathbf{X}_u, \boldsymbol{\alpha}_u, \boldsymbol{\beta}_u, \mathbf{W}_u\} = \arg\min_{\mathbf{X}, \boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{W}}(-\ln p(\mathbf{X}_{new}, \boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{W} \mid \mathbf{Y}_{new}))$ and update $\mathbf{X}_u$, $\boldsymbol{\alpha}_u$, $\boldsymbol{\beta}_u$, $\mathbf{W}_u$. End the whole procedure.

    Input: $\mathbf{Y}_1$, $\mathbf{Y}_2$. Output: $\mathbf{Y}_{new}$, $\mathbf{X}_u$, $\boldsymbol{\alpha}_u$, $\boldsymbol{\beta}_u$, $\mathbf{W}_u$
    Preset $\theta(i)$, $l(i)$, $i = 1, 2, \ldots, N'$; $\mathbf{C}$; $\boldsymbol{\alpha}$; $\boldsymbol{\beta}$; $\mathbf{W}$
    $\mathbf{Y} = [\mathbf{Y}_1^T, \mathbf{Y}_2^T]^T$
    Use PCA to process $\mathbf{Y}$ and get $\mathbf{X} = [\mathbf{X}_1^T, \mathbf{X}_2^T]^T$
    $\{\mathbf{X}, \boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{W}\} = \arg\min(-\ln p(\mathbf{X}, \boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{W} \mid \mathbf{Y}))$
    For $i = 1$ to $N'$
        $\arg\min_{\mathbf{x}'_i} \|\mathbf{y}'_i - \mathbf{y}'_{i-1}\| = \arg\min_{\mathbf{x}'_i} \|f(\mathbf{x}'_i) - f(\mathbf{x}'_{i-1})\|$, $\mathbf{x}'_0 = \mathbf{x}^1_{N_1}$
            s.t. the projection, distance, and angle constraints (19)-(21) of (22)
        $\mathbf{y}'_i = f(\mathbf{x}'_i)$
    End For
    $\mathbf{X}' = [\mathbf{x}'_1, \mathbf{x}'_2, \ldots, \mathbf{x}'_{N'}]^T$; $\mathbf{Y}' = [\mathbf{y}'_1, \mathbf{y}'_2, \ldots, \mathbf{y}'_{N'}]^T$
    $\mathbf{X}_{new} = [\mathbf{X}_1^T, \mathbf{X}'^T, \mathbf{X}_2^T]^T$; $\mathbf{Y}_{new} = [\mathbf{Y}_1^T, \mathbf{Y}'^T, \mathbf{Y}_2^T]^T$
    $\{\mathbf{X}_u, \boldsymbol{\alpha}_u, \boldsymbol{\beta}_u, \mathbf{W}_u\} = \arg\min(-\ln p(\mathbf{X}_{new}, \boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{W} \mid \mathbf{Y}_{new}))$
    End the procedure

Algorithm 1: The brief procedure of LSOPA.

The method LSOPA can be written as the pseudocode in Algorithm 1 for the transition motion estimation.
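Step (1) of the procedure only needs a plain PCA projection to seed the latent coordinates before the GPDM optimization of (8) refines them. A minimal sketch, with random matrices standing in for the stacked HD samples $\mathbf{Y}_1$ and $\mathbf{Y}_2$ and illustrative sizes, might look like:

```python
import numpy as np

def pca_init(Y, q):
    # project the centered HD data onto its top-q principal directions
    Yc = Y - Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    return Yc @ Vt[:q].T               # N x q initial latent coordinates

rng = np.random.default_rng(3)
Y1 = rng.normal(size=(40, 60))         # hypothetical HD samples of motion 1 (N1=40, D=60)
Y2 = rng.normal(size=(35, 60))         # hypothetical HD samples of motion 2 (N2=35)
Y = np.vstack([Y1, Y2])                # Y = [Y1^T, Y2^T]^T
X = pca_init(Y, q=3)                   # initial X = [X1^T, X2^T]^T for step (1)
```

The first latent dimension captures the most variance, the second the next most, and so on, which gives the GPDM a sensible starting point.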

4. Experiment and Evaluation

We test the proposed method LSOPA on visual quality and estimation error. We choose line initialization (LI) [8] and random initialization (RI) [17] as the compared methods. LI means that the LD data of the transition motion are obtained linearly between the LD data of the two different motions; RI means that the LD data of the transition motion are obtained randomly between the LD data of the two different motions. The LD data samples (the green ones) in the LD space, denoting the transition motion, can be seen in Figures 3(a)-3(c). The tested transition motion has a total of 15 frames, and the tested database is the Carnegie Mellon University (CMU) motion capture database, shortened to CMU database.

The three methods can all estimate the corresponding transition motions, but the transition motions are obviously different. Consider the visual effect in Figures 4(a)-4(c). In the estimation of the transition motion between walking and playing golf, the transition motion generated by the LSOPA is smooth and natural in vision. The transition motions generated by the other two methods are invalid and messy; they are obviously abnormal human transition motions. In a word, the transition motions of LI and RI do not look like playing golf. This demonstrates that the performance of LSOPA is the best in visual effect among the three methods.


Figure 3: The LD data samples of the LSOPA, LI, and RI in the LD space: (a) LSOPA; (b) LI; (c) RI.

Then the error and the mean error of estimating the transition motion are tested. How to compute the error can be seen in [19], but the scale of the markers in the 3D model must be considered. The error is computed as follows:

$$E_{r} = \frac{1}{M_{ar}} \sum_{ma=1}^{M_{ar}} \left\| \mathbf{Bx}_{ma} - \mathbf{Bx}'_{ma} \right\| \tag{23}$$

In (23), $\mathbf{Bx}_{ma}, \mathbf{Bx}'_{ma} \in \mathbb{R}^3$ are, respectively, the true position and the estimated position of a joint marker in the 3D model, and $M_{ar}$ is the number of markers in the 3D model.
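Equation (23) is simply the mean Euclidean displacement over the markers of one frame. A tiny sketch with hypothetical marker positions:

```python
import numpy as np

def marker_error(true_markers, est_markers):
    # (23): mean Euclidean distance over the Mar joint markers of one frame
    return np.linalg.norm(true_markers - est_markers, axis=1).mean()

true = np.zeros((4, 3))                      # four hypothetical markers at the origin
est = np.array([[3.0, 4.0, 0.0]] * 4)        # each displaced by 5 mm
print(marker_error(true, est))               # -> 5.0
```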

Four transition motions are tested: walking to playing golf, walking to dancing, walking to swimming, and walking to playing football. The contrast between the true data and the estimated data can be seen in Figures 5(a)-5(d). The error of each frame in the transition motion generated by LSOPA is the lowest among the three methods, and so is the mean error. Generally speaking, the errors of some frames will be close, but the


Figure 4: The visual evaluation of the methods (frames 63-80 of the estimated sequences): (a) the transition motion of LSOPA; (b) the transition motion of LI; (c) the transition motion of RI.

error of LSOPA is kept low and stable on the whole. The tested result also reveals that the LSOPA has the best performance in the error test.

5. Conclusion

The LSOPA is proposed to solve the problem of estimating the human transition motion. The LSOPA is tested in the vision and error experiments, and its performance is the best among the three methods. However, the LSOPA still cannot process some complex human transition motions well, and its initialization of the optimization cannot be random; thus the LSOPA needs to be improved in future research. The improved method should then carry out the unsupervised learning of two complex irrelevant human motions and estimate their smooth and valid transition motion. The 3D human motion model will also be replaced by other exquisite models [20-22] in future research.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 5: The error of several transition motions: (a) the transition motion from walking to playing golf; (b) from walking to dancing; (c) from walking to swimming; (d) from walking to playing football. Each panel plots the per-frame error (mm) over the 15 transition frames and reports the mean errors of LSOPA, LI, and RI.

Acknowledgments

This work is supported by the University Young Creative Talent Project of Guangdong Province (no. 2016KQNCX111), Science and Technology Plan Project of Guangzhou (no. 201804010280), National Natural Science Foundation of China (no. 61772140), Science and Technology Planning Project of Guangdong Province (no. 2017A010101021), Appropriative Researching Fund for Professors and Doctors, Guangdong University of Education (no. 2015ARF17, no. 2015ARF25), Key Disciplines of Network Engineering of Guangdong University of Education (no. ZD2017004), and Computer Practice Teaching Demonstration Center of Guangdong University of Education (no. 2018sfzx01).

References

[1] Z. Liu, L. Zhou, H. Leung et al., "High-quality compatible triangulations and their application in interactive animation," Computers and Graphics, vol. 76, pp. 60-72, 2018.

[2] A. Sujar, J. J. Casafranca, A. Serrurier et al., "Real-time animation of human characters' anatomy," Computers and Graphics, vol. 74, pp. 268-277, 2018.

[3] B. Walther-Franks and R. Malaka, "An interaction approach to computer animation," Entertainment Computing, vol. 5, no. 4, pp. 271-283, 2014.

[4] D. Archambault and H. C. Purchase, "Can animation support the visualisation of dynamic graphs?" Information Sciences, vol. 330, pp. 495-509, 2016.

[5] L. Yebin, G. Juergen, S. Carsten et al., "Markerless motion capture of multiple characters using multiview image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 11, pp. 2720-2735, 2013.

[6] X. Zhou, M. Zhu, S. Leonardos, and K. Daniilidis, "Sparse representation for 3D shape estimation: a convex relaxation approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 8, pp. 1648-1661, 2017.

[7] X. Zhou, M. Zhu, G. Pavlakos, S. Leonardos, K. G. Derpanis, and K. Daniilidis, "MonoCap: monocular human motion capture using a CNN coupled with a geometric prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 4, pp. 901-914, 2019.

[8] J. M. Wang, D. J. Fleet, and A. Hertzmann, "Gaussian process dynamical models for human motion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 283-298, 2008.

[9] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323-2326, 2000.

[10] N. Lawrence, "Probabilistic non-linear principal component analysis with Gaussian process latent variable models," Journal of Machine Learning Research (JMLR), vol. 6, pp. 1783-1816, 2005.

[11] C. H. Ek, Shared Gaussian Process Latent Variable Models, Oxford Brookes University, Oxford, UK, 2009.

[12] J. Xinwei, G. Junbin, W. Tianjiang et al., "Supervised latent linear Gaussian process latent variable model for dimensionality reduction," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 6, pp. 1620-1632, 2012.

[13] J. B. Tenenbaum, V. de Silva, and J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, no. 5500, pp. 2319-2323, 2000.

[14] W.-Y. Li and J.-F. Sun, "Human motion estimation based on Gaussian incremental dimension reduction and manifold Boltzmann optimization," Acta Electronica Sinica, vol. 45, no. 12, pp. 3060-3069, 2017.

[15] W. Li, J. Sun, X. Zhang, and Y. Wu, "Spatial constraints-based maximum likelihood estimation for human motions," in Proceedings of the 2013 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC 2013), pp. 1-6, IEEE, Kunming, China, 2013.

[16] W. Li, J. Sun, and Z. Song, "Manifold latent probabilistic optimization for human motion fitting based on orthogonal subspace searching," Journal of Information and Computational Science, vol. 11, no. 15, pp. 5357-5365, 2014.

[17] W. Li, J. Sun, and X. Zhang, "Learning for human transition motions estimation based on feature similarity optimization," Journal of Computational Information Systems, vol. 10, no. 5, pp. 2127-2136, 2014.

[18] R. H. Byrd, M. E. Hribar, and J. Nocedal, "An interior point algorithm for large-scale nonlinear programming," SIAM Journal on Optimization, vol. 9, no. 4, pp. 877-900, 1999.

[19] L. Sigal, A. O. Balan, and M. J. Black, "HumanEva: synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion," International Journal of Computer Vision, vol. 87, no. 1-2, pp. 4-27, 2010.

[20] M. Sandau, H. Koblauch, T. B. Moeslund, H. Aanæs, T. Alkjær, and E. B. Simonsen, "Markerless motion capture can provide reliable 3D gait kinematics in the sagittal and frontal plane," Medical Engineering & Physics, vol. 36, no. 9, pp. 1168-1175, 2014.

[21] E. Yeguas-Bolivar, R. Munoz-Salinas, R. Medina-Carnicer, and A. Carmona-Poyato, "Comparing evolutionary algorithms and particle filters for markerless human motion capture," Applied Soft Computing, vol. 17, pp. 153-166, 2014.

[22] J. Gall, B. Rosenhahn, T. Brox et al., "Optimization and filtering for human motion capture: a multi-layer framework," International Journal of Computer Vision, vol. 87, no. 1-2, pp. 75-92, 2010.


International Journal of Digital Multimedia Broadcasting 3

Figure 1: The LD data of the two irrelevant human motions.

3. Latent Space Optimization Based on Projection Analysis

3.1. The LD Data of the Transition Motion in the LD Space. After the dimension reduction, the HD data samples of the two irrelevant human motions can be seen in the LD space. The LD data of the two different sequences denote the corresponding poses of the two irrelevant human motions. As Figure 1 shows, there is an obvious distance between the two sets of LD data. From the two sets of LD data, we can see that the needed transition motion can be generated through the LD space, but an appropriate curve must be constructed to connect the two sets of LD data. Thus the optimization work is started.
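The dimension reduction above can be sketched with a PCA projection (the initialization LSOPA itself uses before GPDM learning, per Algorithm 1); the motion data below is a random stand-in for real mocap sequences, so the numbers are illustrative only:

```python
import numpy as np

def pca_reduce(Y, q=3):
    """Project HD pose data Y (N x D) onto its first q principal
    components, giving initial LD coordinates."""
    Yc = Y - Y.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
    return Yc @ Vt[:q].T

# Two irrelevant motion sequences (random stand-ins for real mocap data)
rng = np.random.default_rng(0)
Y1 = rng.normal(size=(40, 60))        # motion 1: N1 frames, D = 60 DoF
Y2 = rng.normal(size=(50, 60)) + 5.0  # motion 2: an offset cluster
X = pca_reduce(np.vstack([Y1, Y2]), q=3)
X1, X2 = X[:40], X[40:]               # LD data of the two motions
print(X1.shape, X2.shape)             # (40, 3) (50, 3)
```

Reducing both sequences jointly places them in one shared LD space, which is what makes the gap between the two clusters (Figure 1) visible and bridgeable.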

3.2. Optimization in the LD Space. Assume that $X_1 = [x_{11}, x_{12}, \ldots, x_{1N_1}]^T \in R^{N_1 \times q}$ and $X_2 = [x_{21}, x_{22}, \ldots, x_{2N_2}]^T \in R^{N_2 \times q}$ are, respectively, the LD data of the two irrelevant human motions after dimension reduction; the first motion has $N_1$ frames and the second motion has $N_2$ frames. The positions of the corresponding LD data can be seen in Figure 2, from which the derivations of $L$ can be obtained as follows:

$$ L = \left\| (x'_i - x_{1N_1}) - \frac{(x'_i - x_{1N_1})^T (x_{21} - x_{1N_1})}{\left\| x_{21} - x_{1N_1} \right\|^2} (x_{21} - x_{1N_1}) \right\| \tag{10} $$

$$ A = \frac{(x'_i - x_{1N_1})^T (x_{21} - x_{1N_1})}{\left\| x_{21} - x_{1N_1} \right\|} \tag{11} $$

From Figure 2 we can see that $L$ denotes the distance from the LD data sample $x'_i \in R^q$ to its projection $A$ in (10), and that the vector $(B - x_{1N_1})$ is the projection of the vector $(x'_i - x_{1N_1})$ onto the plane $\omega$. Assume $B^* = [b_1, b_2, \ldots, b_q] \in R^{q \times q}$ is a set of bases of the plane $\omega$, where each $b_i \in R^q$, $i = 1, 2, \ldots, q$, is a vector from the plane $\omega$; then we can get

$$ \mathrm{Proj}_\omega (x'_i - x_{1N_1}) = (B - x_{1N_1}) \tag{12} $$

$$ (x'_i - x_{1N_1}) - \mathrm{Proj}_\omega (x'_i - x_{1N_1}) = \mathrm{Proj}^c_\omega (x'_i - x_{1N_1}) \tag{13} $$
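Equations (10) and (11) describe the projection of a candidate LD sample onto the line through $x_{1N_1}$ and $x_{21}$. A minimal numeric sketch, with made-up points standing in for the LD data:

```python
import numpy as np

def line_projection(x, a, b):
    """Project x onto the line through a in direction (b - a).
    Returns the scalar projection A of eq (11) and the residual
    distance L of eq (10) from x to its projection point."""
    d = b - a
    u = x - a
    A = float(u @ d) / np.linalg.norm(d)       # scalar projection, eq (11)
    proj_point = a + (float(u @ d) / float(d @ d)) * d
    L = float(np.linalg.norm(x - proj_point))  # residual distance, eq (10)
    return A, L

a = np.array([0.0, 0.0, 0.0])  # stand-in for x_{1N1}, last LD point of motion 1
b = np.array([4.0, 0.0, 0.0])  # stand-in for x_{21}, first LD point of motion 2
x = np.array([1.0, 2.0, 0.0])  # candidate LD sample x'_i
A, L = line_projection(x, a, b)
print(A, L)  # 1.0 2.0
```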

$\mathrm{Proj}_H$ denotes the projection operation, i.e., obtaining the projection of a vector onto $H$; $\mathrm{Proj}^c_H$ denotes the complement of $\mathrm{Proj}_H$, i.e., obtaining the component perpendicular to the projection given by $\mathrm{Proj}_H$. In (12) and (13), the complement of the projection vector is perpendicular to the plane $\omega$; thus the following equation can be obtained from (13):

$$ B^{*T} \left( (x'_i - x_{1N_1}) - \mathrm{Proj}_\omega (x'_i - x_{1N_1}) \right) = B^{*T} \mathrm{Proj}^c_\omega (x'_i - x_{1N_1}) = 0 \tag{14} $$

In (14), $B^* \in R^{q \times q}$; $\mathrm{Proj}_\omega (x'_i - x_{1N_1})$ is a vector in the plane $\omega$, and (15) can be deduced from this:

$$ \mathrm{Proj}_\omega (x'_i - x_{1N_1}) = B^* y = y_1 b_1 + y_2 b_2 + \cdots + y_q b_q \tag{15} $$

In (15), $\mathrm{Proj}_\omega (x'_i - x_{1N_1}) \in \omega$ is expressed in the bases $b_i \in R^q$, $i = 1, 2, \ldots, q$, with coefficients $y = [y_1, y_2, \ldots, y_q]^T \in R^q$, where $B^* = [b_1, b_2, \ldots, b_q] \in R^{q \times q}$. Substituting (15) into (14), the following equation can be deduced:

$$ B^{*T} \left( (x'_i - x_{1N_1}) - B^* y \right) = 0 \tag{16} $$

Simplifying (16) and computing the least squares estimator of $y$ gives

$$ y = \left( B^{*T} B^* \right)^{-1} B^{*T} (x'_i - x_{1N_1}) \tag{17} $$

Substituting (12) and (17) into (15), the following equation can be obtained:

$$ \mathrm{Proj}_\omega (x'_i - x_{1N_1}) = (B - x_{1N_1}) = B^* \left( B^{*T} B^* \right)^{-1} B^{*T} (x'_i - x_{1N_1}) \tag{18} $$

From (15) to (18), we have $y \in R^q$, and the bases $b_i \in R^q$, $i = 1, 2, \ldots, q$, are linearly independent. Our task is to find the optimal LD data sequence of the transition motion between $x_{1N_1}$ and $x_{21}$; thus we can build constraint functions to describe the position of the needed LD data sequence $X' = [x'_1, x'_2, \ldots, x'_{N'}]^T \in R^{N' \times q}$, as given in (19)–(21) below.
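Equations (15)–(18) amount to a least-squares projection onto the span of $B^*$. A minimal numeric sketch, using a plane in $R^3$ with a made-up non-orthogonal basis standing in for $\omega$:

```python
import numpy as np

def project_onto_subspace(v, B_star):
    """Least-squares projection of v onto span(B_star):
    y = (B*^T B*)^{-1} B*^T v  (eq (17)),  projection = B* y  (eq (18))."""
    y, *_ = np.linalg.lstsq(B_star, v, rcond=None)  # least-squares coefficients
    return B_star @ y

# Plane spanned by two (deliberately non-orthogonal) basis vectors
B_star = np.array([[1.0, 1.0],
                   [0.0, 1.0],
                   [0.0, 0.0]])
v = np.array([2.0, 3.0, 4.0])
p = project_onto_subspace(v, B_star)
print(p)      # the component in the plane z = 0, approximately [2, 3, 0]
# The complement (eq (13)) is perpendicular to the plane:
print(v - p)  # approximately [0, 0, 4]
```

Because the basis need not be orthogonal, the inverse Gram matrix $(B^{*T}B^*)^{-1}$ (here computed implicitly by `lstsq`) is what makes the coefficients correct.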

Figure 2: The analysis of the optimized models (the two LD sequences $X_1$ and $X_2$, the endpoints $x_{1N_1}$ and $x_{21}$, the projection point $A$, the points $B$ and $C$, and the distance $L$).

$$ \frac{(x'_i - x_{1N_1})^T (x_{21} - x_{1N_1})}{\left\| x_{21} - x_{1N_1} \right\|} = \frac{i \left\| x_{21} - x_{1N_1} \right\|}{N' + 1}, \quad i = 1, 2, \ldots, N' \tag{19} $$

Equation (19) fixes the projection distance along the vector $(x_{21} - x_{1N_1})$ for each $x'_i \in R^q$, $i = 1, 2, \ldots, N'$; then we have

$$ \left\| (x'_i - x_{1N_1}) - \frac{(x'_i - x_{1N_1})^T (x_{21} - x_{1N_1})}{\left\| x_{21} - x_{1N_1} \right\|^2} (x_{21} - x_{1N_1}) \right\| = \frac{l(i) \left\| x_{21} - x_{1N_1} \right\|}{N' + 1} = L, \quad i = 1, 2, \ldots, N' \tag{20} $$

Equation (20) fixes the distance between $x'_i$ and its projection $\mathrm{Proj}_{(x_{21} - x_{1N_1})} x'_i$; meanwhile, we can get

$$ \arccos \frac{B'^T C}{\left\| B' \right\| \left\| C \right\|} < \theta(i), \quad i = 1, 2, \ldots, N' \tag{21} $$

In (21), $B' = (B - x_{1N_1})$ is the projection $\mathrm{Proj}_\omega (x'_i - x_{1N_1})$, which lies in the plane $\omega$; $C$ is a preset vector in the plane $\omega$, serving as the reference for computing the angle in (21). Through (19)–(21), the LD data of the transition motion can be described in the LD space: $l(i)$ is a preset value adjusting $L$, and $\theta(i)$ is a preset angle describing the deviation of the LD data position. The projections and corresponding vectors can then be used to build these constraint functions, so that an objective function with constraints is obtained as follows:

$$ \arg\min_{x'_i} \left\| y'_i - y'_{i-1} \right\| = \arg\min_{x'_i} \left\| f(x'_i) - f(x'_{i-1}) \right\|, \quad x'_0 = x_{1N_1}, \; i = 1, 2, \ldots, N' $$

$$ \text{s.t.} \quad \frac{(x'_i - x_{1N_1})^T (x_{21} - x_{1N_1})}{\left\| x_{21} - x_{1N_1} \right\|} = \frac{i \left\| x_{21} - x_{1N_1} \right\|}{N' + 1}, $$

$$ \left\| (x'_i - x_{1N_1}) - \frac{(x'_i - x_{1N_1})^T (x_{21} - x_{1N_1})}{\left\| x_{21} - x_{1N_1} \right\|^2} (x_{21} - x_{1N_1}) \right\| = \frac{l(i) \left\| x_{21} - x_{1N_1} \right\|}{N' + 1}, $$

$$ \arccos \frac{B'^T C}{\left\| B' \right\| \left\| C \right\|} < \theta(i), \quad i = 1, 2, \ldots, N' \tag{22} $$

The objective function of (22) ensures that the frames of the transition motion vary regularly, so that a valid transition motion is obtained visually. The solution of (22) can be obtained by the interior point method in [18].
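As a hedged stand-in for the interior point solver of [18], one step of the constrained problem (22) can be sketched with SciPy's SLSQP solver; the endpoints are made-up values, a placeholder smooth mapping `f` replaces the learned GPDM mapping, and the angle constraint (21) is omitted for brevity:

```python
import numpy as np
from scipy.optimize import minimize

# Stand-ins for the paper's quantities (hypothetical values):
a = np.array([0.0, 0.0, 0.0])   # x_{1N1}: last LD point of motion 1
b = np.array([4.0, 0.0, 0.0])   # x_{21}: first LD point of motion 2
d = b - a
N_prime = 5
l = lambda i: 1.0               # preset l(i)
f = lambda x: np.tanh(x)        # placeholder for the GPDM mapping f

def solve_frame(i, x_prev):
    """One step of (22): smoothness objective under the projection
    constraints (19) and (20)."""
    target_A = i * np.linalg.norm(d) / (N_prime + 1)     # eq (19)
    target_L = l(i) * np.linalg.norm(d) / (N_prime + 1)  # eq (20)
    obj = lambda x: np.sum((f(x) - f(x_prev)) ** 2)      # squared for smoothness
    scalar_proj = lambda x: (x - a) @ d / np.linalg.norm(d) - target_A
    def residual_dist(x):
        p = a + ((x - a) @ d / (d @ d)) * d              # projection point
        return np.linalg.norm(x - p) - target_L
    cons = [{"type": "eq", "fun": scalar_proj},
            {"type": "eq", "fun": residual_dist}]
    return minimize(obj, x_prev + 0.1, method="SLSQP", constraints=cons).x

path = []
x_prev = a
for i in range(1, N_prime + 1):
    x_prev = solve_frame(i, x_prev)
    path.append(x_prev)
print(np.round(path[0], 3))
```

Each solved frame advances the required projection distance along $(x_{21} - x_{1N_1})$ while keeping a preset offset from the line, which is exactly the geometry that (19) and (20) impose.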

3.3. The Procedure of the LSOPA. When the two irrelevant human motion samples are obtained, they are denoted by $Y_1 = [y_{11}, y_{12}, \ldots, y_{1N_1}]^T \in R^{N_1 \times D}$ and $Y_2 = [y_{21}, y_{22}, \ldots, y_{2N_2}]^T \in R^{N_2 \times D}$, where $D$ is the dimension of the HD pose data. Then we have the following LSOPA procedure:

(1) Let $Y = [Y_1^T, Y_2^T]^T$; use principal component analysis (PCA) to initialize $Y$ and get the initial LD data $X = [X_1^T, X_2^T]^T$. Preset the values of $\theta(i)$, $l(i)$, and $C$.

(2) Preset $\alpha$, $\beta$, $W$. Then compute $\{X, \alpha, \beta, W\} = \arg\min_{X, \alpha, \beta, W} (-\ln p(X, \alpha, \beta, W \mid Y))$, which updates $X$ and yields $\alpha$, $\beta$, $W$.

(3) Compute $\arg\min_{x'_i} \|y'_i - y'_{i-1}\| = \arg\min_{x'_i} \|f(x'_i) - f(x'_{i-1})\|$, $x'_0 = x_{1N_1}$, $i = 1, 2, \ldots, N'$, under the corresponding constraint functions in (22), and get the LD data $X' = [x'_1, x'_2, \ldots, x'_{N'}]^T \in R^{N' \times q}$ of the transition motion.

(4) Compute the transition motion $y'_i = f(x'_i)$, $i = 1, 2, \ldots, N'$; let $Y' = [y'_1, y'_2, \ldots, y'_{N'}]^T$ and output the 3D transition motion model. Finally, let $Y_{new} = [Y_1^T, Y'^T, Y_2^T]^T$ and $X_{new} = [X_1^T, X'^T, X_2^T]^T$, compute $\{X_u, \alpha_u, \beta_u, W_u\} = \arg\min_{X, \alpha, \beta, W} (-\ln p(X_{new}, \alpha, \beta, W \mid Y_{new}))$, and update $X_u$, $\alpha_u$, $\beta_u$, $W_u$. End the whole procedure.

Algorithm 1 (the brief procedure of LSOPA):

Input: $Y_1$, $Y_2$. Output: $Y_{new}$, $X_u$, $\alpha_u$, $\beta_u$, $W_u$.
Preset $\theta(i)$, $l(i)$, $i = 1, 2, \ldots, N'$, $C$, $\alpha$, $\beta$, $W$.
$Y = [Y_1^T, Y_2^T]^T$.
Use PCA to process $Y$ and get $X = [X_1^T, X_2^T]^T$.
$\{X, \alpha, \beta, W\} = \arg\min_{X, \alpha, \beta, W} (-\ln p(X, \alpha, \beta, W \mid Y))$.
For $i = 1$ to $N'$:
    solve $\arg\min_{x'_i} \|f(x'_i) - f(x'_{i-1})\|$, $x'_0 = x_{1N_1}$, subject to the constraints of (22);
    $y'_i = f(x'_i)$.
End For.
$X' = [x'_1, x'_2, \ldots, x'_{N'}]^T$; $Y' = [y'_1, y'_2, \ldots, y'_{N'}]^T$.
$X_{new} = [X_1^T, X'^T, X_2^T]^T$; $Y_{new} = [Y_1^T, Y'^T, Y_2^T]^T$.
$\{X_u, \alpha_u, \beta_u, W_u\} = \arg\min_{X, \alpha, \beta, W} (-\ln p(X_{new}, \alpha, \beta, W \mid Y_{new}))$.
End the procedure.

For the transition motion estimation, the LSOPA method is written as pseudocode in Algorithm 1.
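The flow of the procedure can be wired as a skeleton in which each stage is injected as a callable; the stage names (`reduce_fn`, `fit_gpdm`, `solve_frame`, `decode`) and the trivial stubs below are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

def lsopa_skeleton(Y1, Y2, N_prime, reduce_fn, fit_gpdm, solve_frame, decode):
    """Wiring of the procedure: reduce_fn = PCA initialization;
    fit_gpdm = MAP update of (X, alpha, beta, W); solve_frame = the
    constrained step of (22); decode = the learned LD-to-HD mapping f."""
    Y = np.vstack([Y1, Y2])
    X, _ = fit_gpdm(reduce_fn(Y), Y)            # steps (1)-(2)
    X1, X2 = X[:len(Y1)], X[len(Y1):]
    x_prev, X_t = X1[-1], []
    for i in range(1, N_prime + 1):             # step (3)
        x_prev = solve_frame(i, x_prev, X1[-1], X2[0])
        X_t.append(x_prev)
    Y_t = np.array([decode(x) for x in X_t])    # step (4): y'_i = f(x'_i)
    X_new = np.vstack([X1, np.array(X_t), X2])
    Y_new = np.vstack([Y1, Y_t, Y2])
    return fit_gpdm(X_new, Y_new)               # final model update

# Trivial stubs showing the wiring only
reduce_fn = lambda Y: Y[:, :3]
fit_gpdm = lambda X, Y: (X, None)
solve_frame = lambda i, xp, end1, start2: end1 + (start2 - end1) * i / 16.0
decode = lambda x: np.concatenate([x, x])

Y1 = np.zeros((40, 6)); Y2 = np.ones((50, 6))
X_new, _ = lsopa_skeleton(Y1, Y2, 15, reduce_fn, fit_gpdm, solve_frame, decode)
print(X_new.shape)  # (105, 3)
```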

4. Experiment and Evaluation

We test the proposed method LSOPA in visual performance and in an error test. We choose line initialization (LI) [8] and random initialization (RI) [17] as the compared methods. LI means that the LD data of the transition motion is obtained linearly between the LD data of the two different motions; RI means that the LD data of the transition motion is obtained randomly between the LD data of the two different motions. The LD data samples (green ones) denoting the transition motion can be seen in Figures 3(a)–3(c). The tested transition motion is 15 frames in total, and the tested database is the Carnegie Mellon University (CMU) motion capture database, abbreviated as the CMU database.
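As a sketch, the two baseline initializations can be written down directly; the endpoint values below are made-up stand-ins for the LD endpoints of the two motions:

```python
import numpy as np

rng = np.random.default_rng(1)
x_end = np.array([0.0, 0.0, 0.0])    # last LD point of motion 1
x_start = np.array([3.0, 3.0, 3.0])  # first LD point of motion 2
N_prime = 15                         # 15 transition frames, as in the test

# LI: transition LD data placed linearly between the two motions
t = np.arange(1, N_prime + 1) / (N_prime + 1)
X_li = x_end + t[:, None] * (x_start - x_end)

# RI: transition LD data sampled randomly between the two motions
lo, hi = np.minimum(x_end, x_start), np.maximum(x_end, x_start)
X_ri = rng.uniform(lo, hi, size=(N_prime, 3))

print(X_li.shape, X_ri.shape)  # (15, 3) (15, 3)
```

Neither baseline consults the learned mapping $f$, which is why their decoded poses can be messy; LSOPA instead optimizes each LD sample under (22).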

The three methods can estimate the corresponding transition motions, but the transition motions are obviously different. The visual effect can be seen in Figures 4(a)–4(c). In the estimation of the transition motion between walking and playing golf, the transition motion generated by LSOPA is smooth and natural in vision, while the transition motions generated by the other two methods are invalid and messy; they are obviously abnormal human transition motions. In a word, the transition motions of LI and RI do not look like playing golf. It is demonstrated that, among the three methods, LSOPA performs best in visual effect.

Figure 3: The LD data samples of the LSOPA, LI, and RI: (a) LSOPA, (b) LI, (c) RI.

Then the error and the mean error of estimating the transition motion are tested. How to compute the error can be seen in [19], but the scale of the markers in the 3D model must be considered. The error can be computed as follows:

$$ E_r = \frac{1}{Mar} \sum_{ma=1}^{Mar} \left\| Bx_{ma} - Bx'_{ma} \right\| \tag{23} $$

In (23), $Bx_{ma}, Bx'_{ma} \in R^3$ are, respectively, the true position and the estimated position of a joint marker in the 3D model, and $Mar$ is the number of markers in the 3D model.
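Equation (23) is a mean Euclidean distance over the joint markers; a minimal sketch with two hypothetical markers:

```python
import numpy as np

def marker_error(true_markers, est_markers):
    """Mean Euclidean distance over the Mar joint markers, eq (23).
    Both inputs are (Mar, 3) arrays of marker positions."""
    diffs = np.linalg.norm(true_markers - est_markers, axis=1)
    return diffs.mean()

true_m = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
est_m  = np.array([[0.0, 3.0, 4.0], [1.0, 0.0, 0.0]])
print(marker_error(true_m, est_m))  # 2.5
```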

Four transition motions are tested: walking and playing golf, walking and dancing, walking and swimming, and walking and playing football. The contrast of the true data and the estimated data can be seen in Figures 5(a)–5(d). The error of each frame in the transition motion generated by LSOPA is the lowest among the three methods, and so is the mean error. Generally speaking, the errors of some frames will be close, but the

Figure 4: The visual evaluation of the methods (frames 63–80): (a) the transition motion of LSOPA, (b) the transition motion of LI, (c) the transition motion of RI.

error of LSOPA is kept low and stable on the whole. The test results also reveal that LSOPA has the best performance in the error test.

5. Conclusion

LSOPA is proposed to solve the problem of estimating the human transition motion. LSOPA is tested in visual and error experiments, and we find that its performance is the best among the three methods. However, LSOPA still cannot process some complex human transition motions well, and its optimization cannot be initialized randomly; thus LSOPA needs to be improved in future research. The improved method should carry out the unsupervised learning of two complex irrelevant human motions and estimate their smooth and valid transition motion. The 3D human motion model will also be replaced by other exquisite models [20–22] in future research.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest

Acknowledgments

This work is supported by the University Young Creative Talent Project of Guangdong Province (no. 2016KQNCX111), the Science and Technology Plan Project of Guangzhou (no. 201804010280), the National Natural Science Foundation of China (no. 61772140), the Science and Technology Planning Project of Guangdong Province (no. 2017A010101021), the Appropriative Researching Fund for Professors and Doctors, Guangdong University of Education (no. 2015ARF17, no. 2015ARF25), the Key Disciplines of Network Engineering of Guangdong University of Education (no. ZD2017004), and the Computer Practice Teaching Demonstration Center of Guangdong University of Education (no. 2018sfzx01).

Figure 5: The error (in mm, per frame) of several transition motions: (a) from walking to playing golf (mean error: LSOPA 309007 mm, LI 3366073 mm, RI 284808 mm); (b) from walking to dancing (mean error: LSOPA 44449 mm, LI 225848 mm, RI 674034 mm); (c) from walking to swimming (mean error: LSOPA 68818 mm, LI 352607 mm, RI 1912774 mm); (d) from walking to playing football (mean error: LSOPA 112612 mm, LI 616732 mm, RI 2228574 mm).

References

[1] Z. Liu, L. Zhou, H. Leung et al., "High-quality compatible triangulations and their application in interactive animation," Computers and Graphics, vol. 76, pp. 60–72, 2018.

[2] A. Sujar, J. J. Casafranca, A. Serrurier et al., "Real-time animation of human characters' anatomy," Computers and Graphics, vol. 74, pp. 268–277, 2018.

[3] B. Walther-Franks and R. Malaka, "An interaction approach to computer animation," Entertainment Computing, vol. 5, no. 4, pp. 271–283, 2014.

[4] D. Archambault and H. C. Purchase, "Can animation support the visualisation of dynamic graphs?" Information Sciences, vol. 330, pp. 495–509, 2016.

[5] L. Yebin, G. Juergen, S. Carsten et al., "Markerless motion capture of multiple characters using multiview image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 11, pp. 2720–2735, 2013.

[6] X. Zhou, M. Zhu, S. Leonardos, and K. Daniilidis, "Sparse representation for 3D shape estimation: a convex relaxation approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 8, pp. 1648–1661, 2017.

[7] X. Zhou, M. Zhu, G. Pavlakos, S. Leonardos, K. G. Derpanis, and K. Daniilidis, "MonoCap: monocular human motion capture using a CNN coupled with a geometric prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 4, pp. 901–914, 2019.

[8] J. M. Wang, D. J. Fleet, and A. Hertzmann, "Gaussian process dynamical models for human motion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 283–298, 2008.

[9] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323–2326, 2000.

[10] N. Lawrence, "Probabilistic non-linear principal component analysis with Gaussian process latent variable models," Journal of Machine Learning Research (JMLR), vol. 6, pp. 1783–1816, 2005.

[11] C. H. Ek, Shared Gaussian Process Latent Variable Models, Oxford Brookes University, Oxford, UK, 2009.

[12] J. Xinwei, G. Junbin, W. Tianjiang et al., "Supervised latent linear Gaussian process latent variable model for dimensionality reduction," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 6, pp. 1620–1632, 2012.

[13] J. B. Tenenbaum, V. de Silva, and J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, no. 5500, pp. 2319–2323, 2000.

[14] W.-Y. Li and J.-F. Sun, "Human motion estimation based on Gaussian incremental dimension reduction and manifold Boltzmann optimization," Acta Electronica Sinica, vol. 45, no. 12, pp. 3060–3069, 2017.

[15] W. Li, J. Sun, X. Zhang, and Y. Wu, "Spatial constraints-based maximum likelihood estimation for human motions," in Proceedings of the 2013 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC 2013), pp. 1–6, IEEE, Kunming, China, 2013.

[16] W. Li, J. Sun, and Z. Song, "Manifold latent probabilistic optimization for human motion fitting based on orthogonal subspace searching," Journal of Information and Computational Science, vol. 11, no. 15, pp. 5357–5365, 2014.

[17] W. Li, J. Sun, and X. Zhang, "Learning for human transition motions estimation based on feature similarity optimization," Journal of Computational Information Systems, vol. 10, no. 5, pp. 2127–2136, 2014.

[18] R. H. Byrd, M. E. Hribar, and J. Nocedal, "An interior point algorithm for large-scale nonlinear programming," SIAM Journal on Optimization, vol. 9, no. 4, pp. 877–900, 1999.

[19] L. Sigal, A. O. Balan, and M. J. Black, "HumanEva: synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion," International Journal of Computer Vision, vol. 87, no. 1-2, pp. 4–27, 2010.

[20] M. Sandau, H. Koblauch, T. B. Moeslund, H. Aanæs, T. Alkjær, and E. B. Simonsen, "Markerless motion capture can provide reliable 3D gait kinematics in the sagittal and frontal plane," Medical Engineering & Physics, vol. 36, no. 9, pp. 1168–1175, 2014.

[21] E. Yeguas-Bolivar, R. Munoz-Salinas, R. Medina-Carnicer, and A. Carmona-Poyato, "Comparing evolutionary algorithms and particle filters for markerless human motion capture," Applied Soft Computing, vol. 17, pp. 153–166, 2014.

[22] J. Gall, B. Rosenhahn, T. Brox et al., "Optimization and filtering for human motion capture - a multi-layer framework," International Journal of Computer Vision, vol. 87, no. 1-2, pp. 75–92, 2010.


4 International Journal of Digital Multimedia Broadcasting

C

1

1Nx

X2

X1

L

B

21xA

(i) (

i minus 1

N1)

(i minus 1

N1)

i

Figure 2 The analysis of the optimized models

(x1015840119894 minus x11198731)T (x21 minus x11198731)10038171003817100381710038171003817(x21 minus x11198731)10038171003817100381710038171003817= 119894 10038171003817100381710038171003817(x21 minus x11198731)10038171003817100381710038171003817(1198731015840 + 1)

119894 = 1 2 1198731015840(19)

Equation (19) denotes confirming the projection distance inthe vector (x21 minus x11198731) from each x1015840119894 isin R119902 119894 = 1 2 1198731015840 thenwe have

100381710038171003817100381710038171003817100381710038171003817100381710038171003817(x1015840119894 minus x11198731) minus (x1015840119894 minus x11198731)T (x21 minus x11198731) (x21 minus x11198731)10038171003817100381710038171003817(x21 minus x11198731)100381710038171003817100381710038172

100381710038171003817100381710038171003817100381710038171003817100381710038171003817

= 119897 (119894) 10038171003817100381710038171003817(x21 minus x11198731)10038171003817100381710038171003817(1198731015840 + 1) = 119871 119894 = 1 2 1198731015840(20)

Equation (20) denotes confirming the distance between x1015840119894and Proj(x21minusx11198731 )x

1015840119894 meanwhile we can get

arccos B1015840TC(1003817100381710038171003817B10158401003817100381710038171003817 C) lt 120579 (119894) 119894 = 1 2 1198731015840 (21)

In (21) B1015840 = (B minus x1198731) is a vector of Proj120596(x1015840119894 minus x11198731)which is perpendicular to the plane 120596 C is a preset vectorin the plane 120596 it can be seen as the reference of computingthe angle of (21) In (19)-(21) the LD data of the transitionmotion can be described in the LD space and 119897(119894) is the presetvalues of adjusting the value of 119871 120579(119894) is preset angle whichis describing the deviation of the LD data position Then theprojection and corresponding vector can be used to buildthese constraint functions so that an object function withconstraints can be obtained as follows

argminx1015840119894

10038171003817100381710038171003817y1015840119894 minus y1015840119894minus110038171003817100381710038171003817 = argmin

x1015840119894

10038171003817100381710038171003817f (x1015840119894) minus f (x1015840119894minus1)10038171003817100381710038171003817 x10158400 = x11198731 119894 = 1 2 1198731015840

119904119905

(x1015840119894 minus x11198731)T (x21 minus x11198731)10038171003817100381710038171003817(x21 minus x11198731)10038171003817100381710038171003817= 119894 10038171003817100381710038171003817(x21 minus x11198731)10038171003817100381710038171003817(1198731015840 + 1)100381710038171003817100381710038171003817100381710038171003817100381710038171003817

(x1015840119894 minus x11198731) minus (x1015840119894 minus x11198731)T (x21 minus x11198731) (x21 minus x11198731)10038171003817100381710038171003817(x21 minus x11198731)100381710038171003817100381710038172100381710038171003817100381710038171003817100381710038171003817100381710038171003817= 119897 (119894) 10038171003817100381710038171003817(x21 minus x11198731)10038171003817100381710038171003817(1198731015840 + 1)

arccos B1015840TC(1003817100381710038171003817B10158401003817100381710038171003817 C) lt 120579 (119894)

119894 = 1 2 1198731015840(22)

The optimized object function of (22) ensures that the framesof the transitionmotion can be varying regularly and the validtransition motion will be seen in the vision The solution of(22) can be obtained by the method in [18]

33 The Procedure of the LSOPA When the two irrelevanthuman motion samples are obtained the two samples are

denoted by Y1 = [y11 y12 y11198731]T isin R1198731times119902 Y2 =[y21 y22 y21198732]T isin R1198732times119902 Then we have the following

LSOPA

(1) Let Y = [YT1 YT2 ]T use the method called principal

component analysis (PCA) to initiate Y and get the

International Journal of Digital Multimedia Broadcasting 5

Input Y1 Y2 Output Ynew Xu 120572u 120573uWu

Preset 120579(119894) 119897(119894) 119894 = 1 2 1198731015840 C 120572 120573WY = [YT

1 YT2 ]T

Use PCA to process Y and get X = [XT1 XT2 ]T

X120572120573W = arg minX120572120573W

(minus ln119901(X120572120573WY))For i=1 to1198731015840argmin

x1015840119894

1003817100381710038171003817y1015840119894 minus y1015840119894minus11003817100381710038171003817 = argmin

x1015840119894

1003817100381710038171003817f(x1015840119894 ) minus f(x1015840119894minus1)1003817100381710038171003817 x10158400 = x11198731

119904119905

(x1015840119894 minus x11198731)T (x21 minus x11198731 )10038171003817100381710038171003817(x21 minus x11198731)10038171003817100381710038171003817= 119894 10038171003817100381710038171003817(x21 minus x11198731)10038171003817100381710038171003817(1198731015840 + 1)100381710038171003817100381710038171003817100381710038171003817100381710038171003817

(x1015840119894 minus x11198731 ) minus (x1015840119894 minus x11198731)T (x21 minus x11198731) (x21 minus x11198731)10038171003817100381710038171003817(x21 minus x11198731 )100381710038171003817100381710038172100381710038171003817100381710038171003817100381710038171003817100381710038171003817= 119897(119894) 10038171003817100381710038171003817(x21 minus x11198731)10038171003817100381710038171003817(1198731015840 + 1)

arccos B1015840TC(B1015840 C) lt 120579 (119894)

y1015840119894 = f(x1015840119894 )End ForX1015840 = [x10158401 x10158402 x10158401198731015840 ]TY1015840 = [y10158401 y10158402 y10158401198731015840 ]TXnew = [XT

1 X1015840TXT2 ]T

Ynew = [YT1 Y1015840TYT

2 ]TXu120572u120573uWu = arg min

X120572120573W(minus ln119901(Xnew120572120573WYnew))

End the procedure

Algorithm 1 The brief procedure of LSOPA

initiated LD data X = [XT1 XT2 ]T Preset the values of120579(119894) 119897(119894) and C

(2) Preset 120572120573W Then compute X120572120573W =argminX120572120573W(minus ln119901(X120572120573WY)) update Xand get 120572 120573W

(3) Compute argminx1015840119894y119894 minus y119894minus1 = argminx1015840

119894f(x1015840119894 ) minus

f(x1015840119894minus1) x10158400 = x11198731 119894 = 1 2 1198731015840 and the corre-sponding constraint functions in (22) and get the LDdata X1015840 = [x10158401 x10158402 x10158401198731015840]T isin R119873

1015840times119902 of the transitionmotion

(4) Compute the transition motion y1015840119894 = f(x1015840119894 ) 119894 =1 2 1198731015840 let Y1015840 = [y10158401 y10158402 y10158401198731015840]T andoutput the 3D transition motion modelFinally let Ynew = [YT

1 Y1015840TYT2 ]T Xnew =

[XT1 X1015840TXT

2 ]T compute the Xu120572u120573uWu =argminX120572120573W(minus ln119901(Xnew120572120573WYnew)) andupdate Xu 120572u 120573uWu End the whole procedure

The method LSOPA can be written as pseudocode inAlgorithm 1 for the transition motion estimation

4 Experiment and Evaluation

We test the proposed method LSOPA in the performance ofthe vision and error test We choose the line initialization[8] (LI) and random initialization [17] (RI) as the comparedmethods LI means that the LD data of the transition motionis obtained linearly between the LD data of the two differentmotions RI means that the LD data of the transition motionis obtained randomly between the LD data of two differentmotions The LD data samples (green ones) in the LD spacecan be seen in Figures 3(a)ndash3(c) which denotes the transitionmotion The tested transition motion is a total of 15 framesand the tested database is the Carnegie Mellon University(CMU)motion capture databasewhich is shortened forCMUdatabase

The three methods can all estimate the corresponding transition motions, but the results are obviously different. The visual effect can be seen in Figures 4(a)-4(c). In the estimation of the transition motion between walking and playing golf, the transition motion generated by LSOPA is visually smooth and natural. The transition motions generated by the other two methods are invalid and messy; they are obviously abnormal human transition motions. In a word, the transition motions of LI and RI do not look like playing golf. This demonstrates that LSOPA has the best visual performance among the three methods.

6 International Journal of Digital Multimedia Broadcasting

Figure 3: The LD data samples of the LSOPA, LI, and RI. (a) LSOPA; (b) LI; (c) RI.

Then the error and the mean error of the estimated transition motion are tested. The error computation follows [19], but the scale of the markers in the 3D model must be considered. The error is computed as follows:

$$E_r = \frac{1}{M_{ar}} \sum_{ma=1}^{M_{ar}} \left\| \mathbf{Bx}_{ma} - \mathbf{Bx}'_{ma} \right\| \quad (23)$$

In (23), $\mathbf{Bx}_{ma}, \mathbf{Bx}'_{ma} \in \mathbb{R}^3$; $\mathbf{Bx}_{ma}$ and $\mathbf{Bx}'_{ma}$ are, respectively, the true position and the estimated position of a joint marker in the 3D model, and $M_{ar}$ is the number of markers in the 3D model.
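Equation (23) is simply the mean Euclidean distance over the joint markers of one frame. A short sketch of the computation (the helper name is ours):

```python
import numpy as np

def marker_error(true_pos, est_pos):
    """Per-frame error of (23): mean Euclidean distance (in mm) between the
    true and estimated 3D positions over the M_ar joint markers."""
    true_pos = np.asarray(true_pos, float)   # shape (M_ar, 3)
    est_pos = np.asarray(est_pos, float)     # shape (M_ar, 3)
    return np.linalg.norm(true_pos - est_pos, axis=1).mean()
```

Averaging this quantity over the 15 transition frames gives the mean errors reported in Figure 5.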

The tests cover four transition motions: walking to playing golf, walking to dancing, walking to swimming, and walking to playing football. The contrast between the true data and the estimated data can be seen in Figures 5(a)-5(d). The error of each frame in the transition motion generated by LSOPA is the lowest among the three methods, and so is the mean error. Generally speaking, the errors of some frames will be close, but the


Figure 4: The visual evaluation of the methods, frames 63-80. (a) The transition motion of LSOPA; (b) the transition motion of LI; (c) the transition motion of RI.

error of LSOPA is kept low and stable on the whole. The test results also reveal that LSOPA has the best performance in the error test.

5. Conclusion

LSOPA is proposed to solve the problem of estimating the human transition motion. LSOPA is tested in visual and error experiments, and its performance is the best among the three compared methods. However, LSOPA still cannot process some complex human transition motions well, and its optimization cannot be initialized randomly; thus LSOPA needs to be improved in future research. The improved method should carry out the unsupervised learning of two complex, unrelated human motions and estimate their smooth and valid transition motion. The 3D human motion model will also be replaced by other, more exquisite models [20-22] in future research.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This work is supported by the University Young Creative Talent Project of Guangdong Province (no. 2016KQNCX111), Science and Technology Plan Project of Guangzhou (no.


Figure 5: The error of several transition motions. (a) The error of the transition motion from walking to playing golf (LSOPA mean error = 309007 mm, LI mean error = 3366073 mm, RI mean error = 284808 mm); (b) the error of the transition motion from walking to dancing (LSOPA mean error = 44449 mm, LI mean error = 225848 mm, RI mean error = 674034 mm); (c) the error of the transition motion from walking to swimming (LSOPA mean error = 68818 mm, LI mean error = 352607 mm, RI mean error = 1912774 mm); (d) the error of the transition motion from walking to playing football (LSOPA mean error = 112612 mm, LI mean error = 616732 mm, RI mean error = 2228574 mm).

201804010280), National Natural Science Foundation of China (no. 61772140), Science and Technology Planning Project of Guangdong Province (no. 2017A010101021), Appropriative Researching Fund for Professors and Doctors, Guangdong University of Education (no. 2015ARF17, no. 2015ARF25), Key Disciplines of Network Engineering of Guangdong University of Education (no. ZD2017004), and Computer Practice Teaching Demonstration Center of Guangdong University of Education (no. 2018sfzx01).

References

[1] Z. Liu, L. Zhou, H. Leung et al., "High-quality compatible triangulations and their application in interactive animation," Computers and Graphics, vol. 76, pp. 60-72, 2018.

[2] A. Sujar, J. J. Casafranca, A. Serrurier et al., "Real-time animation of human characters' anatomy," Computers and Graphics, vol. 74, pp. 268-277, 2018.

[3] B. Walther-Franks and R. Malaka, "An interaction approach to computer animation," Entertainment Computing, vol. 5, no. 4, pp. 271-283, 2014.

[4] D. Archambault and H. C. Purchase, "Can animation support the visualisation of dynamic graphs?" Information Sciences, vol. 330, pp. 495-509, 2016.

[5] L. Yebin, G. Juergen, S. Carsten et al., "Markerless motion capture of multiple characters using multiview image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 11, pp. 2720-2735, 2013.

[6] X. Zhou, M. Zhu, S. Leonardos, and K. Daniilidis, "Sparse representation for 3D shape estimation: a convex relaxation approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 8, pp. 1648-1661, 2017.

[7] X. Zhou, M. Zhu, G. Pavlakos, S. Leonardos, K. G. Derpanis, and K. Daniilidis, "MonoCap: monocular human motion capture using a CNN coupled with a geometric prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 4, pp. 901-914, 2019.

[8] J. M. Wang, D. J. Fleet, and A. Hertzmann, "Gaussian process dynamical models for human motion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 283-298, 2008.

[9] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323-2326, 2000.

[10] N. Lawrence, "Probabilistic non-linear principal component analysis with Gaussian process latent variable models," Journal of Machine Learning Research (JMLR), vol. 6, pp. 1783-1816, 2005.

[11] C. H. Ek, Shared Gaussian Process Latent Variable Models, Oxford Brookes University, Oxford, UK, 2009.

[12] J. Xinwei, G. Junbin, W. Tianjiang et al., "Supervised latent linear Gaussian process latent variable model for dimensionality reduction," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 6, pp. 1620-1632, 2012.

[13] J. B. Tenenbaum, V. de Silva, and J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, no. 5500, pp. 2319-2323, 2000.

[14] W.-Y. Li and J.-F. Sun, "Human motion estimation based on Gaussian incremental dimension reduction and manifold Boltzmann optimization," Acta Electronica Sinica, vol. 45, no. 12, pp. 3060-3069, 2017.

[15] W. Li, J. Sun, X. Zhang, and Y. Wu, "Spatial constraints-based maximum likelihood estimation for human motions," in Proceedings of the 2013 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC 2013), pp. 1-6, IEEE, Kunming, China, 2013.

[16] W. Li, J. Sun, and Z. Song, "Manifold latent probabilistic optimization for human motion fitting based on orthogonal subspace searching," Journal of Information and Computational Science, vol. 11, no. 15, pp. 5357-5365, 2014.

[17] W. Li, J. Sun, and X. Zhang, "Learning for human transition motions estimation based on feature similarity optimization," Journal of Computational Information Systems, vol. 10, no. 5, pp. 2127-2136, 2014.

[18] R. H. Byrd, M. E. Hribar, and J. Nocedal, "An interior point algorithm for large-scale nonlinear programming," SIAM Journal on Optimization, vol. 9, no. 4, pp. 877-900, 1999.

[19] L. Sigal, A. O. Balan, and M. J. Black, "HumanEva: synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion," International Journal of Computer Vision, vol. 87, no. 1-2, pp. 4-27, 2010.

[20] M. Sandau, H. Koblauch, T. B. Moeslund, H. Aanæs, T. Alkjær, and E. B. Simonsen, "Markerless motion capture can provide reliable 3D gait kinematics in the sagittal and frontal plane," Medical Engineering & Physics, vol. 36, no. 9, pp. 1168-1175, 2014.

[21] E. Yeguas-Bolivar, R. Munoz-Salinas, R. Medina-Carnicer, and A. Carmona-Poyato, "Comparing evolutionary algorithms and particle filters for markerless human motion capture," Applied Soft Computing, vol. 17, pp. 153-166, 2014.

[22] J. Gall, B. Rosenhahn, T. Brox et al., "Optimization and filtering for human motion capture - a multi-layer framework," International Journal of Computer Vision, vol. 87, no. 1-2, pp. 75-92, 2010.



6 International Journal of Digital Multimedia Broadcasting

minus2minus1

01

2

minus10

12

3

minus1

0

1

2

3

(a) LSOPAminus2

minus10

12

minus10

12

3

minus1

0

1

2

3

(b) LI

minus2minus1

01

2

minus10

12

3

minus1

0

1

2

3

(c) RI

Figure 3 The LD data samples of the LSOPA LI and RI

Then the error and mean error of estimating the tran-sition motion are tested How to compute the error can beseen in [19] but the scale of markers in the 3Dmodel must beconsidered The error can be computed as follows

119864119903 = 1119872119886119903119872119886119903sum119898119886=1

10038171003817100381710038171003817Bx119898119886 minus Bx101584011989811988610038171003817100381710038171003817 (23)

In (23) Bx119898119886Bx1015840119898119886 isin R3 Bx119898119886 and Bx1015840119898119886 are respectivelythe true position and the estimated position of the marker in

the 3Dmodel both are the jointmarker in the 3DmodelMaris the number of the markers in the 3D model

The transition motion includes four transition motionsthey are walking and playing golf walking and dancingwalking and swimming and walking and playing footballrespectively The contrast of the true data and the estimateddata can be seen in Figures 5(a)ndash5(c)The error of each framein the transition motion generated by LSOPA is the lowestamong the three methods so is the mean error Generallyspeaking the error of some frames will be close but the

International Journal of Digital Multimedia Broadcasting 7

frame=63 frame=64 frame=65 frame=66 frame=67 frame=68 frame=69 frame=70 frame=71

frame=72 frame=73 frame=74 frame=75 frame=76 frame=77 frame=78 frame=79 frame=80

(a) The transition motion of LSOPA

frame=63 frame=64 frame=65 frame=66 frame=67 frame=68 frame=69 frame=70 frame=71

frame=72 frame=73 frame=74 frame=75 frame=76 frame=77 frame=78 frame=79 frame=80

(b) The transition motion of LI

frame=63 frame=64 frame=65 frame=66 frame=67 frame=68 frame=69 frame=70 frame=71

frame=72 frame=73 frame=74 frame=75 frame=76 frame=77 frame=78 frame=79 frame=80

(c) The transition motion of RI

Figure 4 The visual evaluation of the methods

error of LSOPA will be kept low and stable on the wholeThe tested result can also reveal that the LSOPA has the bestperformance in the error test

5 Conclusion

The LSOPA is proposed to solve the problem of estimatingthe human transition motion The LSOPA is tested in theexperiments of the vision and error test and we can find thatthe performance of the LSOPA is the best among the threemethods However the LSOPA still cannot process somecomplex human transition motion well and its initializationof optimization cannot be random thus the LSOPA needto be improved in the future research Then the improvedmethod can carry out the unsupervised learning of thetwo complex irrelevant human motions and estimate theirsmooth and valid transition motion The 3D human motion

model will also be replaced by other exquisitemodels [20ndash22]in the future research

Data Availability

The data used to support the findings of this study areavailable from the corresponding author upon request

Conflicts of Interest

The authors declare no conflicts of interest

Acknowledgments

This work is supported by the University Young CreativeTalent Project of Guangdong Province (no 2016KQNCX111)Science and Technology Plan Project of Guangzhou (no

8 International Journal of Digital Multimedia Broadcasting

LSOPA mean error=309007mmLI mean error=3366073mmRI mean error=284808mm

5 10 150frame

050

100150200250300350400450500550

erro

r (m

m)

(a) The error of the transitionmotion fromwalking to playing golf

LSOPA mean error=44449mmLI mean error=225848mmRI mean error=674034mm

0

20

40

60

80

100

120

140

160

180

erro

r (m

m)

5 10 150frame

(b) The error of the transition motion from walking to dancing

LSOPA mean error=68818mmLI mean error=352607mmRI mean error=1912774mm

5 10 150frame

050

100150200250300350400450500550

erro

r (m

m)

(c) The error of the transition motion from walking to swimming

LSOPA mean error=112612mmLI mean error=616732mmRI mean error=2228574mm

0

50

100

150

200

250

300

350

400

450er

ror (

mm

)

5 10 150frame

(d) The error of the transition motion from walking to playingfootball

Figure 5 The error of several transition motions

201804010280) National Natural Science Foundation ofChina (no 61772140) Science and Technology PlanningProject of Guangdong Province (no 2017A010101021) Appro-priative Researching Fund for Professors and DoctorsGuangdong University of Education (no 2015ARF17 no2015ARF25) Key Disciplines Of Network Engineering ofGuangdong University of Education (no ZD2017004) andComputer Practice Teaching Demonstration Center ofGuangdong University of Education (no 2018sfzx01)

References

[1] Z Liu L Zhou H Leung et al ldquoHigh-quality compatibletriangulations and their application in interactive animationrdquoComputers and Graphics vol 76 pp 60ndash72 2018

[2] A Sujar J J Casafranca A Serrurier et al ldquoReal-time anima-tion of human charactersrsquo anatomyrdquo Computers and Graphicsvol 74 pp 268ndash277 2018

[3] B Walther-Franks and R Malaka ldquoAn interaction approach tocomputer animationrdquo Entertainment Computing vol 5 no 4pp 271ndash283 2014

[4] D Archambault and H C Purchase ldquoCan animation supportthe visualisation of dynamic graphsrdquo Information Sciences vol330 pp 495ndash509 2016

[5] L Yebin G Juergen S Carsten et al ldquoMarkerless motioncapture of multiple characters using multiview image segmen-tationrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 35 no 11 pp 2720ndash2735 2013

[6] X Zhou M Zhu S Leonardos and K Daniilidis ldquoSparserepresentation for 3D shape estimation a convex relaxation

International Journal of Digital Multimedia Broadcasting


Figure 4: The visual evaluation of the methods, showing frames 63–80 of the estimated transition motion. (a) The transition motion of LSOPA; (b) the transition motion of LI; (c) the transition motion of RI.

error of LSOPA remains low and stable on the whole. The test results also reveal that LSOPA achieves the best performance in the error test.
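The error curves in Figure 5 compare the estimated transition poses against the ground-truth motion frame by frame. The section does not spell out the metric, but for 3D pose data reported in millimetres it is commonly the mean Euclidean distance over all joints in each frame; the sketch below illustrates that assumed metric (the function name and array layout are illustrative, not taken from the paper):

```python
import numpy as np

def per_frame_error(estimated, ground_truth):
    """Mean Euclidean distance (mm) over joints, one value per frame.

    estimated, ground_truth: arrays of shape (n_frames, n_joints, 3),
    holding 3D joint positions in millimetres.
    """
    diff = estimated - ground_truth          # (n_frames, n_joints, 3)
    dist = np.linalg.norm(diff, axis=2)      # per-joint distance, (n_frames, n_joints)
    return dist.mean(axis=1)                 # average over joints, (n_frames,)

# Tiny example: two frames, two joints, constant 10 mm offset along x.
gt = np.zeros((2, 2, 3))
est = gt.copy()
est[..., 0] += 10.0
errors = per_frame_error(est, gt)
print(errors)          # [10. 10.]
print(errors.mean())   # 10.0 -- the "mean error" reported per curve
```

Averaging `errors` over the transition frames would then give a single summary number per method, which is how the per-panel mean errors in Figure 5 can be read.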

5. Conclusion

The LSOPA is proposed to solve the problem of estimating the human transition motion. The LSOPA is tested in the visual and error experiments, and its performance is the best among the three methods. However, the LSOPA still cannot process some complex human transition motions well, and the initialization of its optimization cannot be random; thus the LSOPA needs to be improved in future research. The improved method should carry out the unsupervised learning of two complex, unrelated human motions and estimate their smooth and valid transition motion. The 3D human motion model will also be replaced by other, more refined models [20–22] in future research.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This work is supported by the University Young Creative Talent Project of Guangdong Province (no. 2016KQNCX111), the Science and Technology Plan Project of Guangzhou (no. 201804010280), the National Natural Science Foundation of China (no. 61772140), the Science and Technology Planning Project of Guangdong Province (no. 2017A010101021), the Appropriative Researching Fund for Professors and Doctors of Guangdong University of Education (no. 2015ARF17, no. 2015ARF25), the Key Disciplines of Network Engineering of Guangdong University of Education (no. ZD2017004), and the Computer Practice Teaching Demonstration Center of Guangdong University of Education (no. 2018sfzx01).

Figure 5: The error of several transition motions, plotting the error (mm) of LSOPA, LI, and RI against the frame number, with the mean error of each method reported per panel. (a) The error of the transition motion from walking to playing golf; (b) the error of the transition motion from walking to dancing; (c) the error of the transition motion from walking to swimming; (d) the error of the transition motion from walking to playing football.

References

[1] Z. Liu, L. Zhou, H. Leung et al., "High-quality compatible triangulations and their application in interactive animation," Computers and Graphics, vol. 76, pp. 60–72, 2018.

[2] A. Sujar, J. J. Casafranca, A. Serrurier et al., "Real-time animation of human characters' anatomy," Computers and Graphics, vol. 74, pp. 268–277, 2018.

[3] B. Walther-Franks and R. Malaka, "An interaction approach to computer animation," Entertainment Computing, vol. 5, no. 4, pp. 271–283, 2014.

[4] D. Archambault and H. C. Purchase, "Can animation support the visualisation of dynamic graphs?" Information Sciences, vol. 330, pp. 495–509, 2016.

[5] L. Yebin, G. Juergen, S. Carsten et al., "Markerless motion capture of multiple characters using multiview image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 11, pp. 2720–2735, 2013.

[6] X. Zhou, M. Zhu, S. Leonardos, and K. Daniilidis, "Sparse representation for 3D shape estimation: a convex relaxation approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 8, pp. 1648–1661, 2017.

[7] X. Zhou, M. Zhu, G. Pavlakos, S. Leonardos, K. G. Derpanis, and K. Daniilidis, "MonoCap: monocular human motion capture using a CNN coupled with a geometric prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 4, pp. 901–914, 2019.

[8] J. M. Wang, D. J. Fleet, and A. Hertzmann, "Gaussian process dynamical models for human motion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 283–298, 2008.

[9] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323–2326, 2000.

[10] N. Lawrence, "Probabilistic non-linear principal component analysis with Gaussian process latent variable models," Journal of Machine Learning Research (JMLR), vol. 6, pp. 1783–1816, 2005.

[11] C. H. Ek, Shared Gaussian Process Latent Variable Models, Oxford Brookes University, Oxford, UK, 2009.

[12] J. Xinwei, G. Junbin, W. Tianjiang et al., "Supervised latent linear Gaussian process latent variable model for dimensionality reduction," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 6, pp. 1620–1632, 2012.

[13] J. B. Tenenbaum, V. de Silva, and J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, no. 5500, pp. 2319–2323, 2000.

[14] W.-Y. Li and J.-F. Sun, "Human motion estimation based on Gaussion incremental dimension reduction and manifold Boltzmann optimization," Acta Electronica Sinica, vol. 45, no. 12, pp. 3060–3069, 2017.

[15] W. Li, J. Sun, X. Zhang, and Y. Wu, "Spatial constraints-based maximum likelihood estimation for human motions," in Proceedings of the 2013 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC 2013), pp. 1–6, IEEE, Kunming, China, 2013.

[16] W. Li, J. Sun, and Z. Song, "Manifold latent probabilistic optimization for human motion fitting based on orthogonal subspace searching," Journal of Information and Computational Science, vol. 11, no. 15, pp. 5357–5365, 2014.

[17] W. Li, J. Sun, and X. Zhang, "Learning for human transition motions estimation based on feature similarity optimization," Journal of Computational Information Systems, vol. 10, no. 5, pp. 2127–2136, 2014.

[18] R. H. Byrd, M. E. Hribar, and J. Nocedal, "An interior point algorithm for large-scale nonlinear programming," SIAM Journal on Optimization, vol. 9, no. 4, pp. 877–900, 1999.

[19] L. Sigal, A. O. Balan, and M. J. Black, "HumanEva: synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion," International Journal of Computer Vision, vol. 87, no. 1-2, pp. 4–27, 2010.

[20] M. Sandau, H. Koblauch, T. B. Moeslund, H. Aanæs, T. Alkjær, and E. B. Simonsen, "Markerless motion capture can provide reliable 3D gait kinematics in the sagittal and frontal plane," Medical Engineering & Physics, vol. 36, no. 9, pp. 1168–1175, 2014.

[21] E. Yeguas-Bolivar, R. Munoz-Salinas, R. Medina-Carnicer, and A. Carmona-Poyato, "Comparing evolutionary algorithms and particle filters for markerless human motion capture," Applied Soft Computing, vol. 17, pp. 153–166, 2014.

[22] J. Gall, B. Rosenhahn, T. Brox et al., "Optimization and filtering for human motion capture - a multi-layer framework," International Journal of Computer Vision, vol. 87, no. 1-2, pp. 75–92, 2010.
