
Advances on cognitive automation at LGI2P / Ecole des Mines d'Alès

Doctoral research snapshot 2012-2013

June 2013
Research report RR/13-02


Foreword

This research report sums up the results of the 2013 PhD seminar of the LGI2P lab of the Alès National Superior School of Mines. This annual day-long meeting gathers presentations of the latest research results of LGI2P PhD students.

This year's edition of the seminar took place on June 27th. All PhD students presented their work from the past academic year. All presentations were followed by extensive time for very constructive questions from the audience.

The aggregation of the abstracts of these works constitutes the present research report and gives a precise snapshot of the research on cognitive automation conducted in the lab this year.

I would like to thank all lab members, including all PhD students and their supervisors, for their professionalism and enthusiasm in helping me prepare this seminar. I would also like to thank all the researchers who came to listen to the presentations and ask questions, thus contributing to the students' thesis defense training.

I wish you all an inspiring read and hope to see you all again for next year's 2014 edition!

Christelle URTADO

Page 1/58


Page 2/58


Contents

First year PhD students

Nawel AMOKRANE Page 5

Toward a methodological approach for engineering complex systems with formal verification applied to the computerization of small and medium sized enterprises

Mustapha BILAL Page 9

System of Systems’ (SoS) design verification by formal methods and simulation

Mirsad BULJUBASIC Page 13

Efficient local search for large scale combinatorial problems

Sami DALHOUMI Page 17

Methods of pattern classification for the design of a NIRS-based brain computer interface

Nicolas FIORINI Page 21

Assessing complementarity and using it as a basis for Information Retrieval

Abderrahman MOKNI Page 26

Software architecture: modeling and evolution management

Suzy POLKA Page 30

A unified algorithm for the optimisation of the vehicle routing problem with variable traveling times

Darshan VENKATRAYAPPA Page 34

Gesture recognition and object detection: a small report

Page 3/58


Third year PhD students

Afef DENGUIR Page 39

RIDER project: Research for IT Driven EneRgy efficiency based on a multidimensional comfort control

Sébastien HARISPE Page 43

Semantic Measures - From theory to applications

Mambaye LO Page 46

Evaluation in System engineering: application to mechatronic design

Fourth year PhD students

François PFISTER Page 50

An open and distributed framework for designing Domain Specific Graphical Modeling Languages

Thanh-Liem PHAN Page 54

Modeling and verification techniques for incremental development of UML architectures

Page 4/58


Toward a Methodological Approach for Engineering Complex Systems with Formal Verification Applied to the Computerization of Small and Medium-Sized Enterprises

Nawel Amokrane, Vincent Chapurlat, Anne-Lise Courbis, Thomas Lambolais and Mohssine Rahhou

Team ISOE (Interoperable System & Organization Engineering)

Start date: 03/2013

1 Context

Requirements engineering constitutes the earliest phase of any system development project. It was recognized almost four decades ago as an important and critical set of activities [1], since it highly impacts the quality of the resulting product or service. Moreover, applied here in the case of software systems, the late correction of errors or mistakes due to a poor requirements management process can induce costs up to 200 times higher when detected during the operating phase [2]. These errors are also very persistent and frequent, owing to the involvement of different stakeholders who have different cultures and vocabularies and may have conflicting points of view. This generally makes the requirements engineering activities difficult to manage.

Even though they admit the importance of this phase, many organizations do not systematically perform methodological requirements engineering actions, especially when they do not have the means and the time to do so. This is particularly crucial for small and medium-sized enterprises (SMEs), which are generally composed of passionate members who express their needs in their own business vocabulary and do not have the skills to use the existing requirements elicitation tools.

2 Objectives

Our research is driven by an industrial need of an IT-service enterprise named RESULIS. The head of this company observed that most SMEs do not have well-established business procedures. At the same time, they are not always aware of their needs. In consequence, we consider that the end user has to be involved as an active

Page 5/58


actor as soon as possible during the requirements engineering activities. This helps business analysts and designers understand the expected system as explained by business experts. This work aims at giving these experts the means to autonomously and formally describe their business, specify their activities, formulate their needs and finally check whether their description is sufficiently mature and consistent. The goal is then to conceptualize and to develop various conceptual, methodological and technological results.

First, it is proposed to adapt and develop a set of modeling means, languages and existing architectural frameworks that require no special knowledge of modeling notations or IT skills. Indeed, the idea is here to help any stakeholder from the SME to model in confidence, among other needs, the way he performs his business processes, the information and resources he uses and the distribution of the responsibilities in his organization. In the same way, a requirements modeling language has to be proposed. It should be simple enough to be intuitively used by these stakeholders and formal enough to permit verification of the consistency and the completeness of the resulting set of requirements, but also of the organization and processes models. The resulting architectural framework for SMEs has to cover the various modeling views recommended by enterprise engineering standards [3], namely: the requirements, function, information, resource and organization views.

Second, a set of verification and, if possible, validation mechanisms and rules have to be developed in order to increase the confidence level of the various models in each view built by using the modeling means, languages and framework.

Third, a tooled methodology is settled to handle the proposed modeling languages, verification mechanisms and rules, and architectural framework, respecting model and data interoperability expectations.

3 Approach

This work is firstly concerned with defining a simple and intuitive modeling language to be used by business experts. Existing requirements modeling languages and methods have been studied and compared, such as KAOS [4], and UCM and GRL of the user requirements notation (URN) standard [5], to identify the scope of their modeling possibilities. These languages are not intended for SME end users and do not allow defining explicit business rules. Hence we are following the trail of languages expressing business knowledge in a sub-set of natural language understandable by human and computer systems, namely SBVR [6] and ORM [7]. We believe also that textual formulation using natural language is more appropriate for non-expert users. These languages allow modeling the information view, but this is not sufficient to cover the enterprise model as recommended by enterprise engineering standards [3]. To offer a full image of the enterprise model and to guarantee the satisfaction of enterprise modeling principles [8], the requirements modeling language is developed taking into account formal syntactic and semantic definitions of standard constructs for enterprise modeling [9]. We rely on well-founded reference architectures and

Page 6/58


models like the Generic Enterprise Reference Architecture (GERA) [3] and the Zachman framework for information and system architecture [10] as a step towards formal verification and validation and as a guide for the elaboration of an enterprise engineering method.

Furthermore, and for early validation's sake, model transformation mechanisms will be developed to allow conceptual and technical interoperability between the resulting models, and also development tools. Relevant model transformation rules and methods are now under study [11, 12], considering multi-paradigm modeling languages. This would allow developers to produce mock-ups to be validated by end users.

[Figure: Requirements and business modeling. Generic constructs (ISO 19440, UEML, ...) define a meta-model, with both natural-language and formal definitions, instantiated from the generic level through a partial level down to a particular level. The end user / business expert provides a glossary, work-stations, informational objects, scenarios and business procedures; these undergo verification and validation and are transformed, by model transformation, into an enterprise particular model covering business processes and the function, information, resource and organization views.]

Page 7/58


References

1. Bell, T.E., Thayer, T.A.: Software requirements: Are they really a problem? Proceedings of the 2nd International Conference on Software Engineering (ICSE '76) (1976)
2. Boehm, B.W., Papaccio, P.N.: Understanding and controlling software costs. IEEE Transactions on Software Engineering (1988)
3. Bernus, P., Nemes, L.: The contribution of the generalised enterprise reference architecture to consensus in the area of enterprise integration. Proceedings of ICEIMT97 (1997)
4. Van Lamsweerde, A.: Requirements Engineering: From System Goals to UML Models to Software Specifications. Wiley (2009)
5. ITU-T, Recommendation Z.151: User Requirements Notation (URN) - Language Definition. Geneva, Switzerland, approved November (2008)
6. Object Management Group (OMG): Semantics of Business Vocabulary and Business Rules Specification, Version 1.0 (2008)
7. Halpin, T.A.: 9, no. 10, Miller Freeman, San Mateo, CA, pp. 66-72 (1996)
8. Vernadat, F.: UEML: towards a unified enterprise modelling language. International Journal of Production Research (2001)
9. ISO/DIS 19440: Enterprise integration - Constructs for enterprise modelling (2004)
10. Zachman, J.: A framework for information systems architecture. IBM Systems Journal, volume 26 (1987)
11. Nemuraite, L., Skersys, T.: VETIS tool for editing and transforming SBVR business vocabularies and business rules into UML/OCL models (2010)
12. Smialek, M., Jarzebowski, N., Nowakowski, W.: Translation of use case scenarios to Java code. Computer Science (2012)

Page 8/58


System of Systems’ (SoS) design verification by formal methods and simulation

Mustapha Bilal, Nicolas Daclin, and Vincent Chapurlat

LGI2P, Laboratoire de Génie Informatique et d'Ingénierie de Production, ENS Mines
Alès, Parc Scientifique G. Besse, 30035 Nîmes cedex 1, France

{firstname.lastname}@mines-ales.fr

Abstract. Proposed in this paper is the research in progress on the development of a new methodology to achieve System of Systems (SoS) design verification, particularly when facing emergent behaviors that arise from subsystem interactions during the SoS operational mission execution. On one hand, this methodology is based on a formal properties specification and proof approach which will allow the verification of the SoS model with regard to stakeholders' requirements. On the other hand, a simulation with Multi-Agent Systems (MAS) is used to allow the execution of the architectural model of the SoS.

Keywords: System of Systems, Design Verification, Emergence, Multi-Agent Systems, formal methods, system engineering

1 Introduction

A System of Systems (SoS) is a complex system [1] based on the collaboration and interaction between existing subsystems (of technical or socio-technical nature) in order to fulfill a common mission for a limited duration of time (e.g. coalition forces, air transportation systems, networked enterprises, etc.). SoS design or re-design is distinguished from classical system design [2]. Indeed, systems, most of which already exist, are selected according to their relevance, capacity and self-interest to fulfill the SoS mission. They are assembled in a way that respects the requirements of SoS stakeholders and such that their interaction allows them to fulfill this mission. During this assembly, interfaces are required, whether physical (hardware), informational (model and data exchange protocols) or organizational (rules, procedures and protocols), in order to ensure the necessary interoperability of subsystems [3]. Their behavior, decision-making autonomy and their organization should not be impacted or influenced by risky situations or undesired effects resulting from the interaction between these subsystems. Moreover, the resulting behavior of the SoS is not necessarily the expected one, due to the unpredictable emergent behaviors caused by these interactions [4]. Similarly, some SoS emergent behaviors cannot be directly deduced from and linked to the set of subsystem behaviors.

A design verification is necessary during a design process: it is the confirmation that a model does what it was designed to do. Indeed, an error detected in later phases can be costly to solve [5]. Many companies suffer losses due to large scale product recalls, rework costs and unexpected delays in

Page 9/58


launching the products [6]. However, these drawbacks cannot be avoided in the case of SoS design in the absence of efficient design verification. The design framework retained here is system engineering, since it is standardized, it covers a large number of concepts, and its verification process needs more formalization. Hence the importance of design verification in SoS system engineering.

SoS designers are facing a major challenge: how can they gain better control over these behaviors and these properties in a relatively short time, without making an extra effort and without the need for additional knowledge in terms of modeling both behavioral scenarios and properties?

After presenting some definitions in section 2, the proposed design verification approach is presented in section 3. Then, section 4 concludes this paper, drawing the orientation of future works.

2 SoS: concept definition

While there is no universal definition for SoS [7], there is a general consensus about its several characteristics and growing importance. Fixing these characteristics will help designers in their verification tasks.

The set of systems comprising the system of systems are independently useful systems, yet when integrated together they deliver significantly improved capability. A single system, or less than the full combination of all systems, cannot provide the capability achieved by the system of systems [8]. The SoS is considered a complex system [1].

Maier [4] mentioned five principal characteristics:

• “Operational Independence of the Elements: SoS is composed of sub systems which are independent and useful in their own right.
• Managerial Independence of the Elements: The subsystems are separately acquired and assembled but maintain a continuing operational existence independent of the SoS.
• Evolutionary Development: The SoS does not appear fully formed. Its development and existence is evolutionary with functions and purposes added, removed, and modified with experience.
• Emergent Behavior: The SoS performs functions and carries out purposes that do not reside in any subsystem taken isolated but reside in the various interactions between these sub systems. The principal purposes of the SoS are fulfilled by these behaviors considered here then as emergent behaviors.
• Geographic Distribution: The geographic extent of the subsystems is large. Large is a nebulous and relative concept as communication capabilities increase, but at a minimum it means that the sub systems can readily exchange only information and not substantial quantities of mass or energy.”

3 Contribution

The presented research focuses on the complementary role that the different verification techniques can play. The goal is to ensure and subsequently improve real-time design, e.g. in early stages of architectural and interfaces design [5],

Page 10/58


SoS architecture and behavior verification, no matter the size or the complexity of its subsystems.

On one hand, it is about a technical formalization starting from stakeholders' requirements and proof of properties on a model describing the SoS architecture. On the other hand, it is about the use of an advanced simulation technique of the architecture's behavior in order to achieve two goals:

1. Establishing an evaluation approach for several non-functional characteristics [9], as proposed in the case of SoS engineering [2] or systems engineering1, to meet the Model Based System Engineering (MBSE) hypothesis and principles [10]. We keep here the characteristic of robustness of the SoS architecture. Robustness is defined here as the aggregated ability to ensure its stability (it is always able to fulfill its mission despite the different emergence phenomena and external events that threaten its behavior), its integrity (it is always able to fulfill its mission despite the various emergence phenomena and internal events causing an important dysfunction of these subsystems), and its control (it is always able to maximize its performance).

2. Facilitating the detection of possible errors, omissions or uncertainties, and judging the plausibility and credibility of behaviors which are considered unexpected but occur during the simulation.

This work should lead to the proposal of a modeling framework for the SoS architecture based on a mathematical formalization underlying the concept of SoS. This will allow the use of formalization techniques and proof of properties, as well as the formalization of essential non-functional characteristics expected in a SoS. This framework will then be equipped with operational semantics and formal rules of transformation to a Multi-Agent System (MAS) [11], which allows simulating the behavior of the subsystems that form the architecture, and characterizing the plausibility and credibility of the behaviors met during the simulation. The use of multi-agent systems (MAS) is a natural and effective solution to deal with complex situations in distributed environments [12].

Zachman is the architectural framework proposed to work with. There are many models for a system; however, the ones considered important for us, in order to achieve the verification, are: the requirements models, the behavioral models (data processing model and state machine model), the architectural models, the data flow model (interfaces model) and the context model.

SoS requirements can be retrieved from its various characteristics (autonomy, belonging, connectivity, diversity, emergence, evolution and geographic distribution). Once these requirements are well defined, they have to be verified. Model-checking is one of the formal techniques which will be used for the verification of some of these requirements. However, it imposes some limitations (e.g. state-space explosion); therefore, a new mathematical model will be developed to bypass these limitations.

During the simulation, emergent behaviors are generated. Only some of these behaviors are relevant and should be taken into account by the designer. Therefore, this

1 http://www.sebokwiki.org (last accessed date 20/06/2013)

Page 11/58


decision is based on the measurement of the plausibility and credibility of a scenario. Criteria have to be fixed in order to measure these two indexes. Plausibility and credibility are sets of indicators that vary from one domain to another (e.g. performance, utilization, coverage, impact, reliability, consistency, etc.). A scenario is considered plausible and credible when:

$\sum_{i=1}^{n} \mathrm{PlausibilityIndicator}_i \times \sum_{j=1}^{m} \mathrm{CredibilityIndicator}_j > 1 \quad (n, m \in \mathbb{N}^*)$

where n is the number of indicators (factors) that determine the plausibility and m is the number of indicators (factors) that determine the credibility.

4 Conclusion and Perspectives

This paper has introduced the importance of SoS design model verification to reduce cost and time consumption. Formal methods are used to verify the SoS characteristics (properties, e.g. connectivity), and MAS is used to detect unexpected behaviors (emergence) and filter them.

Further work has to be done to check whether the verification of the SoS characteristics/properties allows us to conclude that the whole SoS is verified.

References

1. Chapman, W.L., Bahill, A.T.: Complexity of the system design problem
2. Blanchard, B.S., Fabrycky, W.J.: Systems Engineering and Analysis. 5th edn. (2011)
3. Mallek, S., Daclin, N., Chapurlat, V.: The application of interoperability requirement specification and verification to collaborative processes in industry. Computers in Industry 63(7) (September 2012) 643-658
4. Maier, M.W.: Architecting principles for systems-of-systems. Systems Engineering 1(4) (1998) 267-284
5. Dhillon, B.S.: Reliability in Computer System Design. Ablex Publishing Corporation (1987)
6. The Standish Group International: The Chaos Report. Technical report (1995). Retrieved from http://www.csus.edu/indiv/v/velianitis/161/ChaosReport.pdf (last accessed 13/03/2013)
7. Sage, A.: Processes for System Family Architecting, Design, and Integration. IEEE Systems Journal 1(1) (2007) 5-16
8. United States Air Force Scientific Advisory Board: Report on System-of-Systems Engineering for Air Force Capability Development, Executive Summary and Annotated Brief, SAB-TR-05-04 (July 2005)
9. de Weck, O.L., Ross, A.M., Rhodes, D.H.: Investigating Relationships and Semantic Sets amongst System Lifecycle Properties (Ilities). ESD Working Paper Series, ESD-WP-2012-12 (March 2012)
10. Estefan, J.: Survey of Model-Based Systems Engineering (MBSE) Methodologies. Technical report, INCOSE MBSE Focus Group (2007)
11. Wooldridge, M.: An Introduction to MultiAgent Systems (2008)
12. Khosla, R., Dillon, T.: Intelligent hybrid multi-agent architecture for engineering complex systems. Proceedings of the 1997 IEEE International Conference on Neural Networks 4 (1997) 2449-2454

Page 12/58


PhD Thesis Summary: Efficient Local Search for Large Scale Combinatorial Problems

Mirsad Buljubasic1, Michel Vasquez1, and Haris Gavranovic2*

1 Ecole des Mines d'Ales, LGI2P Research Center, Nimes, France
{mirsad.buljubasic,michel.vasquez}@mines-ales.fr
2 International University of Sarajevo, [email protected]

1 Introduction

Many problems of practical and theoretical importance within the fields of Artificial Intelligence and Operations Research are of a combinatorial nature. Combinatorial problems involve finding values for discrete variables such that certain conditions are satisfied and an objective function is optimized. One of the most widely used strategies for solving combinatorial optimization problems is local search. Local search is an iterative heuristic which typically starts from a feasible solution and improves its quality step by step. At each step, it considers only local operations to improve the cost of the solution.

The aim of the thesis is to develop efficient local search algorithms for a few large scale combinatorial optimization problems3. The focus is set on real world Vehicle Routing Problems (VRP). Some of the problems of interest will also be the Machine Reassignment Problem (MRP), the Generalized Assignment Problem (GAP), the Bin Packing Problem (BPP), and the Large Scale Energy Management Problem (LSEM).

2 Vehicle Routing Problem

The vehicle routing problem (VRP) concerns the distribution of goods between depots and final users. The standard objective is minimizing the total travel distance of a number of vehicles, under various constraints, where every customer must be visited exactly once by a vehicle. VRPs have been intensively studied since a paper by Dantzig and Ramser appeared in 1959, and there have been hundreds of successful applications in many industries.

Real world vehicle routing problems include many constraints (driver regulations, traffic constraints, heterogeneous fleet, hired drivers or vehicles) and

* co-advisor
3 start date of the thesis: December 1st, 2012 (1st year)

Page 13/58


usually have a hierarchical objective function (travel distance, travel time, waiting time, ...).

The main problem to be solved is provided by the Geoconcept company (http://en.geoconcept.com); it is a large scale problem with up to tens of thousands of customers and includes a huge number of different (hard and soft) constraints. The solution approach should consist of constraint programming techniques used for constraint satisfaction (probably using free or commercial constraint satisfaction solvers) and local search techniques used for optimizing the solution. Currently, the subject has just been defined.

3 Other problems

3.1 Machine Reassignment Problem - MRP

The Machine Reassignment Problem was proposed at the ROADEF/EURO 2012 Challenge (http://challenge.roadef.org/2012/en/), the competition organized jointly by the French Operations Research Society (ROADEF) and the European Operations Research Society (EURO). The problem was proposed by Google. The aim of this challenge is to improve the usage of a set of machines. A machine has several resources, for example RAM and CPU, and runs processes which consume these resources. Initially each process is assigned to a machine. In order to improve machine usage, processes can be moved from one machine to another. Possible moves are limited by hard constraints, for example resource capacity constraints, and have a cost. A solution to this problem is a new process-machine assignment which satisfies all hard constraints and minimizes a given objective cost. A detailed description of the problem can be found at http://challenge.roadef.org/2012/files/problem_definition_v1.pdf.

We will present a local search algorithm mainly based on the paper [1].

3.2 Generalized Assignment and Bin Packing

The generalized assignment problem (GAP) is a well-known NP-hard combinatorial optimization problem. It considers a situation in which n jobs have to be processed by m agents. The agents have a capacity expressed in terms of a resource which is consumed by job processing. The decision maker seeks the minimum cost assignment of jobs to agents such that each job is assigned to exactly one agent, subject to the agents' available capacity. The version of the problem with more than one resource is called the Multi-Resource Generalized Assignment Problem (MRGAP).

MRP is a generalization of MRGAP with some additional constraints. Therefore, the proposed local search algorithm should be similar to the one for MRP.

The bin packing problem (BPP) is a well-studied combinatorial optimization problem requiring the assignment of a given set of items to a minimum number of identical bins with fixed capacity. The Multi-Capacity Bin Packing Problem

Page 14/58


(MCBPP) is a generalization of the classical bin packing problem in which the machine (bin) capacity and task (item) sizes are given by multiple (resource) dimensions.

We will present a local search algorithm for MCBPP based on transforming MCBPP to MRP. This reduction is simple and can be found in [3].

3.3 Large Scale Energy Management Problem - LSEM

Large Scale Energy Management is a problem proposed at the ROADEF/EURO 2010 Challenge (http://challenge.roadef.org/2010/en/), the competition organized jointly by the French Operations Research Society (ROADEF) and the European Operations Research Society (EURO).

The goal is to fulfil the demand for energy over a time horizon of several years, with respect to the total operating cost of all machinery. The problem was posed by Electricité de France (EDF) and it is a real world industry problem solved at EDF. The problem includes two mutually dependent and related sub-problems: determining the schedule of plant outages and determining an optimal production plan to satisfy demand. Both the outage schedule and the production plan must satisfy a large number of technical constraints. The objective is to minimize the expected cost of production. A detailed description of the problem can be found at http://challenge.roadef.org/2010/files/sujetEDFv22.pdf.

We will present a local search algorithm mainly based on the paper [2].

4 What has been done

In the first 6 months of the thesis:

– bibliography on VRP variants
– CVRP – Capacitated Vehicle Routing Problem
  • implementing basic classes in C++
  • implementing classical constructive heuristics (Clarke-Wright savings, insertion, ...)
  • constructive heuristic using matching
  • simple improvement procedures: 2-opt, 3-opt, insertion, swap, ...
– RVRP – Rich Vehicle Routing Problems
  • bibliography on RVRP (real-world VRP, many side constraints, ...)
  • implementing basic classes in C++
  • collecting and analyzing data
– submitting a paper on the Machine Reassignment Problem (MRP): An Efficient Multi-Start Local Search with Noising Strategy for Google Machine Reassignment Problem ([1])


– submitting a paper on the Large Scale Energy Management problem (LSEM): Orchestrating CSP and Local Search to Solve a Large Scale Energy Management Problem ([2])
– Bin Packing Problem
  • transforming to MRP
  • testing on instances from the literature
  • to do: improve the algorithm, submit a paper
– writing the chapter on the Greedy Randomized Adaptive Search Procedure (GRASP) approach for the book "Métaheuristiques pour l'optimisation difficile"
  • Michel Vasquez, Mirsad Buljubasic: Une procédure de recherche itérative en deux phases : la méthode GRASP
  • GRASP approach for the Set Covering Problem (combined with Tabu Search) - satisfactory results
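The simple improvement procedures listed above can be illustrated with a 2-opt pass on a single closed route (a generic sketch; the thesis implementation is in C++ and is not reproduced here):

```python
import math

def route_length(route, dist):
    """Length of a closed route visiting the nodes in the given order."""
    return sum(dist[route[i]][route[(i + 1) % len(route)]]
               for i in range(len(route)))

def two_opt(route, dist):
    """Repeatedly reverse segments of the route while that shortens it."""
    best = route[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if route_length(cand, dist) < route_length(best, dist):
                    best, improved = cand, True
    return best

# Four customers on a unit square; the route 0-2-1-3 crosses itself.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.hypot(px - qx, py - qy) for (qx, qy) in pts] for (px, py) in pts]
route = two_opt([0, 2, 1, 3], dist)  # uncrossed route of length 4
```

Reversing the segment between two edges removes route crossings, which is why 2-opt is a standard first improvement procedure for CVRP routes.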

References

1. Mirsad Buljubasic, Haris Gavranovic (2013). "An Efficient Multi-Start Local Search with Noising Strategy for Google Machine Reassignment Problem." Annals of Operations Research, submitted.
2. Mirsad Buljubasic, Haris Gavranovic (2013). "Orchestrating CSP and Local Search to Solve a Large Scale Energy Management Problem." RAIRO, submitted.
3. R. Masson, T. Vidal, J. Michallet, P.H.V. Penna, V. Petrucci, A. Subramanian, and H. Dubedout (2012). "An iterated local search heuristic for multi-capacity bin packing problems and machine reassignment." http://www.cirrelt.ca/DocumentsTravail/CIRRELT-2012-70.pdf.
4. Aarts, Emile and Lenstra, Jan K. (1997). "Local Search in Combinatorial Optimization." John Wiley & Sons, Inc., New York, NY, USA.
5. Pisinger, David and Ropke, Stefan (2007). "A general heuristic for vehicle routing problems." Comput. Oper. Res. 34(8), p. 2403–2435.
6. Yagiura, Mutsunori and Iwasaki, Shinji and Ibaraki, Toshihide and Glover, Fred (2004). "A very large-scale neighborhood search algorithm for the multi-resource generalized assignment problem." Discret. Optim. 1(1), p. 87–98.


Methods of pattern classification for the design of a NIRS-based brain computer interface

Sami DALHOUMI¹, Gérard DRAY¹, Jacky MONTMAIN¹, Gérard DEROSIERE²,³, Stéphane PERREY², Tomas WARD³

¹LGI2P, Ecole des Mines d'Alès, Parc Scientifique Georges Besse, Nîmes, France. ²Movement to Health (M2H), Montpellier-1 University, EuroMov, 700 Avenue du Pic Saint-Loup - 34090 Montpellier, France. ³Biomedical Engineering Research Group (BERG), National University of Ireland Maynooth (NUIM), Co Kildare, Ireland. [email protected]

Abstract. A Brain-Computer Interface (BCI) is a communication system that offers the possibility to act upon the surrounding environment without using the nervous system's efferent pathways. One of the most important parts of a BCI is the pattern classification system, which translates mental activities into commands for an external device. This work aims at providing new pattern classification methods for the development of a Brain-Computer Interface based on Near Infrared Spectroscopy. To this end, a thorough study of the machine learning techniques used for developing BCIs has been conducted.

Keywords: Brain-Computer Interfaces, Near Infrared Spectroscopy, Pattern Classification, Machine Learning.

1 Introduction

In order to produce different motor or cognitive tasks, the human brain generates different activity patterns that can be monitored by a BCI and translated directly into commands.

There are two types of brain activity that can be used for a BCI: electrophysiological and hemodynamic [1]. The majority of BCIs developed so far are based on Electroencephalography (EEG). EEG is a non-invasive technology that measures electrical brain activity through electrodes placed on the scalp. This technology has many drawbacks, such as the poor quality of signals, the sensitivity to many sources of noise, and the large number of electrodes needed to monitor signals, which make its use outside a research context impossible. As an alternative to EEG technology, Near Infrared Spectroscopy (NIRS) presents itself as an attractive method for developing a BCI that can be used in daily life [2]. NIRS is an optical spectroscopy method that measures the task-induced blood oxygen level dependent (BOLD) response. NIRS-based BCIs have many benefits, like robustness to noise, good spatial resolution and portability. But this technology is still in its maturation phase and further research must be conducted to make it reliable for usage outside the lab.

2 Brain-Computer Interfaces general architecture

Designing a BCI is a hard task that requires several skills, including neurosciences, biomedical engineering, computer science, etc. The main functions of a BCI are signal acquisition, feature extraction, pattern classification and translation into commands; feedback is also required for online paradigms (Fig. 1):

- Signal acquisition: different types of sensors are used for monitoring brain signals depending on the technology employed. In the case of NIRS-based BCIs, a near-infrared light emitting source like a laser or a diode and a light detector like a photodiode are used.

- Feature extraction: relevant features are extracted from the raw signals. Different signal processing techniques are used for this task.

- Pattern classification: features are mapped into different classes corresponding to different mental states. This task is done by automatic machine learning algorithms called classifiers.

- Translation into commands: a specific command is associated with each class in order to control or communicate with an external device.

- Feedback: in online paradigms, feedback about real performance is sometimes returned to the user.

Fig. 1. Brain-Computer Interface architecture [3]
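As a toy illustration of the pattern-classification stage described above, a nearest-centroid classifier on synthetic two-dimensional "features" (a minimal stand-in for the LDA/SVM classifiers used in the NIRS literature; all data and names here are invented):

```python
def train_centroids(samples, labels):
    """Compute the mean feature vector of each class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        s = sums.setdefault(y, [0.0] * len(x))
        sums[y] = [a + b for a, b in zip(s, x)]
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def classify(x, centroids):
    """Assign x to the class whose centroid is nearest (Euclidean)."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(x, centroids[y])))

# Synthetic features for two mental states ('rest' vs 'task').
centroids = train_centroids(
    [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]],
    ['rest', 'rest', 'task', 'task'])
```

Training amounts to averaging the feature vectors per mental state; classification maps a new feature vector to the nearest class mean.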


3 Pattern classification in NIRS-based BCIs

In recent years, many pilot studies have been conducted to solve problems related to NIRS-based BCIs and enhance their efficiency. Most of these studies focused on the signal acquisition [2] or feature selection [4] parts of a BCI, and few addressed problems related to the pattern classification component of NIRS-based BCIs. The possibility of classifying different motor or cognitive tasks using NIRS-based BCI technology has been demonstrated by many research groups [5], [6], but simple experimental protocols were used and only common classification techniques were applied. The use of enhanced machine learning techniques was crucial for the development of EEG-based BCI technology; these techniques addressed the problems of long calibration sessions before the use of a BCI system and the high variability of monitored signals [3], [7]. Nowadays, NIRS-based BCI technology faces the same problems, and the use of adaptive and robust machine learning techniques is necessary to introduce it into a daily-life context. The high variability of brain signals makes it impossible for an individual classifier to map activation patterns from different sessions and different users to disjoint classes.

Different combination schemes of classifiers and online adaptation of classifiers seem to be promising methods for NIRS signal classification. Many studies showed that the use of multiple classifier systems is crucial for modeling very complex systems, because no single classifier solves all problems [8]. In the context of EEG-based BCI design, some papers highlight the importance of using dynamic combinations of classifiers to attain good classification rates in the context of transfer learning (transferring models between subjects and between sessions) [3], [7], [9], [10]. But, to our knowledge, there are no studies related to ensemble learning in the context of NIRS-based BCI design. Fig. 2 illustrates a multiple classifier framework: different classifiers can be built by diversifying input data or diversifying models, and the merging process can be a majority vote, a weighted average, or a linear or non-linear function [11].


Fig. 2. Multiple classifier system
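The simplest merging process for such a framework is majority voting over the individual classifiers' outputs; a minimal sketch (ties broken here in favor of the earliest classifier, an arbitrary choice):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine several classifiers' labels for one sample by majority
    voting; ties are broken by the earliest classifier in the list."""
    counts = Counter(predictions)
    top = max(counts.values())
    return next(p for p in predictions if counts[p] == top)
```

For example, majority_vote(['left', 'right', 'left']) returns 'left'. Weighted averaging of classifier confidences is the natural next step when the base classifiers output scores rather than hard labels.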


4 Conclusion

In this work, we highlight the need to develop adaptive and robust pattern classification techniques in order to make NIRS-based Brain-Computer Interfaces more reliable. Many studies showed that no single classifier can model all systems, and a set of classifiers is usually more powerful than individual ones when dealing with complex systems. In a recent pilot study, we investigated the detection of attention deficits through hemodynamic activity of the brain by applying usual pattern classification techniques to signals monitored by a NIRS system [12]. In the next steps, we will try to design more advanced techniques suited to this type of signal.

References

1. Nicolas-Alonso, L.F., Gomez-Gil, J.: Brain Computer Interfaces, a Review. Sensors. 12, 1211–1279 (2012).
2. Coyle, S., Ward, T., Markham, C., McDarby, G.: On the suitability of near-infrared (NIR) systems for next-generation brain computer interfaces. Physiological Measurement. 25, 815–822 (2004).
3. Tu, W., Sun, S.: A subject transfer framework for EEG classification. Neurocomputing. 1–11 (2011).
4. Zhang, Q., Strangman, G.E., Ganis, G.: Adaptive filtering to reduce global interference in non-invasive NIRS measures of brain activation: How well and when does it work? NeuroImage. 45, 788–794 (2009).
5. Power, S.D., Falk, T.H., Chau, T.: Classification of prefrontal activity due to mental arithmetic and music imagery using hidden Markov models and frequency domain near-infrared spectroscopy. Journal of Neural Engineering. 7 (2010).
6. Sitaram, R., Zhang, H., Guan, C., Thulasidas, M., Hoshi, Y., Ishikawa, A., Shimizu, K., Birbaumer, N.: Temporal classification of multichannel near-infrared spectroscopy signals of motor imagery for developing a brain computer interface. NeuroImage. 34, 1416–1427 (2007).
7. Krauledat, M., Tangermann, M., Blankertz, B., Müller, K.R.: Towards Zero Training for Brain-Computer Interfacing. PLoS One. 3 (2008).
8. Wozniak, M., Graña, M., Corchado, E.: A survey of multiple classifier systems as hybrid systems. Information Fusion (2013).
9. Rakotomamonjy, A., Guigue, V.: BCI Competition III: Dataset II - Ensemble of SVMs for BCI P300 Speller. IEEE Trans. Biomed. Eng. 55(3) (2008).
10. Lu, S., Guan, C., Zhang, H.: Unsupervised Brain Computer Interface Based on Intersubject Information and Online Adaptation. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 17 (2009).
11. Kittler, J., Hatef, M., Duin, R.P.W., Matas, J.: On Combining Classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence. 20 (1998).
12. Derosière, G., Dalhoumi, S., Billot, M., Perrey, S., Ward, T., Dray, G.: Towards a NIRS-based detection of lapses in attention: a Support Vector Machines study. Proceedings of the 16th ICNIRS Conference (2013).


Assessing complementarity and using it as a basis for Information Retrieval

Nicolas Fiorini, Jacky Montmain, Sylvie Ranwez, and Vincent Ranwez

Ecole Nationale Superieure des Mines d'Ales - LGI2P
Parc scientifique Georges Besse, Nîmes F-30035 cedex 1
{nicolas.fiorini,jacky.montmain,sylvie.ranwez}@mines-ales.fr
{vincent.ranwez}@supagro.inra.fr

Abstract. The goal of an Information Retrieval (IR) system is to select relevant information within a corpus in order to fulfil the needs expressed by a query. Conventionally, the results are ordered by their relevance, which is computed individually from the similarity between each result and the user's query. However, the most relevant result does not always fulfil the user's information needs, especially when the query covers several domains or is ambiguous. Result diversification aims to eliminate redundancy and add novelty among the first results; this process contributes to better global user satisfaction. To extend this idea, instead of computing a score for each document, we propose to evaluate the relevance of a set of documents with regard to a query using two criteria, namely the complementarity — partly based on diversity — of the documents and their relevance to the query.

Keywords: information retrieval, result diversification, complementarity

1 Introduction to IR

The goal of an Information Retrieval (IR) system is to select relevant information within a corpus in order to fulfil the needs expressed by a query. A whole system is composed of three main tasks:

– indexing, to make the whole corpus more compact and logical
– matching, the step that selects the relevant results
– post-process operations, such as result diversification, result personalization, relevance feedback, etc., aiming to make the final set of results more satisfying

The results (also called documents) are generally displayed as a list, ranked according to each document's Retrieval Status Value (RSV) [1]. This value is computed during the second task and is basically based on the similarity between the document and the query. Even though the relevance models have been well studied and improved over the years, they only rely on word presence, absence, frequency and co-occurrence. Models differ from each other in what is determined as relevant or irrelevant, but it is possible to formally synthesise all models as a quadruple {D, Q, F, RSV(di, qj)}, where D is the document representation — the index —, Q is the set of queries, F is the framework — based on boolean relations, vector/linear algebra, probability distributions, etc. — and RSV is the function assigning a score to a document di with regard to a query qj [2]. In other words, relevance models do not take the other documents into account. This constraint led the IR community to implement various post-process operations, in order to refine the results by also considering other documents when (re)ordering results [3].
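A concrete instance of this quadruple is the vector-space model with a cosine-similarity RSV; a minimal sketch over raw term-frequency vectors (no weighting scheme, for illustration only):

```python
import math

def rsv(doc_terms, query_terms):
    """Cosine similarity between the term-frequency vectors of a document
    and a query: one possible RSV(di, qj) in the {D, Q, F, RSV} framework."""
    vocab = set(doc_terms) | set(query_terms)
    d = [doc_terms.count(t) for t in vocab]
    q = [query_terms.count(t) for t in vocab]
    dot = sum(a * b for a, b in zip(d, q))
    norm = math.sqrt(sum(a * a for a in d)) * math.sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0
```

Note that each document is scored in isolation, which is exactly the limitation that motivates post-process operations such as diversification.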

2 Search result diversification

Nowadays, many web searches are exploratory. They consist in gathering information from several domains in order to cover users' information needs [4]. For instance, consider a user's query: "buy a car". It is much more informative for the user to propose a list of cars with different sizes, brands, horsepower and other features than a list of homogeneous cars, e.g. from only one brand. Diverse results will thus enhance user satisfaction [5] by reducing redundancy.

Result diversification is also useful when the query is ambiguous [6]. A traditional IR system cannot tackle this issue correctly, as it makes no difference among meanings. For an ambiguous query like "java", a satisfying result list should contain the programming language, the coffee and the island it comes from. Another solution to the ambiguity issue is to implement a concept-based IR system [7], but this requires resources such as a conceptual taxonomy or a domain ontology [8]. These structures are not always available, or their content may lack information, sometimes leading to hybrid IR systems such as [9, 10].

Including diversity in the set or in a subset of results is a solution to ambiguous queries but also to unambiguous queries dealing with several aspects or domains. [5] categorized diversity into three definitions based on: (i) content, relying on the distance to previously selected documents; (ii) novelty, based on the gain in information; and (iii) coverage, covering all or the most important interpretations of the query. These definitions are used as a basis for evaluation [11–13] but also as a criterion during a post-process reranking of results in IR.

2.1 Implementation of result diversification in IR

Once diversity is defined, it needs to be implemented in IR systems as a post-process: this is result diversification. Usually, the other criterion to which diversity is added is relevance. The first work trying to optimize both relevance and diversity in results was, to the best of our knowledge, the Maximal Marginal Relevance — MMR [15].

Methods usually rely on pairwise distances between documents, denoted d(., .), to assess diversity (see the third paragraph of section 2). Few works explain how this distance is set or chosen, because the definition aims to be general. This way, every method can be adapted to the data set: with or without data structure, semantically or terminologically indexed, etc.

Result diversification is known to be an NP-hard problem. Many works propose an algorithm adapted to the exposed definition [3, 14–16], most often a greedy algorithm.
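The greedy MMR scheme of [15] can be sketched as follows (λ trades off relevance against redundancy; the document representations, relevance scores and similarity function below are toy assumptions):

```python
def mmr_rerank(candidates, relevance, sim, lam=0.7, k=10):
    """Greedy Maximal Marginal Relevance: repeatedly select the candidate
    maximizing  lam * relevance(d) - (1 - lam) * max_{s in selected} sim(d, s),
    so each pick balances query relevance against redundancy."""
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        def mmr(d):
            redundancy = max((sim(d, s) for s in selected), default=0.0)
            return lam * relevance(d) - (1 - lam) * redundancy
        pick = max(pool, key=mmr)
        selected.append(pick)
        pool.remove(pick)
    return selected

# Toy corpus: 'b' nearly duplicates 'a'; 'c' is less relevant but novel.
feats = {'a': {1, 2}, 'b': {1, 2, 3}, 'c': {9}}
rel = {'a': 1.0, 'b': 0.9, 'c': 0.5}
def jaccard(x, y):
    return len(feats[x] & feats[y]) / len(feats[x] | feats[y])
top2 = mmr_rerank(['a', 'b', 'c'], rel.get, jaccard, lam=0.5, k=2)
```

With λ = 0.5 the near-duplicate 'b' is skipped in favor of the novel 'c', which is the behavior diversification is meant to produce.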

3 Defining the complementarity

Intuitively, it is very easy to say whether two items are complementary. For instance, a smartphone and a dedicated cover are complementary. Formally though, this notion is difficult to define: a car and a smartphone are not complementary... except if the smartphone is used as a GPS. How is this judgement made? Which parameters does it rely on?

To the best of our knowledge, complementarity has not yet been formally defined the way we understand it. A definition exists in mathematics, especially in set theory. Given a set X and a subset S, the complement of S, denoted S̄, is everything contained in X but not in S, i.e. X \ S. Set-theoretic complementarity obeys several laws, such as S ∩ S̄ = ∅. This rule means that two complementary sets are disjoint, which is not compatible with our vision, explained below.
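This set-theoretic disjointness is easy to check directly (a small Python illustration):

```python
X = {1, 2, 3, 4, 5}
S = {1, 2}
S_bar = X - S                 # the complement of S in X, i.e. X \ S
assert S & S_bar == set()     # S and its complement are disjoint
assert (S | S_bar) == X       # together they cover X
```

It is precisely this forced disjointness that makes the mathematical notion unsuitable here, since our complementary items are expected to overlap.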

Complementarity seems to be an extension of diversity. When assessing the diversity of a set, one only cares about how different the items are. On the one hand, this is part of complementarity as we see it, because two items with the same features are not complementary but redundant: there is no — or at least less — use in having two toothbrushes than in having a toothbrush and toothpaste. Diversity thus seems to play a role when evaluating complementarity. On the other hand, too much diversity implies no complementarity — cf. the car and smartphone example.

Therefore, complementarity may lie somewhere between similarity and diversity. The main motivation of our work is to improve information retrieval, a potential field in which complementarity could be used. For example, consider the query "biologist and computer scientist". Let us assume there is no result in the corpus (of professional profiles) matching both professions. The best professional profiles answering this query would be an expert in biology and an expert in computer science. Diversification would provide such a set, with, say, the biologist first, then the computer scientist. However, the user precisely typed both professions in only one query. This means that (s)he wanted some link between the two profiles, a synergy. Two experts sharing some skills — e.g. English speakers, interest in the other's profession, etc. — are thus much more relevant to the user. The result would not be a ranked list of single documents (here, professional profiles) but groups of potentially relevant documents.

3.1 Approaches

As diversity can be considered as a whole or used as a criterion, it seems important to proceed in two steps, namely (i) defining an objective function modelling complementarity and (ii) implementing it in an IR system.

Formal definition of complementarity. Even though we have thought about potential models, none of them is final yet. The draft objective function to be maximized relies on two somewhat opposing properties:

– the more diverse the set, the higher the score should be
– a minimum overlap among items of the set must exist (meaning the intersection of items should not be empty); if such an overlap does not exist, the final score should be highly penalized

As a first try, we worked on pairwise complementarity — evaluating complementarity between only two items. Assessing a complementarity score for a set of items is much more difficult because some questions remain unanswered. First of all, should the model optimize a single overlap between two items within the set, or should it promote overlap between each item and every other? On which parameters should complementarity rely: similarity, commonality/difference, proximity? These are semantic parameters, but is it possible to assess them on terminological sets?
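One hypothetical way to encode the two properties above for a pair of items represented as feature sets (every name and formula here is an illustrative assumption on our part, not a final model):

```python
def pairwise_complementarity(a, b, penalty=0.0):
    """Hypothetical score for two feature sets: rewards diversity (the
    symmetric difference) but collapses to `penalty` when the items share
    nothing, enforcing the minimum-overlap requirement."""
    if not (a & b):
        return penalty  # no shared feature: no synergy, heavily penalized
    return len(a ^ b) / len(a | b)  # diverse-but-overlapping pairs score high

# Identical items are redundant (score 0); disjoint items have no synergy
# (penalty); partial overlap scores strictly positive.
biologist = {'biology', 'statistics', 'english'}
computer_scientist = {'programming', 'statistics', 'english'}
car = {'wheels', 'engine'}
score = pairwise_complementarity(biologist, computer_scientist)  # 0.5
```

The sketch reproduces the intended behavior on the running examples: the biologist and the computer scientist are complementary, while the car and either profile are not.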

Complementarity as a criterion. Diversity has been defined as an objective function to maximize, then implemented as a criterion during result diversification. We would like to proceed the same way with complementarity: formal definition, then implementation. We will need to aggregate at least the individual relevance of each document to a given query to evaluate the overall relevance of a set — including the synergy generated by the complementarity of the set. As diversification is an NP-hard problem, our guess is that complementarity is NP-hard as well and that we will have to rely on heuristic approaches.

3.2 Future work

Our idea of complementarity is complex and case-specific — person-complementarity differs from device-complementarity — and no evaluation exists yet. During the project, we will try to overcome all these problems, starting with the definition of complementarity.


References

1. Buckley, C.: Implementation of the SMART information retrieval system. Cornell University (1985)
2. Baeza-Yates, R., Ribeiro-Neto, B., et al.: Modern Information Retrieval, Second Edition, p. 58. ACM Press, New York (2011)
3. Agrawal, R., Gollapudi, S., Halverson, A., Ieong, S.: Diversifying search results. In: Proceedings of the Second ACM International Conference on Web Search and Data Mining, pp. 5–14. ACM (2009)
4. Bozzon, A., Brambilla, M., Ceri, S., Fraternali, P.: Liquid Query: Multi-Domain Exploratory Search on the Web. Interfaces, pp. 161–170. ACM Press (2010)
5. Drosou, M., Pitoura, E.: Search result diversification. ACM SIGMOD Record 39, 41–47 (2010)
6. Minack, E., Demartini, G., Nejdl, W.: Current Approaches to Search Result Diversification. In: Proc. of the 1st Intl. Workshop on Living Web (2009)
7. Stokoe, C., Oakes, M.P., Tait, J.: Word sense disambiguation in information retrieval revisited. In: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval - SIGIR '03, p. 159. doi:10.1145/860465.860466 (2003)
8. Haav, H., Lubi, T.: A survey of concept-based information retrieval tools on the web. In: Proceedings of the 5th East-European Conference (2001)
9. Bhagdev, R., Chapman, S., Ciravegna, F., Lanfranchi, V., Petrelli, D.: Hybrid Search: Effectively Combining Keywords and Semantic Searches. The Semantic Web: Research and Applications, 5021(ii), 554–568 (2008)
10. Giunchiglia, F., Kharkevich, U., Zaihrayeu, I.: Concept search. The Semantic Web: Research and Applications (2009)
11. Jarvelin, K., Kekalainen, J.: Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems 20(4), 422–446 (2002)
12. Clarke, C.L.A., Kolla, M., Cormack, G.V., Vechtomova, O., Ashkan, A., Buttcher, S., MacKinnon, I.: Novelty and diversity in information retrieval evaluation. In: Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval - SIGIR '08 (2008)
13. Zhai, C., Cohen, W., Lafferty, J.: Beyond independent relevance: methods and evaluation metrics for subtopic retrieval. Development in information retrieval (2003)
14. Ziegler, C.N., McNee, S.M., Konstan, J.A., Lausen, G.: Improving recommendation lists through topic diversification. In: Proceedings of the 14th International Conference on World Wide Web, pp. 22–32. ACM (2005)
15. Carbonell, J., Goldstein, J.: The use of MMR, diversity-based reranking for reordering documents and producing summaries. In: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 335–336. ACM (1998)
16. Gollapudi, S., Sharma, A.: An axiomatic approach for result diversification. In: Proceedings of the 18th International Conference on World Wide Web, pp. 381–390. ACM (2009)


Software architecture: modeling and evolution management

Abderrahman Mokni+, Marianne Huchard*, Christelle Urtado+, and Sylvain Vauttier+

+LGI2P, Ecole Nationale Superieure des Mines d'Alès, Nîmes, France
*LIRMM, CNRS and Université de Montpellier 2, Montpellier, France

{Abderrahman.Mokni, Christelle.Urtado, Sylvain.Vauttier}@mines-ales.fr, [email protected]

1 Introduction

Component-based software development (CBSD) emerged over the last two decades as a new approach to produce software faster and at a lower cost. The key concept of CBSD is reuse, which consists in building software from pre-existing, decoupled off-the-shelf components. The selected components are then connected and structured within an architecture. The latter is the blueprint of software system construction and evolution [1]. Software architectures are usually described with Architecture Description Languages (ADLs). Dedal [2] is an ADL that models architectures at three abstraction levels which fit the CBSD process. The remainder of this paper is organized as follows: Section 2 briefly presents the background of our research work. Section 3 lists contributions and objectives. Finally, Section 4 concludes with future work directions.

2 Background

2.1 Architectural modeling

Software architecture. A software architecture defines the structure of a software system: it describes the constituents of this system, their relationships and, furthermore, important software design decisions including functional behaviors, interactions and so on [3]. Architectural elements can be classified into two principal types: components and connectors.

Components. A software component is a decoupled architectural element which provides functionalities and/or data. It can be deployed independently or within a software system. A component should contain three constituents: interfaces, an implementation and a descriptive specification [3].

Connectors. Software connectors are architectural elements which perform connection management functions between components, as well as auxiliary non-functional tasks, such as managing information about component interactions. Usually, connectors define communication rules which govern the interactions between components [3].


Architecture Description Languages. An architecture description language is a domain-specific language that provides features for modeling a software system's conceptual architecture, as distinguished from the system's implementation. An ADL must support the building blocks of an architectural description [3].

Architecture levels. In the approach proposed by [2], which serves as a basis for this work, software architectures can be modeled at three abstraction levels: specification, configuration and assembly.

Specification level. An architecture specification is a formal translation of system requirements. Software components, at this level, are only described by their roles (usage). These roles are then used to search for matching component classes in a component repository.

Configuration level. The architecture configuration is the most commonly used level to describe software architectures. In [2], it captures implementation decisions by selecting the appropriate component classes from the repository to implement component roles.

Assembly level. Architecture assemblies define how component instances are created and initialized in order to deploy architectures in different execution contexts.

The Dedal ADL. One of the main features of Dedal is its support for modeling the three architecture levels.

Modeling the specification level. Using the Dedal notation, architects specify component roles as abstract and partial component types, connections, and architecture behavior using SOFA 2.0 behavior protocols [4].

Modeling the configuration level. To increase reuse, Dedal enables the definition of concrete component types to classify component classes. The latter must match the abstract component types defined at the specification level.

Modeling the assembly level. In addition to the instantiation of component classes as component instances, Dedal integrates assembly constraints that define the rules applied to build assemblies.

2.2 Managing architecture-centric software evolution

Software architectures may change over time, due to the need to improve software and/or remove identified deficiencies. These changes can be static, when they occur at design time, or dynamic, when they occur at runtime. The second type of change is more complicated to manage, since it must not affect the normal execution of the software. Whenever they happen, at design time or at runtime, changes must be propagated to all architecture levels in order to avoid architecture drift and erosion [1].


Architecture-centric software evolution. As defined in [3], architecture-centric evolution is the collection of software architectural activities that change a software system from an older version to a new one, triggered by architecture changes. These architectural activities comprise modifications of both the software architecture model and its runtime software counterpart.

Dedal support for architecture-centric evolution. Dedal also supports

forward and reverse architecture-centric evolution [5]. It models changes as first-class entities [6] and manages versioning and change propagation in an ad hoc

manner. The evolution process in Dedal is managed and driven by connectors [7].

3 Objectives and contributions

3.1 Classifying ADLs based on their support of the three abstraction levels

As a first contribution, we attempt to classify a set of existing ADLs, such as C2SADEL [8], Wright [9] and SOFA 2.0 [10], based on their support of the three architecture levels. The proposed classification extends the one in [3].

The latter focuses mainly on comparing architectural elements and type systems at different abstraction levels. In our work, we enhance this classification by adding new criteria such as the level of formality of ADLs and their support of architectural analysis. These criteria are important since we are interested in formalizing the Dedal model by specifying all constraints that apply within or between description levels for a given architecture.

3.2 Classifying ADLs based on their support of architecture-centric evolution

As a second contribution, we will focus on classifying ADLs based on their support for handling evolution. We will classify ADLs along three main aspects: characteristics of architecture change, activities of architecture evolution, and type of specification (level and type of formalism, analysis, and level of automation).

3.3 Formalizing relationships between the three architecture levels

To automate analysis activities and manage both versioning and change propagation from one abstraction level to another, we need to provide Dedal with a high level of formalism. Our first objective is to formalize the relationships between components at different abstraction levels as mentioned in [2] (realization, instantiation, implementation, matching, . . . ). We will then give a formal definition of architecture consistency and completeness according to the Dedal model. This work will help us automate analysis activities for the whole development process.
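As an illustration of the kind of inter-level constraint to be formalized, the sketch below checks one plausible completeness rule: every role of the specification must be realized by at least one class selected in the configuration. The rule and all names are our own simplification, not Dedal's actual definitions.

```python
# Hypothetical completeness check between two description levels
# (simplified rule, NOT the actual Dedal formalization).

def is_complete(specification, configuration, realizes):
    """specification: set of role names; configuration: set of class names;
    realizes: set of (class_name, role_name) realization pairs."""
    realized_roles = {role for cls, role in realizes if cls in configuration}
    return specification <= realized_roles

spec = {"Logger", "Store"}
conf = {"FileLogger", "SqlStore"}
links = {("FileLogger", "Logger"), ("SqlStore", "Store")}
complete = is_complete(spec, conf, links)                # both roles realized
incomplete = is_complete(spec, {"FileLogger"}, links)    # "Store" unrealized
```

A symmetric rule in the other direction (every configuration class realizes some role) would similarly flag architecture drift between the two levels.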


3.4 Providing an environment and tool support for Dedal

Although there is already an environment to model, check and evolve architectures using Dedal in an ad hoc manner, we aim to provide a new environment that is more systematic and easily extensible. We plan to integrate our tool into the Eclipse environment.

4 Conclusion and perspectives

This paper first gives an overview of architectural modeling, software evolution and the Dedal model. It then lists the main objectives of our research, which consist in classifying existing ADLs and formalizing the concepts of the Dedal model. Future work will focus on dynamic software evolution, especially in pervasive environments.

References

1. Taylor, R., Medvidovic, N., Dashofy, E.: Software Architecture: Foundations, Theory, and Practice. Wiley (Jan. 2009)

2. Zhang, H.Y., Zhang, L., Urtado, C., Vauttier, S., Huchard, M.: A three-level component model in component-based software development. In: GPCE. (2012) 70–79

3. Zhang, H.: A multi-level architecture description language for forward and reverse evolution of component-based software. PhD thesis, Montpellier II University (2010)

4. Plasil, F., Balek, D., Janecek, R.: SOFA/DCUP: Architecture for component trading and dynamic updating. In: CDS. (1998) 43–51

5. Zhang, H.Y., Urtado, C., Vauttier, S.: Architecture-centric development and evolution processes for component-based software. In: SEKE. (2010) 680–685

6. Zhang, H.Y., Urtado, C., Vauttier, S., Zhang, L., Huchard, M., Coulette, B.: Dedal-CDL: Modeling first-class architectural changes in Dedal. In: WICSA/ECSA. (2012) 272–276

7. Zhang, H.Y., Urtado, C., Vauttier, S.: Connector-driven process for the gradual evolution of component-based software. In: Australian Software Engineering Conference. (2009) 246–255

8. Medvidovic, N., Rosenblum, D.S., Taylor, R.N.: A language and environment for architecture-based software development and evolution. In: ICSE. (1999) 44–53

9. Allen, R., Douence, R., Garlan, D.: Specifying and analyzing dynamic software architectures. In: FASE. (1998) 21–37

10. Bures, T., Hnetynka, P., Plasil, F.: SOFA 2.0: Balancing advanced features in a hierarchical component model. In: SERA. (2006) 40–48


A unified algorithm for the optimisation of the vehicle routing problem with variable traveling times

Suzy Polka1, Michel Vasquez1, Yannick Vimont1 and Vassilissa Lehoux2

1 LGI2P, Ecole des mines d'Ales, Nîmes, France
{suzy.polka, michel.vasquez, yannick.vimont}@mines-ales.fr

2 Dpt R&D, Groupe Geoconcept, Grenoble, [email protected]

Abstract. Tackling the vehicle routing problem with time-dependent costs on large-scale instances using a unified algorithm is the aim of this PhD thesis. Owing to the fact that Google is developing open-source tools for operations research, available under the Apache License 2.0, we want to try and use them in our solution. First, we produced a concise state of the art on the Vehicle Routing Problem, especially on the work which has been done on the MAVRP, LSVRP and TDVRP. Then, we studied Google's Operations Research Tools (or-tools). Finally, we implemented simple heuristics and compared them with heuristics using Google's solver. We tested our heuristics on Christofides's CVRP instances and Cordeau's MDVRP instances.

Keywords: operations research, vehicle routing problem, unified algorithm,large scale, multi-attribute, time-dependent, or-tools

1 Introduction

The Vehicle Routing Problem aims at allocating visits among available vehicles and at determining the best customer order for each trip, so as to minimize the total number of used vehicles and the total travel time or distance. Easy to state, this problem is in fact NP-hard (Non-deterministic Polynomial-time hard) since it mixes three categories of NP-hard problems: packing, routing and scheduling. Over the years, several variants of the problem appeared in order to model real-world needs in the transportation domain (VRPTW, MDVRP, LSVRP...). Through this PhD thesis (started in October 2012), we want to develop a unified algorithm for the time-dependent multi-attribute vehicle routing problem, since the quality of the solutions returned by the best heuristics severely suffers when time-dependent events such as congestion or accidents occur.

Moreover, the PhD thesis is funded by a company, which needs a solver able to tackle a wide range of VRP variants in a reasonable amount of


time. This context leads us to seek not the best solutions but rather good feasible solutions on a maximal number of VRP variants. Maintenance constraints also encourage us to develop a unified algorithm, which is easier to maintain than several different well-tuned algorithms.

Several libraries (ParadisEO, CHOCO, etc.) provide a basis for heuristic implementations and spare their users from having to code everything from scratch. Google has developed its own operations research tools, including a routing library, and gathered them together in or-tools. They can be included in commercial software without fees or code disclosure clauses. We have hence decided to see whether we could use some of their features in our algorithms.

In the following sections, we first present a concise state of the art on the VRP and on the variants we are interested in. Then, we present Google's operations research tools. Last, we describe simple heuristics as well as heuristics using Google's operations research tools, and compare them on academic benchmarks.

2 State-of-the-art

The Vehicle Routing Problem was first introduced in 1959 by Dantzig and Ramser [1]. Nowadays, several thousand papers have been written on the subject, which makes it difficult to sort the articles by relevance. In the first part of the PhD thesis, we classify the Vehicle Routing Problem literature into three categories: general literature, specific literature and personal literature. The general literature on the VRP consists of surveys which either describe the different variants of the VRP and how authors tackle them, or describe different heuristics for a specific VRP variant ([2]).

On the contrary, the specific literature gathers articles aiming to explain precisely how a specific heuristic solves, more or less efficiently, a variant of the VRP, as in [4]. The VRP literature is largely composed of such articles.

The last category is the personal literature: it includes the articles closely related to the thesis subject, i.e. literature on the time-dependent VRP (TDVRP, [3], [4]), the multi-attribute VRP (MAVRP) and the large-scale VRP (LSVRP), in decreasing order of interest.

3 Google’s OR-tools

In 2010, Google began to develop operations research tools, including a routing library. They are still working on it and regularly improve the solver. The


work is done in a user-friendly way and the whole project is well documented online in the provided user's manual [5]. Nevertheless, this documentation is not finished yet.

From a technical point of view, the project was first developed in C++, but by means of SWIG it is currently also available in other well-known languages like Java, Python and .NET. Notice that the project is portable across Windows, Unix and Mac systems. Or-tools contains a collection of operations research algorithms able to solve famous academic problems such as the knapsack problem and graph problems (shortest path, min-cost or max-flow, linear sum assignment). It provides a Constraint Programming solver and also includes third-party solvers such as GLPK, CLP, CBC, SCIP and Sulum for linear or mixed-integer programming. As Google uses its own tools for internal purposes, they are constantly tested.

Google also provides a routing library to model and solve routing problems such as the traveling salesman problem (TSP), vehicle routing problems (VRP) and arc routing problems (ARP). The routing model enables users to express classical general constraints (time windows, heterogeneous fleets of vehicles, multiple depots). In order to model complex vehicle routing problems, one can either add new constraints thanks to the existing variable handlers or use the routing model as a starting point to create one's own model.

4 First heuristics

In the first part of this PhD thesis, we implemented heuristics for the Capacitated Vehicle Routing Problem (CVRP) and the Multi-Depot Vehicle Routing Problem (MDVRP). We are currently working on a time-dependent heuristic for the vehicle routing problem (TDVRP). We created our own data model to represent the vehicle routing problem and implemented an insertion heuristic for the CVRP on Christofides's instances. To compare it with or-tools heuristics, we included the Google routing and constraint solver classes in our source code. On Christofides's instances (between 50 and 199 customers, without time windows), results show that or-tools finds better solutions than our insertion heuristic in terms of solution quality: results are on average 28% better. In terms of speed, or-tools requires more computational time: our heuristic runs in less than 3 seconds on every instance, whereas the or-tools heuristic takes 3 seconds at best and 7 minutes in the worst case (vrpn5). Nevertheless, or-tools finds solution costs which are relatively close to the best known solutions on several Christofides's instances.


Notice that or-tools was unable to provide a feasible solution for one instance (vrpnc2).
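As an illustration of the kind of baseline discussed above, here is a minimal cheapest-insertion sketch for the CVRP; it is our own simplified version, not the authors' actual implementation.

```python
# Minimal cheapest-insertion heuristic for the CVRP (illustrative sketch,
# not the authors' implementation).
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cheapest_insertion(depot, customers, demands, capacity):
    """Insert each customer at the position of least extra travel cost,
    opening a new route whenever no feasible insertion exists."""
    routes = []                                   # list of (stop list, load)
    for c in sorted(customers, key=lambda c: dist(depot, c)):
        best = None                               # (extra_cost, route_idx, pos)
        for ri, (route, load) in enumerate(routes):
            if load + demands[c] > capacity:
                continue                          # capacity would be exceeded
            stops = [depot] + route + [depot]
            for pos in range(len(stops) - 1):
                extra = (dist(stops[pos], c) + dist(c, stops[pos + 1])
                         - dist(stops[pos], stops[pos + 1]))
                if best is None or extra < best[0]:
                    best = (extra, ri, pos)
        if best is None:                          # open a new route
            routes.append(([c], demands[c]))
        else:
            _, ri, pos = best
            route, load = routes[ri]
            routes[ri] = (route[:pos] + [c] + route[pos:], load + demands[c])
    return [route for route, _ in routes]

# Usage: three toy customers, vehicle capacity 2.
depot = (0, 0)
customers = [(1, 0), (2, 0), (0, 3)]
demands = {(1, 0): 1, (2, 0): 1, (0, 3): 2}
routes = cheapest_insertion(depot, customers, demands, capacity=2)
```

Such a greedy construction heuristic is fast but myopic, which is consistent with the quality gap against the or-tools solutions reported above.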

We also used or-tools's Hungarian optimizer in a matching heuristic on 23 of Cordeau's multi-depot instances. The Hungarian algorithm works well, but no comparison has been made yet. The results of the matching heuristic on MDVRP instances are unfortunately not so good, and we will try to improve them in the forthcoming months.

Finally, we began to implement a time-dependent heuristic based on Hashimoto's paper [4]. It still has to be tested on time-dependent instances. Hashimoto used Solomon's 56 academic instances and modified them in the same way as Ichoua et al. in [3]. Ichoua et al. introduced three different time-dependent scenarios. Each scenario is associated with a 3x3 speed matrix, where a row corresponds to a category of arcs (categories differ in the allowed speed) and a column corresponds to a time period (morning, midday, evening). Nevertheless, they did not give their choices for the time periods and for the distribution of arcs between the categories, hence a comparison will be difficult to make.
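The period-dependent speeds can be sketched as a piecewise-constant travel-time computation in the spirit of Ichoua et al. [3]; the period boundaries, arc category and speed values below are invented for illustration, since the original parameter choices are precisely what is missing.

```python
# Travel time with piecewise-constant, period-dependent speeds, in the
# spirit of Ichoua et al. [3]. Periods and speeds are illustrative only.

PERIODS = [(0, 240), (240, 480), (480, 720)]   # morning, midday, evening (min)
SPEEDS = {0: [1.0, 0.5, 1.0]}                  # speed per period, arc category 0

def travel_time(category, distance, departure):
    """Advance through the periods, covering as much distance as the
    current period's speed allows before the period ends."""
    t, remaining = departure, distance
    for (start, end), v in zip(PERIODS, SPEEDS[category]):
        if t >= end:
            continue                            # period already over
        coverable = v * (end - t)               # distance before period ends
        if remaining <= coverable:
            return t + remaining / v - departure
        remaining -= coverable
        t = end
    return t + remaining / SPEEDS[category][-1] - departure  # past last period

# Departing at t=200 on a 100-unit arc: 40 units in the fast morning period,
# then 60 units at half speed, for 160 minutes in total.
```

Crossing period boundaries inside one arc traversal is what makes this model FIFO-consistent (leaving later never means arriving earlier), unlike simply scaling the whole arc by the speed of the departure period.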

5 Conclusion

This PhD started with a bibliography study: we tried to decide which of the numerous available articles would be relevant for this work. Then we began to learn to use the operations research tools developed by Google. Afterwards, we implemented simple insertion heuristics and compared them with more complex heuristics based on Google's heuristics; comparisons were made on Christofides's instances. In the months to come, we want to implement the iterated local search (ILS) of Hashimoto [4] using Google's local search operators. This heuristic tackles the time-dependent vehicle routing problem and provides quite good results, though a comparison with other time-dependent heuristics is indeed difficult due to missing data on the parameter choices. We believe this heuristic can be an inspiration for our future work.

References

1. G. B. Dantzig, J. H. Ramser. The Truck Dispatching Problem. Management Science 6, pp. 80-91, 1959.

2. G. Laporte. Fifty Years of Vehicle Routing. Transportation Science 43(4), pp. 408-416, 2009.

3. S. Ichoua, M. Gendreau, J.-Y. Potvin. Vehicle dispatching with time-dependent travel times. European Journal of Operational Research 144(2), pp. 379-396, 2003.

4. H. Hashimoto, M. Yagiura, and T. Ibaraki. An iterated local search algorithm for the time-dependent vehicle routing problem with time windows. Discrete Optimization 5(2), pp. 434-456, 2008.

5. N. van Omme, L. Perron and V. Furnon. or-tools user's manual, Google, 2013.


GESTURE RECOGNITION AND OBJECT DETECTION: A SMALL REPORT

Darshan Venkatrayappa, Philippe Montesinos, Daniel Depp

Ecole des Mines d'Ales, LGI2P, Parc Scientifique Georges Besse, 30035 Nimes Cedex, France

{Darshan.Venkatrayappa,Philippe.Montesinos,Daniel.Depp}@mines-ales.fr

Abstract. Gesture recognition and object detection are important topics in the field of computer vision. Our work mainly focuses on gesture recognition for elderly and disabled persons. We intend to reconstruct the 3D scene using several cameras. With the help of this 3D model, we detect and recognise the persons or objects in the scene. We also aim to characterize the status or gestures of objects and persons and their location in the 3D environment, and to track an object or a person's gesture in the 3D scene. Our approach makes use of points of interest, edges/segments and regions/contours to recognise the objects and persons in the scene. To achieve the required video rate, we use GPU programming and parallelization.

Keywords: 3D scene reconstruction, Gesture recognition, object detec-tion, object tracking, GPU programming.

1 Introduction

Gesture recognition and object detection are basic requirements in many applications related to video surveillance, robotics, interactive video games, security, telepresence, text editing, healthcare, robot control, car system control, virtual reality and multimedia interfaces. Gesture recognition is used in sign language recognition, lie detection, visual environment manipulation, etc.

As human beings get older, there is an increased need to maintain them at home. These elderly people may face difficulties in performing their day-to-day tasks. For example, weak eyesight may hinder their ability to detect and localize objects and human beings in a room, or people suffering from memory-related diseases such as Alzheimer's may not recollect where they have placed objects. Object detection and the analysis of human gestures based on computer vision is one of the most promising approaches and gives many interesting solutions to these problems. In the next section we review the literature in this field; in the third section we briefly describe our approach to human detection. Finally, we conclude this report in the fourth section.


2 Literature Review

A quick glance into the gesture recognition literature reveals two main approaches to gesture modelling: 3D-model-based and appearance-based. The main goal of these approaches is to analyse the gesture and to estimate the parameters of the gesture-related model using measurements from images and videos. This task is generally done by feature detection followed by parameter estimation and recognition [1].

2.1 Gesture modelling

3D-model-based gesture recognition can be split into two types: volumetric models and skeletal models. Volumetric models are mainly used to describe the 3D visual appearance of human body parts such as arms and hands [1]. Initially these techniques were mainly used in the field of computer animation, but their use has recently been extended to video games and, of course, to the problem of maintaining old people at home. In [2], an analysis-by-synthesis approach is defined in which the body posture is analysed by synthesising a 3D model of the human body and then varying the model parameters until the synthesized body and the image appear the same. The main problem with such approaches is that they are generally computationally slow. However, this can be overcome by approximating the body parts using cylinders, spheres, ellipsoids, rectangles, etc., as in [3], [4]. These simple geometric shapes can be connected to obtain the complete 3D shape.

Appearance models based on visual images are often used to describe gestural actions, and many appearance-based models have been proposed in the literature. In appearance models, the parameters are not directly derived from the 3D spatial description of the human body; instead, gestures are modeled by relating them to a set of predefined template gestures [1]. The simplest form of appearance model is based on deformable 2D templates of human hands, arms or body [6]. Deformable 2D templates can be defined as sets of points on the outline of an object, used as interpolation nodes for the object approximation [1]. In another approach [5], gestural actions are modelled by motion history images (MHI). These 2D images are formed by accumulating the motion of every single pixel in the visual image over some temporal window. This way, the intensity of a pixel in the MHI relates to how much prolonged motion is observed at that pixel.
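The MHI accumulation just described can be sketched in a few lines, with grayscale frames as nested lists; the decay constant and threshold are arbitrary illustrative values.

```python
# Motion history image (MHI) sketch: moving pixels are set to the maximal
# value tau, static pixels decay, so intensity encodes recency of motion.
# tau and threshold are arbitrary illustrative values.

def update_mhi(mhi, prev_frame, frame, tau=5, threshold=10):
    h, w = len(frame), len(frame[0])
    return [[tau if abs(frame[y][x] - prev_frame[y][x]) > threshold
             else max(mhi[y][x] - 1, 0)
             for x in range(w)] for y in range(h)]

# Usage: one pixel changes between two 2x2 grayscale frames; that pixel
# takes the value tau while all static pixels stay at (or decay toward) zero.
prev_frame = [[0, 0], [0, 0]]
frame = [[0, 50], [0, 0]]
mhi = update_mhi([[0, 0], [0, 0]], prev_frame, frame)
```

Calling `update_mhi` once per frame yields the temporal window behaviour: a pixel that stopped moving `k` frames ago holds the value `tau - k` until it fades out.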

2.2 Gesture Analysis

Gesture analysis has three stages. In the feature detection stage, the first objective is to localize the person; next, the desired set of features can be detected. To localize the person, color cues and motion cues are of great help. Colour segmentation techniques depend on histogram matching [7] or rely on a simple look-up


table approach based on training data for skin [8]. The variability of skin colour under different lighting conditions is a major drawback of this technique. This drawback can be overcome to some extent by using restrictive backgrounds and clothing. Some methods have also made use of markers and coloured gloves [9]. Motion cues are used along with certain assumptions about the person [10], such as that only one person gestures at a time or that the background is stationary; if the assumption does not hold, this approach fails. Several approaches make use of a combination of color, motion and other visual cues [11], and [12] makes use of visual cues along with non-visual cues such as speech or gaze. The features used in all types of models are almost similar. Silhouettes are among the simplest, yet most frequently used features and can be used in both 3D [13] and appearance-based models [14]. Contours make up another group of commonly used features [15]. Another commonly used feature is the fingertip, used in both 3D and appearance models. In case of occlusions, multiple cameras can be used [16].

2.3 Parameter Estimation

The parameters used in the 3D case can be joint angles, lengths and other dimensions. Estimation of 3D parameter models involves two steps: first, the initial parameter estimation, and secondly, updating these parameters as the model evolves in time. All the 3D models described above assume that the parameters are known a priori. In [17], the authors use two stages for initial parameter estimation, concentrating on hand gesture recognition: the first phase deals with the initial wrist positioning while the second phase deals with palm and finger adjustment. This approach is accurate but complex. Some simpler solutions involve user-interactive model parameter initialization [18]. The parameter update stage makes use of prediction schemes such as Kalman filtering or particle filtering. In the case of appearance-based models, the simplest parameters are sets of key visual frames, as in [19]. The eigen-decomposition representation of the visual images in the sequence with respect to an average image [20] is another approach. In appearance models using silhouettes, model parameters attempt to capture a description of the body shape; commonly employed techniques use geometric moments [21]. Many other shape descriptors, such as orientation histograms, have been tested.
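As a reminder of what the Kalman-filter update stage computes, here is a minimal scalar predict/update cycle for tracking a single model parameter; it is a toy constant-state model with invented noise values, not any of the cited systems.

```python
# Minimal scalar Kalman filter sketch for the predict/update cycle used
# to track one model parameter (e.g. a joint angle) over time.
# Noise variances q and r are illustrative values.

def kalman_step(x, p, z, q=0.01, r=0.1):
    """x, p: prior estimate and its variance; z: new measurement;
    q, r: process and measurement noise variances."""
    # predict: the state is assumed constant, so only uncertainty grows
    p = p + q
    # update: blend prediction and measurement using the Kalman gain
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

# Usage: repeated noisy measurements near 1.0 pull the estimate toward
# 1.0 while its variance shrinks.
x, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95]:
    x, p = kalman_step(x, p, z)
```

Real systems replace the constant-state assumption with a motion model (and particle filters with a non-Gaussian posterior), but the predict/update structure is the same.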

2.4 Gesture Recognition

In this stage, the data analysed from the visual images of gestures is recognised as a specific gesture. Two tasks are commonly associated with the recognition process: optimal partition of the parameter space and the recognition procedure [1]. Some methods use vector quantization for parameter clustering. The problem of accurately recognizing postures whose model parameters cluster in non-convex sets can also be solved by selecting non-linear clustering schemes; neural networks are one such option [22]. Different approaches have been proposed for


gesture recognition, including approaches based on hidden Markov models and on motion history images.

3 Human Detection

In [23], we proposed a new contour-based approach for people detection in static internet images. This approach depends in large part on perceptual organization principles, at both the low edge-detection level and the higher geometrical level. Our method is made up of two stages: contour [24] and skin [25] detection, then geometrical analysis. The contour detection stage uses a perceptual edge detector to detect object boundaries. Once the edges are detected, the geometrical analysis stage uses these edges and skin information to form elementary groups called boxes and then chains these boxes for arm and leg detection. Finally, the torso search is achieved using the knowledge of arm and leg positions. We are currently working on the adaptation of this method to video processing. In this case, skin color may not be available, but we can rely on motion detection. The implementation makes use of GPU and parallel programming.

4 Conclusion

In this report, we have seen the different approaches used in human detection and gesture recognition. Each method described above has its own advantages and disadvantages, which will be taken into account in our future work. We have briefly presented our method for human detection in video. We intend to use this method to develop our own gesture recognition algorithm in the future. All these methods are just a small part of the vast literature in the field of gesture recognition.

References

1. Vladimir I. Pavlovic, Rajeev Sharma and Thomas S. Huang : Visual Interpretation of Hand Gestures for Human-Computer Interaction : A Review. In IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 677-695, (1997).

2. Reinhard Koch : Dynamic 3D Scene Analysis through Synthesis Feedback Control. In IEEE Transactions on Pattern Analysis and Machine Intelligence, (1993).

3. Azarbayejani, A., Wren, C. and Pentland, A. : Real-Time 3D Tracking of the Human Body. In Proc. IMAGE'COM 96, Bordeaux, France, (1998).

4. Clergue, E., Goldberg, M., Madrane, N. and Merialdo, B. : Automatic Face and Gestural Recognition for Video Indexing. In Proc. Intl. Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland, pp. 110-115, June (1995).

5. Bobick, A. F. and Davis, J. W. : Real-Time Recognition of Activity Using Temporal Templates. In Proc. Intl. Conf. Automatic Face and Gesture Recognition, Killington, Vt., Oct. (1996).

6. Cipolla, R. and Hollinghurst, N. J. : Human-Robot Interface by Pointing With Uncalibrated Stereo Vision. Image and Vision Computing, vol. 14, pp. 171-178, Mar. (1996).

7. Ahmad, S. : A Usable Real-Time 3D Hand Tracker. IEEE Asilomar Conf., (1994).

8. Kjeldsen, R. and Kender, J. : Finding Skin in Color Images. In Proc. Intl. Conf. Automatic Face and Gesture Recognition, Killington, Vt., pp. 312-317, Oct. (1996).

9. Cipolla, R., Okamoto, Y. and Kuno, Y. : Robust Structure From Motion Using Motion Parallax. In Proc. IEEE Intl. Conf. Computer Vision, pp. 374-382, (1993).

10. Quek, F. K. H. : Eyes in the Interface. Image and Vision Computing, vol. 13, Aug. (1995).

11. Azoz, Y., Devi, L. and Sharma, R. : Vision-Based Human Arm Tracking for Gesture Analysis Using Multimodal Constraint Fusion. In Proc. 1997 Advanced Display Federated Laboratory Symp., Adelphi, Md., Jan. (1997).

12. Sharma, R., Huang, T. S. and Pavlovic, V. I. : A Multimodal Framework for Interacting With Virtual Environments. In Human Interaction With Complex Systems, C.A. Ntuen and E.H. Park, eds., pp. 53-71, Kluwer Academic Publishers, (1996).

13. Kuch, J. J. and Huang, T. S. : Vision-Based Hand Modeling and Tracking. In Proc. IEEE Intl. Conf. Computer Vision, Cambridge, Mass., June (1995).

14. Krueger, M. W. : Environmental Technology : Making the Real World Virtual. In Comm. ACM, vol. 36, pp. 36-37, July (1993).

15. Downton, A. C. and Drouet, H. : Image Analysis for Model-Based Sign Language Coding. In Progress in Image Analysis and Processing II, Proc. Sixth Intl. Conf. Image Analysis and Processing, pp. 637-644, (1991).

16. Lee, J. and Kunii, T. L. : Model-Based Analysis of Hand Posture. IEEE Computer Graphics and Applications, pp. 77-86, Sept. (1995).

17. Lee, J. and Kunii, T. L. : Constraint-Based Hand Animation. Models and Techniques in Computer Animation, pp. 110-127, Tokyo, Springer-Verlag, (1993).

18. Kuch, J. J. and Huang, T. S. : Vision-Based Hand Modeling and Tracking. In Proc. IEEE Intl. Conf. Computer Vision, Cambridge, Mass., June (1995).

19. Darrell, T., Essa, I. and Pentland, A. : Task-Specific Gesture Analysis in Real-Time Using Interpolated Views. In IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 12, pp. 1236-1242, Dec. (1996).

20. Wilson, A. D. and Bobick, A. F. : Recovering the Temporal Structure of Natural Gestures. In Proc. Intl. Conf. Automatic Face and Gesture Recognition, Killington, Vt., pp. 66-71, Oct. (1996).

21. Brockl-Fox, U. : Real-Time 3D Interaction With Up to 16 Degrees of Freedom From Monocular Image Flows. In Proc. Intl. Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland, pp. 172-178, June (1995).

22. Kjeldsen, R. and Kender, J. : Visual Hand Gesture Recognition for Window System Control. In Proc. Intl. Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland, pp. 184-188, June (1995).

23. Montesinos, P., Claude-Amat, G. : Finding People in Internet Images. IEEE International Symposium on Multimedia ISM 2009, Dec. 2009, San Diego, USA. Ref: ACT/I09-24.

24. Montesinos, P., Magnier, B. : A New Perceptual Edge Detector in Color Images. Advanced Concepts for Intelligent Vision Systems, ACIVS 2010, Dec. 13-16, 2010, Macquarie University, Sydney, Australia.

25. Sidibe, D., Montesinos, P., Janaqi, S. : A Simple and Efficient Eye Detection Method in Color Images. Image and Vision Computing IVCNZ 2006, Great Barrier Island, New Zealand, November 27-29, 2006.


RIDER* project: Research for IT Driven EneRgy efficiency based on a multidimensional comfort control

Afef Denguir1,2, François Trousset1, Jacky Montmain1

1 LGI2P, EMA, Nîmes, France 2 LIRMM, Université Montpellier 2, Montpellier, France

<first>.<last>@mines-ales.fr

1 Introduction

According to the European Parliament guidelines [1], buildings are the largest energy consumers, using 40% of the total energy intake in Europe. They are also responsible for 36% of total CO2 gas emissions. This implies that improving buildings' energy performance may contribute significantly to energy conservation and environmental protection. Air-conditioning (heating and cooling) is the most energy-consuming task in buildings: it uses around 47% of a building's energy consumption [2], which is equivalent to 18% of total energy consumption, all sectors included. In fact, in 2011, more than 28 billion euros were paid in France to cover air-conditioning costs in the residential and tertiary sectors [2]. Hence, efficient air-conditioning could significantly reduce energy consumption in these sectors. This leads to enhancing thermal processes in buildings and neighborhoods by reducing energy consumption while maintaining a comfortable environment from the thermal and indoor-air-quality points of view.

Thermal processes can be improved at different levels, from energy providers to energy kit suppliers. Recently, with the rise of smart-home/building/city concepts, new approaches considering system characteristics (i.e. building location and orientation) and uses (i.e. a building's main use and its occupants' habits) have been developed in order to produce a new intermediate thermal-process enhancement layer. It helps the energy manager compute customized set-points considering the physical and operational characteristics of thermal processes. The RIDER (Research for IT Driven EneRgy efficiency) project has so far been working on this intermediate enhancement layer. RIDER supports quite challenging requirements dealing with the scalability of the solution (i.e. applicable at different thermal-process scales), the reusability of the enhancement techniques (i.e. one building should be able to apply another building's thermal enhancement routines), and the software's auto-bootstrap ability (i.e. the ability to start computations without requiring specific or abundant data). This work aims to develop thermal-process enhancement techniques satisfying the RIDER project requirements.

* FUI RIDER project: http://rider-project.com/


2 Previous works & Literature

Given the complexity of thermal processes (climate, thermodynamic properties of materials and constructions, technical and control performances of HVAC "Heating, Ventilation, and Air-Conditioning" equipment, human behavior, etc.), early works focused only on HVAC regulation enhancement [3]. Considering more thermal-process influence factors may lead to better energy performance. Indeed, given the low reactivity of thermal processes, anticipating these additional influence factors could significantly improve control quality. This justifies the number of recent works focusing, depending on model accuracy, on predictive or advanced control of thermal processes.

Predictive control consists in computing a control sequence that optimizes a criterion reflecting the objective of the control process over a time window. Applied to thermal processes [4], the predictive-control objective function optimizes socio-economic aspects such as minimizing energy consumption, maximizing thermal comfort, or ensuring a compromise between the two. Predictive control is based on a mathematical model of the thermal process: the more accurate the model, the more efficient the control. However, computing such mathematical models requires detailed data and settings on the thermal process and its context, as well as a costly expertise process, which has restricted their application.
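To make the principle concrete, here is a minimal receding-horizon sketch in Python. The first-order room model, its coefficients, the power levels and the cost weights are all hypothetical illustration values, not the models used in RIDER; the point is only the structure: enumerate candidate control sequences over a horizon, score each by an energy-plus-comfort criterion, and apply the first control of the best sequence.

```python
from itertools import product

def predict(T0, powers, T_ext, a=0.5, b=0.1):
    """Hypothetical first-order room model: indoor temperature after each step."""
    T, trajectory = T0, []
    for p in powers:
        T = T + a * p - b * (T - T_ext)   # heating input minus losses to outside
        trajectory.append(T)
    return trajectory

def mpc_step(T0, T_ext, T_set, horizon=3, levels=(0.0, 0.5, 1.0),
             w_energy=1.0, w_comfort=4.0):
    """Brute-force predictive control: minimize energy + comfort deviation."""
    best_cost, best_seq = float("inf"), None
    for seq in product(levels, repeat=horizon):
        traj = predict(T0, seq, T_ext)
        cost = (w_energy * sum(seq)
                + w_comfort * sum((T - T_set) ** 2 for T in traj))
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0]   # receding horizon: apply only the first control

u = mpc_step(T0=16.0, T_ext=5.0, T_set=20.0)
```

With the room well below the set-point, the comfort term dominates and full power is selected. Real predictive controllers replace the brute-force search with a proper optimizer, but the criterion-over-a-window structure is the same.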

Advanced control [5] is considered the new generation of process management and has been used for thermal-process applications for twenty years. It is built on Artificial Intelligence (AI) approaches and aims to provide simple, efficient and adaptive control without necessarily using a mathematical model of the system; learning techniques are used instead for system modeling. We distinguish two types of learned models: quantitative ones, computed through statistical [6] and AI techniques such as Artificial Neural Networks (ANNs) [7] and Support Vector Machines (SVMs) [8], and qualitative ones, usually computed from expert knowledge of the process and characterized by a fuzzy representation [9]. Quantitative models require input data for their training, usually collected from on-site measurements, surveys, and available documentation. A relevant sampling period is required, and data pre- and post-treatments are needed to reduce data noise. For thermal-process enhancement, these models are solved using fuzzy control or AI optimization approaches (i.e. heuristics), depending on the model formalism.

Learning quantitative models for thermal processes is obviously a complicated task requiring heavy computation. Indeed, in practice, mathematical models are difficult to compute because of their complexity and the lack of setting data. Statistical models are much easier to learn, but their low precision and flexibility make them unsuitable for thermal-process enhancement. ANN and SVM models allow more accurate predictions when the model is well chosen and the parameters are well learned; however, they require complex calculations and efficient (cleaned and filtered) training data. Since they are based on expert knowledge, qualitative models are less data-dependent than quantitative ones, but they are much less accurate and better suited to approximate control.


3 Contribution

In order to fulfil the RIDER project requirements, the thermal-process model should be as little data-dependent as possible. Yet all quantitative models need detailed settings or efficient training data beforehand. Qualitative modelling is mostly based on a priori knowledge about the process collected from experts and/or system behaviour analysis; however, it is less accurate than quantitative modelling and cannot guarantee enhancement for long (it rapidly becomes obsolete). To overcome these weaknesses, we propose for the RIDER project a new enhancement approach based on, first, aggregated-performance-based reasoning and, second, hybrid-model-based thermal-process enhancement. Figure 1 displays an overview of our solution.

Fig. 1. Overview of the RIDER project thermal process enhancement

In building thermal-process enhancement, aggregated-performance-based reasoning consists in focusing the enhancement process on the thermal-comfort concept rather than on ambient temperature. Because of the multidimensional nature of thermal comfort, the ambient-temperature set-point (known as the comfort temperature) may change and give way to a less costly one depending on the variations of the other parameters. To this end, first, a thermal-comfort preference model (utility theory, Choquet integral) [10] has been identified from the ISO 7730 [11] statistical model, which makes it easily generalized to different building occupants. Second, optimization routines and qualitative rules have been defined to, respectively, compute customized and less costly set-points depending on situations, and efficiently "regulate" the occupants' thermal-comfort sensation.
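The multi-criteria aggregation step can be illustrated with a generic discrete Choquet integral. The criteria names, partial satisfaction scores and capacity values below are hypothetical stand-ins, not the ones identified from ISO 7730; the sketch only shows how a capacity lets the aggregated comfort score account for interactions between criteria, which a plain weighted mean cannot.

```python
def choquet(utilities, capacity):
    """Discrete Choquet integral of partial satisfaction scores.
    utilities: dict criterion -> score in [0, 1]
    capacity: dict frozenset of criteria -> weight (monotone, full set -> 1.0)"""
    ranked = sorted(utilities.items(), key=lambda kv: kv[1])  # ascending scores
    remaining = set(utilities)
    total, prev = 0.0, 0.0
    for criterion, u in ranked:
        total += (u - prev) * capacity[frozenset(remaining)]
        prev = u
        remaining.discard(criterion)
    return total

# Hypothetical capacity over two comfort criteria; sub-additivity here models
# the fact that temperature and humidity partially redound on comfort.
CAPACITY = {
    frozenset(): 0.0,
    frozenset({"temperature"}): 0.7,
    frozenset({"humidity"}): 0.4,
    frozenset({"temperature", "humidity"}): 1.0,
}
comfort = choquet({"temperature": 0.6, "humidity": 0.8}, CAPACITY)
```

With these numbers the integral gives 0.6 x 1.0 + (0.8 - 0.6) x 0.4 = 0.68: the weaker criterion (temperature) pulls the overall comfort down more than a simple average would.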

Hybrid-model-based thermal-process enhancement consists in carrying out the appropriate system command in order to achieve the newly computed set-points. This hybrid approach has been chosen to overcome the weaknesses of quantitative and qualitative models when applied separately. Indeed, using a qualitative model is unavoidable, especially when no quantitative measurements are available. Nevertheless, at runtime, the cumulated quantitative measurements of the thermal process are collected online and used to compute more accurate solutions. In our hybrid approach, the qualitative model learns a priori qualitative enhancement rules by analyzing a simplified mathematical model rather than asking for expert knowledge. It has a fuzzy representation and models gradient variations on each enhancement criterion with respect to event occurrences. This model is coupled with a heuristic-based algorithm in order to deal with its fuzzy representation and ensure enhancement control using these rules. The quantitative model is computed
through online data acquisition and computes, using AI techniques, the most likely applicable known solution for a given control process. The solution is then brought back from the time scale to the event-occurrence scale, which enables approximate reasoning and ensures the connection between the quantitative and qualitative formalisms. Our solution thus allows the computation of optimized set-points and commands for a thermal process. It has been tested and evaluated on a building's thermal process.

4 Conclusion

This work proposes generic, minimally data-dependent optimization techniques for thermal processes based on, first, aggregated-performance reasoning (i.e. a comfort preference model) and, second, hybrid thermal-process modeling. Our experiments reveal an energy-consumption decrease of about 10% when applying aggregated-performance-based reasoning to a building's thermal-process enhancement, and an additional 6% decrease when using hybrid-model-based enhancement. It should be highlighted that these approaches, first, do not need any setting or training data to proceed with thermal-comfort enhancement and, second, become more efficient and accurate as the process gains experience.

References

1. European Parliament and Council: Directive 2012/27/EU of the European Parliament and of the Council of 27 October 2012 on the energy performance of buildings. Official Journal of the European Union, L315:1-56 (2012).
2. Pacific Northwest National Laboratory: 2011 Building Energy, D&R International (March 2012).
3. Landau, Y. D.: Adaptive Control. Springer-Verlag, New York (1998).
4. Ma, Y., Borrelli, F., Hencey, B., Packard, A., Bortoff, S.: Model predictive control of thermal energy storage in building cooling systems. In: Conference on Decision and Control and Chinese Control Conference, pp. 392-397 (2009).
5. Joint Center for Energy Management (JCEM): Final report: Artificial neural networks applied to LoanSTAR data. Technical Report TR/92/15 (1992).
6. Lei, F., Hu, P.: A baseline model for office building energy consumption in hot summer and cold winter region. In: Proceedings of the International Conference on Management and Service Science, vol. 6, pp. 1-4 (2009).
7. Kalogirou, S. A.: Artificial neural networks in energy applications in buildings. International Journal of Low-Carbon Technologies, vol. 3, pp. 201-216 (2006).
8. Li, Q., Meng, Q. L., Cai, J. J., Hiroshi, Y., Akashi, M.: Applying support vector machine to predict hourly cooling load in the building. Applied Energy, 86(10):2249-2256 (2009).
9. Terziyska, M., Todorov, Y., Petrov, M.: Adaptive supervisory tuning of nonlinear model predictive controller for a heat exchanger. In: Energy Saving Control in Plants and Buildings, pp. 93-98 (2006).
10. Labreuche, C.: Construction of a Choquet integral and the value functions without any commensurateness assumption in multi-criteria decision making. EUSFLAT-LFA, Aix-les-Bains, France (2011).
11. NF EN ISO 7730: Ergonomie des ambiances thermiques : Détermination analytique et interprétation du confort thermique à l'aide de calculs des indices PMV et PPD et du confort thermique local. AFNOR (2006).


Semantic Measures: From Theory to Applications

Sébastien Harispe, Sylvie Ranwez, Stefan Janaqi and Jacky Montmain

LGI2P, Ecole des Mines d'Alès, Parc Scientifique G. Besse, F-30035 Nîmes Cedex 1

<first>.<last>@mines-ales.fr

Abstract. Semantic measures are widely used to estimate the likeness of concepts or conceptualized resources (web pages, patient records) by analyzing their meaning. They are the cornerstones of information retrieval and recommendation algorithms based on ontologies. In our work, we study how structured knowledge such as ontologies and semantic graphs can be used to assess the semantic likeness of resources, focusing on both theoretical and practical aspects. This report briefly presents our main contributions on this topic.

Keywords: Semantic Measures, Ontologies, Semantic Graphs, Linked Data

1 Motivations & Contributions

Numerous ontologies formalize domain-specific knowledge to make it sharable and workable for both human experts and software agents. These ontologies, here reduced to semantic graphs structuring concepts through semantic relationships, are widely adopted to characterize various entities (e.g. patient records, scientific publications, genes) through non-ambiguous meaning. In addition, thanks to the growing adoption of the Linked Data paradigm, the Web is becoming a network of structured data interlinking large datasets from different sources.

To take advantage of this valuable knowledge, an increasing number of algorithms extensively rely on semantic measures to estimate the proximity of resources according to their conceptual characterization. Several communities are therefore involved in the study of semantic measures: Artificial Intelligence, Cognitive Science, Bioinformatics, etc.

Our work focuses on the theoretical study of semantic measures and on their applications for information retrieval and recommendation techniques taking advantage of ontologies and Linked Data. This paper briefly introduces some of the contributions we made to the field during the period 2011-2013.


1.1 A Unifying Theoretical Framework to Better Study and Characterize Semantic Similarity Measures

Due to their importance in several usage contexts and domain-specific applications, a large diversity of semantic measures has been designed over the last decades. This diversity of proposals is mainly due to context-specific ad hoc tuning. However, practitioners today encounter difficulties in selecting measures suited to a specific usage context, and measure designers fail to provide more relevant arguments than ad hoc empirical studies. This is partially due to the lack of an in-depth theoretical understanding of measures' key components. The communities involved in semantic measure design therefore require better theoretical tools to improve the characterization, understanding and analysis of semantic measures.

Based on a thorough analysis of state-of-the-art measures, recently enriched in collaboration with David Sanchez from the University of Tarragona, we proposed a theoretical framework which unifies commonly used approaches for designing semantic similarity measures, i.e. semantic measures evaluating only taxonomical resemblance. This framework distinguishes the core elements from which most semantic similarity measures are derived. It opens interesting perspectives for defining, studying and optimizing semantic measures. This work will be submitted to the Journal of Biomedical Informatics [1].
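As an illustration of the core elements such a framework isolates, here is a small sketch of one classical taxonomical similarity, Lin's information-content measure, on a toy is-a graph. The taxonomy and the information-content (IC) values are invented for the example; this is not the framework of [1], only an instance of the kind of measure it unifies (an IC function plus a common-ancestor selection strategy).

```python
def ancestors(taxonomy, c):
    """All superconcepts of c (inclusive) in an is-a DAG given as child -> parents."""
    seen, stack = set(), [c]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(taxonomy.get(n, []))
    return seen

def lin(c1, c2, taxonomy, ic):
    """Lin similarity: 2 * IC(MICA) / (IC(c1) + IC(c2)), where the MICA is the
    most informative common ancestor of the two concepts."""
    common = ancestors(taxonomy, c1) & ancestors(taxonomy, c2)
    mica_ic = max(ic[c] for c in common)
    return 2 * mica_ic / (ic[c1] + ic[c2])

# Toy taxonomy and hypothetical IC values (more specific = more informative).
TAX = {"cat": ["mammal"], "dog": ["mammal"], "mammal": ["animal"], "animal": []}
IC = {"animal": 0.0, "mammal": 1.2, "cat": 2.5, "dog": 2.4}
sim = lin("cat", "dog", TAX, IC)
```

Swapping the IC function, the ancestor-selection strategy, or the final normalization yields other well-known measures, which is precisely the kind of decomposition a unifying framework makes explicit.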

1.2 A Generic Software to Compute and Empirically Analyze Semantic Measures

Most semantic measures have been proposed in specific usage contexts. Therefore, several domain-specific software tools have been developed to compute semantic-measure scores for specific ontologies or semantic graphs. Most of them are highly limited, as they can only be used in very specific contexts of use and therefore only target a highly specific community. Indeed, semantic measure users and designers lack an extensive software tool federating the various communities.

To bridge this gap, we developed and maintain the Semantic Measures Library, a generic open-source Java library dedicated to the computation and analysis of semantic measures. The project aims at providing both an efficient library for developers and a command-line toolkit for users. Downloads and documentation are available at the dedicated website: http://www.semantic-measures-library.org/. The publication of this work is in preparation; the library has been extensively used to support our research projects [1-3].

1.3 Semantic Measures Taking Advantage of Multiple Knowledge Bases

Most measures have been designed to compare pairs of concepts defined in a single ontology. However, a growing number of knowledge modelers and curators mix multiple ontologies to characterize domain-specific resources, e.g. patient medical records. There is therefore a need to extend classical models of semantic measures to accurately estimate the likeness of concepts defined in multiple ontologies.


In collaboration with Montserrat Batet, David Sanchez (both from the University of Tarragona) and Vincent Ranwez (Montpellier SupAgro), we designed a new approach to estimate the semantic similarity of concepts taking advantage of multiple knowledge bases. We empirically demonstrated that our proposal improves on the performance of state-of-the-art measures. This work has been submitted to the journal Information Sciences [2].

1.4 Semantic Relatedness in RDF Graphs: Application to Music Recommendation based on Linked Data

Linked Data enables resource characterization through an RDF graph, in which nodes are resources (e.g. music bands such as the Rolling Stones) and edges define the relationships established between them. Such a graph model interconnects resources and is therefore particularly suited to design content-based recommendation systems, i.e. recommendation systems based on the properties of the recommended items. Once more, semantic similarity measures are of particular interest to estimate the relatedness of two resources characterized in an RDF graph. However, existing approaches only considered particular properties of instances. To overcome this limitation, we proposed a new framework to express semantic measures better exploiting the various properties characterizing instances in an RDF graph. We applied the proposed approach to design a content-based recommender system offering music band recommendations given a particular band of interest. A demonstrator is available at: http://146.19.252.227/rc_music/. This work will be presented at the Ingénierie des Connaissances 2013 conference [3]. An extended version has also been submitted to the conference on Ontologies, DataBases, and Applications of Semantics for Large Scale Information Systems (ODBASE'13).
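A very small sketch of the underlying idea follows. It compares two resources by the overlap of their outgoing property-value pairs in a toy triple set; the Jaccard overlap and the data are invented for illustration and are not the projection-based measures of [3], which exploit the graph structure more finely.

```python
def features(graph, resource):
    """Outgoing (property, object) pairs describing a resource in an RDF graph."""
    return {(p, o) for s, p, o in graph if s == resource}

def relatedness(graph, r1, r2):
    """Jaccard overlap of the property-value descriptions of two resources."""
    f1, f2 = features(graph, r1), features(graph, r2)
    union = f1 | f2
    return len(f1 & f2) / len(union) if union else 0.0

# Toy RDF graph as (subject, property, object) triples; hypothetical data.
G = [
    ("Beatles", "genre", "Rock"),
    ("Beatles", "origin", "UK"),
    ("Beatles", "activeDecade", "1960s"),
    ("RollingStones", "genre", "Rock"),
    ("RollingStones", "origin", "UK"),
    ("Ramones", "genre", "Punk"),
    ("Ramones", "origin", "US"),
]
score = relatedness(G, "Beatles", "RollingStones")
```

A recommender built on this principle would rank candidate bands by their relatedness to the band of interest; here the Rolling Stones score 2/3 against the Beatles while the Ramones score 0.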

References

1. Harispe, S., Sánchez, D., Ranwez, S., Janaqi, S., Montmain, J.: A Framework for Unifying Ontology-based Semantic Similarity Measures: a Study in the Biomedical Domain. Journal of Biomedical Informatics. Submitted, (2013).

2. Batet, M., Harispe, S., Ranwez, S., Sánchez, D., Ranwez, V.: An information theoretic approach to improve the semantic similarity assessment across multiple ontologies. Information Sciences. Submitted, (2013).

3. Harispe, S., Ranwez, S., Janaqi, S., Montmain, J.: Mesures sémantiques basées sur la notion de projection RDF pour les systèmes de recommandation. Ingénierie des Connaissances (IC 2013). In press (2013).


Evaluation in Systems Engineering: application to mechatronic design

Mambaye Lo, Pierre Couturier, Vincent Chapurlat

LGI2P, Ecole des Mines d’Alès Site EERIE, Parc scientifique G. Besse, F-30035, Nîmes, France

[email protected]

Abstract. During the lifecycle of a complex system, design constitutes a preliminary definition phase. In a context of increasing complexity and interdisciplinarity, a clear and justified definition is required for the system's elaboration. Facing this challenge, two objectives must be achieved: respecting customers' needs and respecting the technical constraints identified by technical teams. The work presented here addresses the evaluation of complex system design. A methodological and tooled framework is provided in order to answer the evaluation problems arising in the design of complex systems. Among these problems, we identify the lack of a unified vision of evaluation, the uncertainty inherent to preliminary design, and the multiple, potentially contradictory, objectives.

1 Introduction

Design is defined as an iterative process, present all along the engineering of a given system, which coordinates synthesis, analysis and evaluation activities (Blanchard et Fabrycky 2011). In order to carry out these activities, design is broken down into the following phases: requirement definition, conceptual design, embodiment design and detailed design. From conceptual to detailed design, many choices have to be made which are decisive for the elaboration of the future product. This implies the need for structured and correct justifications in order to support such design choices.

2 Evaluation in design

Evaluation in design is in charge of justifying design choices through effectiveness analysis, cost and risk analysis, and trade-off studies of alternative design solutions. However, the implementation of design evaluation faces many difficulties, including those we treat in this research: the lack of a consensual and unified vision of evaluation from the multi-expertise point of view, the lack of an integrated environment for modeling systems and carrying out solution analyses and evaluation, the estimation of design choices considering the uncertainty specific to design, and the multiple, potentially contradictory, objectives.


A state of the art allowed us to overview design theories and methodologies (DTMs) (Tomiyama 2006). We chose to work within the Systems Engineering (SE) framework, a structural framework for designing a System Of Interest (SOI) in an iterative and recursive way (Faisandier 2005a). Furthermore, this framework defines four technical processes (needs definition, requirement analysis, logical architecture definition and physical architecture definition) and two support processes: the verification and validation process and the evaluation/optimization support process.

By extending the SE metamodel, we introduce the data necessary for evaluation and propose a metamodel of the data necessary for evaluation (MMDE) in SE (Lô et Couturier 2012), based on Model Based Systems Engineering (MBSE). This aspect of our contribution constitutes an answer to the lack of a unified and consensual vision of evaluation during design, and is a preliminary phase to design analysis and evaluation. The estimation of the consequences of design choices on global system performance is treated by formalizing traceability links between the MMDE entities. Doing SE in successive, more and more detailed layers according to the STRATA model (« Vitech Corporation Releases 'A Primer for Model-Based Systems Engineering' | Business Wire » 2012), and using these traceability links, an algorithm able to detect and repair inter-layer traceability incoherence is proposed (Couturier et Lô 2012) (cf. Figure 1).

[Figure content: originating requirements feed successive layers (Layer 1, Layer 2, ..., Layer n); within each layer, requirement analysis and architecture design are performed, the next layer starting when the current one is complete.]

Fig. 1. The STRATA model (« Vitech Corporation Releases 'A Primer for Model-Based Systems Engineering' | Business Wire » 2012)
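The inter-layer coherence check can be sketched in a few lines. The requirement identifiers and the link structure below are hypothetical; the sketch only illustrates the detection half of the idea (a repair step would then propose candidate parent requirements for each orphan).

```python
def untraced(layer_requirements, trace_links):
    """Detect an inter-layer traceability incoherence: requirements of the
    detailed layer that no trace link connects to the layer above."""
    return [r for r in layer_requirements if not trace_links.get(r)]

# Hypothetical layers: each R2.x should refine some R1.x of the previous layer;
# R2.3 was introduced without any upstream justification.
layer2 = ["R2.1", "R2.2", "R2.3"]
links = {"R2.1": ["R1.1"], "R2.2": ["R1.1", "R1.2"]}
orphans = untraced(layer2, links)
```

Running the check after each layer is completed keeps the STRATA-style refinement coherent before the next, more detailed layer is started.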

On the other hand, we propose to join the behavioral models of SOIs to decision models (Fülöp 2005), in order to capture the objectives of the design and to evaluate the alternative design solutions. This ensures that the multiple, sometimes contradictory, objectives of the design are taken into account. Finally, a qualitative analysis, inspired by (Imoussaten et al. 2011; Giorgini et al. 2002), completes our contributions and answers our last identified problem: uncertainty management for the evaluation of design alternatives. In this case, behavioral models are supposed to be unavailable, as is often the case in Systems Engineering, but with the help of experts a qualitative evaluation can then be performed.


3 Mechatronic Example

This work is applied to mechatronic systems, which are complex and interdisciplinary. A methodological and tooled guide is proposed and illustrated on a power-assisted wheelchair, in order to assist the designer when facing many choices.

Considering the example of the wheelchair, provided by the mechatronics platform of Ecole des Mines d'Alès, the methodological and tooled guide has been applied to design decision making, without behavioral models of the SOI being available.

We have developed a qualitative evaluation method able to synthesize the qualitative advice of experts. Considering criteria and Technical Indicators, derived from the hierarchical decomposition of requirements, design alternatives can be evaluated and compared. For example, we consider the criterion 'Allowed employability using the wheelchair', which is decomposed into two criteria: 'Mobility of the wheelchair' and 'User friendliness of the wheelchair'. Several Technical Indicators are associated with these criteria (cf. Figure 2).

Fig. 2. Decomposition of the criterion and its associated Technical Indicators

Design alternatives are identified by their input Design Dependent Parameters (iDDPs). The values of the iDDPs have impacts on the output Design Dependent Parameters (oDDPs).

Fig. 3. Promising choices (dark areas) for effectiveness ('Employability')


Evaluation is performed by comparing, for each oDDP, the difference between the satisfaction degree reached on this oDDP and the one wished by the stakeholders of the SOI. This evaluation is made through the use of Technical Indicators. Finally, the satisfaction degrees of the low-level technical indicators are aggregated successively up to the parent indicator for which satisfaction is measured. From this analysis, promising design choices may be identified (cf. Figure 3).
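The bottom-up aggregation step can be sketched as follows. The criteria tree, indicator names, scores and weights are hypothetical and loosely follow the wheelchair example; a weighted mean stands in for whatever aggregation operator the experts actually choose.

```python
def satisfaction(node, tree, leaf_scores, weights):
    """Aggregate leaf-indicator satisfaction degrees bottom-up (weighted mean)."""
    children = tree.get(node)
    if not children:                       # leaf: a measured Technical Indicator
        return leaf_scores[node]
    total_w = sum(weights[c] for c in children)
    return sum(weights[c] * satisfaction(c, tree, leaf_scores, weights)
               for c in children) / total_w

# Hypothetical criteria hierarchy and satisfaction degrees in [0, 1].
TREE = {"Employability": ["Mobility", "UserFriendliness"],
        "Mobility": ["TI_speed", "TI_range"]}
LEAF = {"TI_speed": 0.8, "TI_range": 0.6, "UserFriendliness": 0.9}
W = {"Mobility": 2.0, "UserFriendliness": 1.0, "TI_speed": 1.0, "TI_range": 1.0}
score = satisfaction("Employability", TREE, LEAF, W)
```

Comparing this aggregated score across design alternatives, and against the stakeholders' wished level, is what singles out the promising choices.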

4 Conclusion

By adopting the MBSE framework in Systems Engineering, we have proposed a meta-model of evaluation so that designers from different technical backgrounds can share a common vision of the evaluation process. Based on the relationships exhibited by such a meta-model, we have also proposed a protocol for identifying the potential impact relations of design choices on future product performance. Depending on the accuracy and precision of the models introduced to advance the design, a quantitative or qualitative analysis of such impacts must be performed. During the conceptual design phase, given that the data are quite uncertain or imprecise, we have extended an original qualitative impact analysis in order to detect the most promising alternative system design solutions. Our proposals have been illustrated through the design of mechatronic systems.

5 References

Blanchard, Benjamin S., et Wolter J. Fabrycky. 2011. Systems Engineering and Analysis (5th Edition). Prentice Hall.

Couturier, Pierre, et Mambaye Lô. 2012. « Needs for tracing the consequences of decisions in Mechatronics design ». In Proceedings of the ASME 2012 11th Biennial Conference on Engineering Systems Design and Analysis (ESDA2012), July 2.

Faisandier, Alain. 2005a. « Ingénierie des systèmes complexes ». MAP systeme, Edition 1.

Fülöp, János. 2005. « Introduction to Decision Making Methods ».

Giorgini, Paolo, John Mylopoulos, Eleonora Nicchiarelli, et Roberto Sebastiani. 2002. « Reasoning with Goal Models ». In Proceedings of the 21st International Conference on Conceptual Modeling (ER '02), 167-181. London, UK: Springer-Verlag.

Imoussaten, Abdelhak, Jacky Montmain, François Trousset, et Christophe Labreuche. 2011. « Multi-criteria improvement of options ». Atlantis Press.

Lô, Mambaye, et Pierre Couturier. 2012. « A Metamodel of Evaluation in Systems Engineering: Application to a Mechatronic Design ». 1562-1567.

Tomiyama, T. 2006. « A classification of design theories and methodologies ». In Proceedings of the ASME Design Engineering Technical Conference.

« Vitech Corporation Releases 'A Primer for Model-Based Systems Engineering' | Business Wire ». 2012. May 22.


An open and distributed framework for designing Domain Specific Graphical Modeling Languages

François Pfister1, Vincent Chapurlat1, Marianne Huchard2, Clémentine Nebut2

1 LGI2P, Ecole des Mines d'Alès, site de Nîmes, Parc Scientifique G. Besse, 30000 Nîmes, France
{forename.lastname}@mines-ales.fr

2 LIRMM, CNRS – Université Montpellier 2, 161 rue Ada, 34095 Montpellier Cedex 5, France
{lastname}@lirmm.fr

Abstract. DSMLs (Domain Specific Modeling Languages) are an alternative to general-purpose modeling languages (e.g. UML or SysML) for describing models with concepts and relations specific to a domain. The design of a DSML requires defining a graphical notation over an abstract meta-model. We introduce a novel approach and a tool (Diagraph) to assist the design of graphical DSMLs. The main points are: non-intrusive annotations of the meta-model to identify nodes, edges, nesting structures and other graphical information; and immediate validation of meta-models through on-the-fly generation of an EMF-GMF instance editor supporting multi-diagramming. Diagraph is based on a pattern-recognition principle in order to infer most of the concrete syntax.

Keywords: DSML, MDE, meta-model, language workbench, graphical concrete syntax

1 Introduction

Our area is Systems Engineering (SE), a discipline that, beyond technological aspects, focuses primarily on functional architectures derived from the needs expressed by stakeholders, and only in a second step on the allocation of the resulting functions onto technical components. The obtained artifacts (architectural models) are verified and validated in constant interaction with the technical disciplines (software, mechatronics, organizational), to which they delegate the actual construction of the targeted system. Experts use modeling tools specifically tailored to their domain, including notations such as functional flow block diagrams (eFFBD) or physical-component-based architectures (ADL, Architecture Description Language, or PBD, Physical Block Diagram). However, when a new idea is proposed in the matter of engineering processes, existing languages do not capture these innovations; it then becomes necessary either to modify existing languages or to create new ones that support the proposed innovation.

We have been faced with such a situation: in our case, we had to integrate support for reusable patterns into the eFFBD notation. Since no open-source solution was available, we were forced to develop our own technology. The technical infrastructure to carry out such an editor does exist, but many obstacles remain, related to the lack of a
formal definition of graphical modeling languages. Existing practices are more based on recipes rather than on grammatical definitions.

2 Contribution

The process of defining a DSML starts with a class-relation formalism, either by extending UML or by using MOF, an initial class formalism, to define previously unavailable concepts, so as to obtain a new language tailored to the targeted field. Such work has two major phases: first, defining the abstract syntax by means of a class diagram, and second, defining the concrete syntax, which specifies the form of the (textual or graphical) statements that conform to the abstract syntax.

We are interested in graphical concrete syntaxes. There is no consensus on a description language for graphical concrete syntaxes (as exists with MOF for abstract syntaxes or EBNF for textual concrete syntaxes). Indeed, designing and implementing a graphical notation is a strenuous activity requiring significant expertise, both in its semiotic and cognitive concerns and in its technical and operational aspects. We demonstrate a process and a tool for agile development of graphical modeling languages on top of Ecore.

To enable the agile design of domain-specific languages, our framework, named Diagraph, allows their abstract syntax (meta-model) and their concrete syntax (notation and diagramming) to be designed simultaneously. Numerous solutions exist, but none of them satisfies the needs we will detail in the demo.

We aim to integrate Diagraph into the Eclipse ecosystem and to comply with the current standards of Model Based Engineering (MBE). Our intention is to reuse already available components; we therefore designed Diagraph as a technical overlay over GMF [2], which is powerful but overly complex for the end user. In addition, we propose a methodology, which most existing proposals lack. In our approach, a new graphical modeling language is defined by annotating the classes that make up the abstract syntax with concrete syntax information, by means of our Diagraph description language. These annotations, automatically generated by an integrated wizard and amended by the human expert, associate the concrete syntax with the abstract constructs. By this principle, which is that of a grammar, the target modeling language is defined in one single artifact.
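The annotation-inference idea can be sketched as follows. This is a hypothetical illustration of the principle, not Diagraph's actual description language or heuristics: it derives a default graphical role for each meta-class from simple structural patterns (a class with exactly two plain references resembles an edge; a containment reference suggests a nesting node), leaving the expert free to amend the result, as the wizard-generated annotations are in Diagraph.

```python
# Hypothetical sketch of the inference principle (not Diagraph's real
# syntax): derive default graphical roles from meta-model structure.

class MetaClass:
    """Minimal stand-in for an Ecore EClass."""
    def __init__(self, name, references=(), containments=()):
        self.name = name
        self.references = list(references)      # plain (non-containment) refs
        self.containments = list(containments)  # containment refs

def infer_graphical_role(mc):
    """Pattern recognition: two plain references and no containment
    suggest an edge; a containment suggests a nesting node; anything
    else defaults to a plain node."""
    if len(mc.references) == 2 and not mc.containments:
        return "edge"
    if mc.containments:
        return "nesting-node"
    return "node"

def annotate(meta_model):
    """Default annotations, to be amended by the human expert."""
    return {mc.name: infer_graphical_role(mc) for mc in meta_model}

# An eFFBD-like toy meta-model:
meta_model = [
    MetaClass("Function", containments=["Function"]),   # nestable block
    MetaClass("FunctionalExchange",
              references=["Function", "Function"]),     # drawn as an edge
    MetaClass("Port"),                                  # plain node
]
```

Running `annotate(meta_model)` maps `Function` to a nesting node, `FunctionalExchange` to an edge, and `Port` to a plain node; in Diagraph, such defaults are produced by the wizard and then refined by hand.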

3 Related work

Many frameworks are able to generate graphical editors from which models, instances of a given meta-model, can be created. The generation of these editors takes as input the given meta-model on the one hand, and manual parameters provided by the modeling expert on the other. The degree of automation of the generation process remains a challenge.

GMF [2] is a framework based on a mapping between MOF and a graph-drawing engine, as part of the Eclipse EMF-Ecore stack [1]. This framework is powerful but poorly documented, and therefore requires considerable technical expertise, resulting in a steep learning curve. MetaEdit+ [7] is not based on the EMF-Ecore stack, but on a specific meta-meta-model named GOPRR (Graph, Object, Property, Role and Relationship). GME (Vanderbilt University) is based on Microsoft Component Object Model technology. Microsoft DSL Tools has a proprietary meta-meta-model, while XMF Mosaic's is based on an infrastructure named XCore. Obeo Designer [3] is a modeling environment based on the notion of points of view. It is a component of the Eclipse platform and is based on EMF and GMF-Runtime. Obeo Designer and MetaEdit+ are commercial tools split into two parts: a workbench, for designing modeling languages, and a modeler, for using them.

Eugenia [4], a free and open-source tool based on EMF, proposes annotating the meta-model with concrete syntax statements to generate the GMF artifacts, sparing the user from dealing with the unwieldy GMF workflow. However, it lacks integrated support for multiple points of view, which is required for large meta-models in real-life settings.

4 The Diagraph proposal

As the above survey shows, several limitations of the existing offer led us to design Diagraph, a new language and framework. Like Eugenia [4], we adopt the principle of annotations, but in an improved way, enhanced by a semi-automated mechanism that infers a large part of the graphical notation. Furthermore, the tool is part of an integrated framework which acts as the core engine within an open, worldwide-distributed environment [5, 6] dedicated to the design of visual modeling languages. Diagraph offers at the same time:

• An easy-to-use solution,
• A graphical notation inference mechanism, based on pattern recognition,
• Native support of the multi-view paradigm,
• Native and easy support of nested and affixed nodes,
• Integration in the Eclipse-OSGi ecosystem, a de facto standard in the Model Based Engineering field,
• An open technology [5], with a published meta-model (MOF compliant) that defines a pivot concept of diagramming, independent of any platform on the one hand, and targeting several platforms on the other (GMF Runtime and Graphviz Dot at the moment),
• A genuinely usable and regularly updated tool,
• A shared and collaborative repository [6] of visual modeling languages, provided as use cases, bundled with their graphical editors and several examples for each language.


5 References

1. Budinsky, F. et al.: Eclipse Modeling Framework. Pearson Education (2003).
2. Gronback, R.: Eclipse Modeling Project: A Domain-Specific Language (DSL) Toolkit. Addison-Wesley Professional (2009).
3. Juliot, E., Benois, J.: Viewpoints creation using Obeo Designer, or how to build Eclipse DSM without being an expert developer?, http://www.obeo.fr/resources/WhitePaper_ObeoDesigner.pdf.
4. Kolovos, D. et al.: Taming EMF and GMF Using Model Transformation. In: Petriu, D. et al. (eds.) Model Driven Engineering Languages and Systems, pp. 211-225. Springer, Berlin / Heidelberg (2010).
5. Pfister, F.: Diagraph, a Framework over EMF and GMF to automate the design of graphical Domain Specific Modeling Languages, http://code.google.com/p/diagraph/.
6. Pfister, F.: OpenDSML: An Open Framework for Domain Specific Graphical Modeling Languages, http://www.opendsml.org.
7. Tolvanen, J.-P., Kelly, S.: Integrating Models with Domain-Specific Modeling Languages. In: Systems Programming Languages and Applications: Software for Humanity (formerly OOPSLA) (2010).


Modeling and verification techniques for incremental development of UML architectures

Thanh-Liem Phan, Anne-Lise Courbis, Thomas Lambolais

Ecole des mines d’Ales, LGI2P

Parc Scientifique Georges Besse, 30035 Nîmes cedex 1, France

{thanh-liem.phan,anne-lise.courbis,thomas.lambolais}@mines-ales.fr
http://www.mines-ales.fr

Abstract. Our goal is to assist system architectural model development by providing techniques and tools to detect specification and design errors early. The models being developed deal with behavioral aspects of reactive systems: we ask whether liveness properties are preserved during model development. For that purpose, we propose incremental modeling operations supported by formal relations to compare increments. The chosen relations are based on the conformance relation formally defined on labeled transition systems.

Keywords: incremental construction, architectures, UML, verification

1 Problems and objectives

We are interested in the development of architectural models of critical reactive systems expressed as assemblies of UML components. In order to specify and design architectures and behaviors, we advocate the use of an incremental development approach and model evaluations during the development process.

Refinement techniques offer several benefits. In particular, they integrate formal developments into the design and implementation process instead of conducting them in parallel, which reduces costs and improves consistency. However, classical refinement techniques do not allow us to follow an Agile method in which requirements are not only refined but also extended in order to offer new services or new functions. Consequently, they must start from initial abstract specifications which cover the whole system. This has two drawbacks: initial models are tricky to design, and concrete implementations arrive late, which does not provide rapid feedback to clients and increases the stress of development teams.

The incremental development approach aims at overcoming these limitations by integrating both refinement and extension techniques: initial specifications can be partial, and concrete implementations can be provided quickly, which means feedback from clients is gathered early. There is, however, no technique that formally supports the incremental construction of UML models, especially for reactive systems. Reactive properties are formalized as liveness properties (if a system is solicited on specific events, will it respond with the expected actions?). The verifications that we propose are conducted through binary relations between models, to determine whether the second model is actually a refinement, an extension, or an increment of the first one. All these relations guarantee the preservation of liveness properties. We propose a framework, IDCM [5] (Incremental Development of Conforming Models), in order to verify the above relations between UML models (state machines or composite structures).

2 Proposal: incremental construction of UML architectures

The proposed incremental construction approach for developing architectures in UML consists of informal and formal aspects. Informal aspects cover the step-by-step development of architectures using UML notations. Formal aspects mean that, at each development step, software architects can compare the obtained model with the model of the previous step. The formal semantics we have chosen is that of LTS (Labeled Transition Systems). In our incremental construction approach, the informal aspect is realized by the TOPCASED toolkit [3], while the formal aspect is implemented by our tool IDCM [5]. An overview of this approach is shown in Fig. 1.

Fig. 1. Incremental construction approach of UML architectures.

The incremental construction of an architecture starts from a first model which is verified to be deadlock-free and without critical live-locks. The model is then developed by adding, substituting, or splitting components, or by reconfiguring architectures. In order to realize this approach, we consider two problems: i) techniques for the construction of UML architectures, and ii) semantics and evaluations of UML architectures. Evaluations are performed through the preorders shown in Fig. 1. We present the conformance relations proposed in [2] and introduce the should-testing preorder, which supports compositional reasoning in software architecture, as follows.

Definition 1. Let P and Q be two components and 𝒜 the set of all operations of P and Q:


– Tr(P) is the set of traces of P. A trace is a partial observable execution.
– Q conf P (Q conforms to P) if for all σ ∈ Tr(P), for all A ⊆ 𝒜, P must accept A after σ ⇒ Q must accept A after σ.
– P ⊑EXT Q (Q extends P) if Tr(P) ⊆ Tr(Q) and Q conf P.
– P ⊑RED Q (Q reduces P) if Tr(Q) ⊆ Tr(P) and Q conf P.
– P ⊑INC Q (Q increments P) if for any I such that I conf Q, I conf P.
– P ⊑REF Q (Q refines P) if P ⊑RED Q and P ⊑INC Q.
– P ⊑SHD Q (Q can substitute P) if for all components t describing a test, P should accept t ⇒ Q should accept t.

"P must accept A after σ" refers to the sets of actions that P must accept after a trace σ [2]. If Q conf P, then Q is more deterministic than P and all liveness properties of P are satisfied by Q. "P should accept t" means that every reachable state of P||t (P and t executed concurrently) is required to be on a path to success [6]. If P ⊑SHD Q, then P ⊑RED Q and, moreover, P can be replaced by Q in any hiding and parallel composition context.
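To make Definition 1 concrete, the following sketch (ours, not part of IDCM) checks the conf relation and the trace-based reduction preorder on small finite, acyclic LTSs, following the definition literally; `must_accept` holds vacuously when no state is reachable after the given trace.

```python
from itertools import chain, combinations

class LTS:
    """Finite, acyclic labeled transition system."""
    def __init__(self, transitions, initial=0):
        self.trans = transitions   # list of (source, label, target)
        self.initial = initial

    def out(self, state):
        return {l for (s, l, d) in self.trans if s == state}

    def after(self, trace):
        """States reachable from the initial state by executing `trace`."""
        states = {self.initial}
        for label in trace:
            states = {d for (s, l, d) in self.trans
                      if s in states and l == label}
        return states

    def traces(self):
        """All traces (terminates because the LTS is acyclic)."""
        result = set()
        def walk(state, prefix):
            result.add(prefix)
            for (s, l, d) in self.trans:
                if s == state:
                    walk(d, prefix + (l,))
        walk(self.initial, ())
        return result

    def must_accept(self, actions, trace):
        """Every state reached after `trace` offers some action in
        `actions` (vacuously true if no state is reached)."""
        return all(self.out(s) & actions for s in self.after(trace))

def subsets(alphabet):
    s = list(alphabet)
    return (set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

def conf(q, p, alphabet):
    """Q conf P: on every trace of P, Q meets P's must-acceptance demands."""
    return all(q.must_accept(a, sigma)
               for sigma in p.traces()
               for a in subsets(alphabet)
               if p.must_accept(a, sigma))

def reduces(p, q, alphabet):
    """P is reduced by Q: Tr(Q) ⊆ Tr(P) and Q conf P."""
    return q.traces() <= p.traces() and conf(q, p, alphabet)
```

For instance, `spec = LTS([(0,'a',1), (0,'a',2), (1,'b',3), (2,'c',4)])`, an internal choice between `a;b` and `a;c`, is reduced by the deterministic implementation `impl = LTS([(0,'a',1), (1,'b',2)])`: `conf(impl, spec, {'a','b','c'})` holds, while `conf(spec, impl, {'a','b','c'})` does not.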

In the case of substitution of components, neither the extension nor the refinement relation between the new component and the substituted one ensures the conformance of architectures. We therefore select the should-testing relation [1, 6], a congruence relation stronger than the refinement relation.

We give a simple example illustrating the benefits of ⊑SHD compared with ⊑EXT. Consider the two components C1 and C2 in Fig. 2. We have C1 ⊑EXT C2, but when we substitute C1 by C2 in an architecture, we obtain the unwanted result A1 ⋢EXT A2. Using the relation ⊑SHD, we detect that C1 ⋢SHD C2.

Fig. 2. C1 ⊑EXT C2 but A1 ⋢EXT A2.

2.1 Techniques for construction of UML architectures

In our approach, architectures can be developed along two axes: the vertical axis represents the level of abstraction, whereas the horizontal axis represents the level of requirements coverage. ⊑REF is used for vertical techniques, ⊑EXT is chosen for horizontal techniques, while ⊑INC combines both.

Vertical techniques. The construction of architectures along the vertical axis does not change the functionality of the system in terms of mandatory offered or required services, but only the level of abstraction of the description. We present here some of these techniques:


– Refinement techniques (along the downward direction):
  • Refinement solving, to find a component satisfying constraints in order to develop an architecture with reduced services. Let C be a component to be refined using an existing component C1. Refinement solving consists in developing or finding a component X and an architectural configuration f such that A := f(C1, X) satisfies C ⊑REF A.
  • Refine substitution, to replace components in an architecture by refined ones. For an architecture A1 := f(C1, ..., Ci, ..., Cn), refine substitution consists in finding a component C′i such that A2 := f(C1, ..., C′i, ..., Cn) satisfies A1 ⊑REF A2.
– Abstraction techniques (along the upward direction):
  • Abstraction solving, to find components satisfying constraints in order to obtain a more abstract architecture. Let C be a component to be abstracted using an existing component C1. Abstraction solving consists in developing or finding a component X and an architectural configuration f such that A := f(C1, X) satisfies A ⊑REF C.
  • Abstract substitution, to replace components in an architecture by more abstract ones.

Horizontal techniques. The construction of architectures along the horizontal axis does not change the level of abstraction of the model but involves the modification of behavioral specifications, which can be realized through the offered and required services and the interactions with the environment. The main horizontal techniques are:

– Extension techniques, whose main modeling operations are:
  • Extension solving, to find components which satisfy constraints in order to develop an architecture offering more services. For an architecture A1 := f(C1, ..., Cn), extension solving consists in finding g, C′1, ..., C′m such that A2 := g(C′1, ..., C′m) satisfies A1 ⊑EXT A2.
  • Extension substitution, to replace a component by another one which satisfies constraints so as to obtain an architecture offering more services. For A1 := f(C1, ..., Ci, ..., Cn), extension substitution consists in finding a component C′i such that A2 := f(C1, ..., C′i, ..., Cn) satisfies A1 ⊑EXT A2.
– Restriction techniques, whose modeling operations are restriction solving and restriction substitution. These techniques are the opposite of extension techniques and result in removing some behaviors or reducing interfaces.

2.2 Semantics and evaluation of UML models

To analyze components and architectures, their formal semantics need to be defined. We have automated two transformations of models developed with the Topcased framework [3]: the transformation of UML state machines into LTS, and the transformation of UML architectures into intermediate specifications defined by synchronization vectors, from which LTS are automatically generated using the CADP toolbox [4].
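The synchronization-vector composition can be illustrated by a short sketch. This is an assumption-laden simplification of the idea behind CADP-style synchronized products, not CADP's actual API: a vector (a, b, result) makes both components move together, with `None` marking a component that does not participate in that vector.

```python
def sync_product(trans1, init1, trans2, init2, vectors):
    """Synchronized product of two LTSs given as transition lists
    (source, label, target). A vector (a, b, result) fires label `a`
    in the first component and `b` in the second simultaneously,
    producing label `result`; None means 'does not participate'."""
    product, seen, todo = [], {(init1, init2)}, [(init1, init2)]
    while todo:
        s1, s2 = todo.pop()
        for (a, b, res) in vectors:
            next1 = ([d for (s, l, d) in trans1 if s == s1 and l == a]
                     if a is not None else [s1])
            next2 = ([d for (s, l, d) in trans2 if s == s2 and l == b]
                     if b is not None else [s2])
            for d1 in next1:
                for d2 in next2:
                    product.append(((s1, s2), res, (d1, d2)))
                    if (d1, d2) not in seen:
                        seen.add((d1, d2))
                        todo.append((d1, d2))
    return product

# A sender and a receiver forced to synchronize on the communication:
sender   = [(0, 'send', 1)]
receiver = [(0, 'recv', 1)]
vectors  = [('send', 'recv', 'comm')]
```

Here `sync_product(sender, 0, receiver, 0, vectors)` yields the single product transition `((0, 0), 'comm', (1, 1))`: the two components advance together and the communication is observed as `comm`.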


In order to evaluate UML models, designers choose the appropriate relation (among extension, refinement, increment, and substitution) according to the modeling relationship expressed in the component diagram. In case of failure, the verdict is expressed by refusal sets given after a sequence of interactions. Designers can then correct their models and continue the incremental development. Experiments on modeling, transformation, and conformance analyses have been conducted on several case studies.

3 Conclusion

We have developed a Java tool supporting LTS transformation and verification of UML models. The value of the incremental development approach is to offer a compromise between pragmatic approaches, such as Agile development, and formal approaches, such as the B method.

The contribution of this work is twofold: i) providing evaluation means associated with the UML specialization and realization relationships; and ii) supporting a substitution relation between UML components.

References

1. Brinksma, E., Rensink, A., Vogler, W.: Fair Testing. In: Smolka, S. (ed.) CONCUR '95: Concurrency Theory, volume 962, pages 313-327. Springer-Verlag, August 1995.
2. Brinksma, E., Scollo, G.: Formal Notions of Implementation and Conformance in LOTOS. Technical report, Twente University of Technology, Enschede, December 1986.
3. Farail, P., Gaufillet, P., Canals, A., Le Camus, C., Sciamma, D., Michel, P., Cregut, X., Pantel, M.: The TOPCASED project: a toolkit in open source for critical aeronautic systems design. Embedded Real Time Software (ERTS), 2006.
4. Garavel, H., Lang, F., Mateescu, R., Serwe, W.: CADP 2010: a toolbox for the construction and analysis of distributed processes. In: LNCS, volume 6605, pages 372-387. Springer-Verlag, March 2011.
5. Luong, H.-V., Courbis, A.-L., Lambolais, T., Phan, T.-L.: IDCM: un outil d'analyse de composants et d'architectures dédié à la construction incrémentale. In: 11èmes Journées Francophones sur les Approches Formelles dans l'Assistance au Développement de Logiciels, pages 50-53, January 2012.
6. Rensink, A., Vogler, W.: Fair testing. Information and Computation, 205(2):125-198, 2007.
