

  • Digital Applications in Archaeology and Cultural Heritage 17 (2020) e00144

    Contents lists available at ScienceDirect

    Digital Applications in Archaeology and Cultural Heritage

    journal homepage: www.elsevier.com/locate/daach

    A method for similarity assessment between death masks and portraits through linear projection: The case of Vincenzo Bellini

    Gaetano Sequenzia a,*, Dario Allegra b, Gabriele Fatuzzo a, Filippo Luigi Maria Milotta b, Filippo Stanco b, Salvatore Massimo Oliveri a

    a Department of Electric, Electronic and Informatic Engineering (DIEEI), University of Catania, Italy
    b Department of Mathematics and Computer Science (DMI), University of Catania, Italy

    A R T I C L E I N F O

    Keywords: 3D death mask; Landmarks projection; 3D-2D comparison; Face recognition

    * Corresponding author. E-mail addresses: [email protected] (G. Sequenzia), [email protected] (D. Allegra), [email protected] (G. Fatuzzo), [email protected] (F. Stanco), [email protected] (S.M. Oliveri).

    https://doi.org/10.1016/j.daach.2020.e00144
    Received 9 October 2019; Received in revised form 23 March 2020; Accepted 3 April 2020
    2212-0548/© 2020 Elsevier Ltd. All rights reserved.

    A B S T R A C T

    The aim of this study was to confirm the identities of numerous portraits attributed to the composer Vincenzo Bellini by using 3D-to-2D projection. This study also followed on from earlier research on three death masks of Bellini, the results of which had shown that the wax mask in Catania’s Bellini museum best represented Bellini’s face compared to the other two. This study used the aforementioned 3D wax death mask obtained through Reverse Engineering as a reference for a morphometric comparison with 14 other portraits. For each portrait, the linear 3D-to-2D transformation M was found which minimized the distance between the 2D landmarks in the picture and the projected landmarks on the 3D mask. This normalized the distances considering the scale of the portrait and the final dissimilarity score with the mask. In particular, the analytical results were able to identify two portraits which particularly resembled the 3D death mask, providing future researchers with the chance to carry out historical-artistic evaluations. We were also able to develop a new tool, Image Mark Pro, to easily annotate 2D images by introducing landmark locations. Since it was so reliable for manually annotating landmarks, we decided to make it publicly available for future research.

    1. Introduction

    In recent years, numerous studies have concerned the development of computer-based methodologies to identify faces in paintings, sculptures and death masks (Srinivasan et al., 2015; Rudolph et al., 2017; Wang et al., 2019). Over time, and prior to the advent of photography, these representations were of the famous: royalty, religious or historical figures, or famous artists. Sometimes, identifying the representation, whether 2D (portrait) or 3D (sculpture or death mask), can be difficult. Identifying a portrait’s sitter is a complex process, given that portraits are subject to the social and artistic conventions of their time (Li et al., 2011) and to the particular artist (Hayes et al., 2018). Traditionally, their identification was limited to expert opinion, often causing discord and unresolved debate. Today, current facial recognition techniques (Srinivasan et al., 2013a,b; Gupta et al., 2018) are reliable and can objectively identify degrees of similarity in portraits of unknown sitters and in various portraits of the same sitter whose identity is well-known or presumed.

    There are many computer-based facial recognition systems, both by type and end-product. The most common method for portraiture is feature-based, with algorithms which compare sets of facial reference points (landmarks) and their relative anthropometric distances (Tyler et al., 2012).

    1.1. Related works

    Face recognition can be applied to digitally acquired pictures as well as to artists’ portraits, sculptures and death masks. The main issue about face recognition in cultural heritage concerns artistic subjectivity: the influence of factors like politics, salary, personal sense of beauty, social and artistic conventions or historical setting. Since there are many portraits in which the sitter is totally or almost unknown, facial recognition of the same sitter in different subjective artistic interpretations could help both in re-discovering lost heritage and in providing significant insights into the sitters’ personalities. Srinivasan et al. (2013a,b) proposed using Portrait Feature Space (PFS) as part of a more complex pipeline. Through this procedure, they conduct a discriminant analysis of features extracted


  • Fig. 1. The 14 portraits chosen for comparison with the 3D death mask of Vincenzo Bellini: (1) lithography by Vassalli; (2) portrait by Pietro Luchini (1799–1883); (3) portrait by Giuseppe Platania (1780–1852); (4) portrait by Federico Maldarelli (1826–1893); (5) portrait by Federico Maldarelli (1826–1893); (6) portrait by Adolfo Matarelli (1832–1887); (7) lithography by Girolamo Bozza (from a portrait by Natale Sciavone (1777–1858)); (8) portrait by unknown author (from a lithography by Roberto Focosi (1806–1862)); (9) portrait by unknown author; (10) portrait by Angelo D’Agata (1842–post 1913) (from a portrait by Francesco Di Bartolo); (11) portrait by Giuseppe Cammararo (1766–1850); (12) portrait by Giuseppe Guizzardi (1779–1861); (13) portrait by unknown author; (14) portrait by Francesco Di Bartolo (1826–1913).

    G. Sequenzia et al. Digital Applications in Archaeology and Cultural Heritage 17 (2020) e00144


  • Fig. 2. The extant Vincenzo Bellini death masks: (a) wax mask from Catania’s Bellini Museum; (b) plaster cast mask from Napoli’s S. Pietro a Majella Museum; (c) plaster cast mask from Catania’s Bellini Museum.


    from known image pairs, the most significant features being selected to define the PFS distribution functions. The PFS uses a Gabor filter and measures the distances between key facial points. In a nutshell, the aim is to build a probability distribution for the similarity score between matching pairs (i.e., portraits depicting the same person). It is a supervised approach, since matching pairs of portraits must be provided to create the model. Finally, when a pair of images is tested to check if they depict the same person, the similarity score is computed and a hypothesis test assesses if there is a match or not, according to the model previously built.

    Stork (2009) gives an overview of Computer Vision tools for the analysis of paintings, starting from image processing techniques (Johnson et al., 2008; Stork and Johnson, 2010). Other studies have employed 3D shape information of the face (Tyler et al., 2012), even if anthropometric distances and local features have proven to be more discriminating (Srinivasan et al., 2015). Local features can also be landmarks or anatomical repères (Caple and Stephan, 2016) (e.g., the iris or eye corners), while distances are the metrics computed from defined local features (e.g., the distance between cheekbones, or the distance between irises). A 3D Morphable Model (3DMM) was designed in (Tyler et al., 2012), with the aim of finding the similarities between a portrait of Leonardo and two similar sculptures. The 3DMM is based on a set of 70 Farkas local features, while the similarity between analysed faces is measured according to Euclidean distances in 3D space.

    The interest around face recognition applied to art has grown in recent years. The US National Endowment for the Humanities has funded a project named FACES (Faces, Art, and Computerized Evaluation Systems) to help art historians guess sitters’ identities or define/recognise artists’ styles (Srinivasan et al., 2013a,b; Rudolph et al., 2017). More recently, deep learning techniques have been investigated to verify identity in renaissance portraits (Gupta et al., 2018). In that work, extracting local features is delegated to a Siamese Convolutional Neural Network (Siamese CNN). Although in recent years CNN-based methods have performed outstandingly in most Computer Vision tasks, they require a large amount of data to work properly. Even though high-quality performance is a valuable advantage, there is a drawback in handling very different art styles. To deal with the scarcity of renaissance portrait data, the authors employ a style-transfer module to apply a classical renaissance style to modern pictures. After a proper dataset is built, a Siamese CNN is used to learn the feature and similarity metrics between pairs of images. However, as did Srinivasan et al., to detect pairs depicting the same subject, the authors build a PFS and then employ a Neyman-Pearson hypothesis test.

    As opposed to most of the problems addressed in the literature, we aimed to find the most faithful portrait of a subject within a set of paintings of that subject. This can be done by using a digital 3D model as reference. Specifically, in our case study of 14 paintings of Vincenzo Bellini, we aimed to find the most realistic portrait by analysing the 3D model of his death mask. Hence, we have proposed a pipeline to compare a 3D mask with a 2D portrait. Given these different domains, most of the presented literature is not suitable. Other works which take advantage of 3D-2D relationships are outside the context of Cultural Heritage. In (Liu et al., 2016; Liu et al., 2020) the authors propose to use two sets of regressors to locate the landmarks on a 2D image and meanwhile reconstruct its 3D face shape. In (Gallo et al., 2016), 3D models of breasts are projected into 2D space and a set of landmarks is located to describe the breast shape. 3D human pose estimation is addressed through an RGBD camera in (Tang et al., 2019). The authors use convolutional neural networks for 2D human pose estimation by getting joint points in the colour image; then they map the results to the corresponding depth channel to obtain 3D joint points. RGBD cameras are also used to learn a 3D-2D correspondence in food volume estimation for diet (Allegra et al., 2017; Lu et al., 2018), to support assistive technologies (Tian, 2014; Milotta et al., 2015) and even in the industrial manufacturing field (Munaro et al., 2016; Liu and Wang, 2019).

    However, the reported 3D-2D literature from other fields also cannot be used for our case study. In fact, we cannot take advantage of colour and texture features, since they are not available for the 3D mask. Moreover, there is no way to employ RGBD images, since portraits are natively 2D. The methods of Gupta et al. and Srinivasan et al. compare just 2D images and assess if they match. This led us to use anthropometric landmarks to geometrically describe facial features and a 3D-2D linear projection to evaluate the similarity between a 3D mask and a 2D portrait.

    2. Case study

    2.1. Vincenzo Bellini iconography

    From the vast portraiture of Vincenzo Bellini, there are two iconographic tendencies. One is where the images of the composer’s face have very different morphologies, accentuating contrasting expressions and/or showing different ages. The other is when the composer’s face is reproduced identically across a range of artworks: different busts and/or scenes, or wearing different clothes. The latter shows that a significant pictorial iconography, as in the case of prints, seems to have been copied from other portraits, sometimes to the detriment of the original.

    To identify those portraits most closely resembling Bellini’s death mask, only those held to be of greatest value were selected, due to the reputation of their custodial museums and the fame of their artists. Some portraits were done within the composer’s lifetime, while others were done by famous artists from reliable iconography no longer extant. Furthermore, the chosen portraits show the composer both young and elderly. Fig. 1 shows the 14 portraits chosen for this study taking into

  • Fig. 3. Numbered map of identified landmarks (Caple and Stephan 2016).


    account their state of conservation, image definition and critical validity (https://letteredalconvento.wordpress.com/tag/vincenzo-bellini/).

    2.2. The Vincenzo Bellini death masks

    A previous study (Fatuzzo et al., 2016) by the same authors concerned three death masks attributed to Vincenzo Bellini and compared them to the post mortem results of Prof. Dalmas, commissioned by King Luigi Filippo to curtail any indiscretions regarding possible violent causes of death. In detail, the three masks have the following characteristics:

    1) wax mask (Fig. 2a) made in 1935 by the artist Salvo Giordano from a hitherto undiscovered mould and kept in Catania’s Bellini Museum in optimum condition;

    2) plaster mask (Fig. 2b) presumably made very soon after death by the sculptor Jean-Pierre Dantan from a hitherto undiscovered mould and kept in Napoli’s S. Pietro a Majella Museum in good condition;

    3) plaster mask (Fig. 2c) made by the artist Giulio Monteverde in 1876 during the exhumation, from a hitherto undiscovered mould, and kept in Catania’s Bellini Museum in an advanced state of deterioration.

    From that study, Catania’s wax mask had the highest definition compared to Napoli’s plaster cast mask, despite both presumably being modelled on the same mould taken shortly after Bellini’s death. Because Catania’s plaster cast mask showed advanced deterioration, it was unsuitable for this study, so the wax mask (a) was chosen as the reference.

    3. Choice and description of landmarks

    Numerous literature studies concern computational methods based on geometric data to recognise and identify faces (Gupta et al., 2010; Charlier et al., 2014; Marcolin and Vezzetti, 2017; Charlier et al., 2019).

    Anthropometric evaluations of faces begin by identifying facial reference points (landmarks) with visible or palpable characteristics on the skin or bone. The landmarks are chosen to obtain as much facial data as possible, particularly relating to the morphology of the eyes, ears, nose and mouth contours.

    In this case study, 16 soft-tissue landmarks were identified among those proposed in the anthropometric literature (Farkas, 1994; Short et al., 2014; Caple and Stephan, 2016). The choice criteria were based on their high visibility and identifiability in 2D as well as in 3D, and also on their ability to morphologically characterise those particulars of the human face like the eyes, nose and mouth (see Fig. 3).

    Points   Landmark nomenclature           Definition
    0–8      Exocanthion (sx - dx)           Most lateral point of the palpebral fissure, at the outer commissure of the eye
    3–5      Endocanthion (sx - dx)          Most medial point of the palpebral fissure, at the inner commissure of the eye
    1–6      Palpebrale superius (sx - dx)   The highest point on the margin of the upper eyelid
    2–7      Palpebrale inferius (sx - dx)   The lowest point on the margin of the lower eyelid
    4        Sellion                         Deepest midline point of the nasofrontal angle
    9–11     Alar curvature point (sx - dx)  The most posterolateral point of the curvature of the base line of each nasal ala
    10       Pronasale                       The most anterior protruding point of the nose apex
    12       Subnasale                       Median point at the junction between the lower border of the nasal septum and the philtrum area
    14       Labiale superius                Midpoint of the vermilion border of the upper lip
    13–15    Cheilion (sx - dx)              Outer corners of the mouth where the outer edges of the upper and lower vermilions meet

    4. Proposed approach

    In this section, we detail the proposed methodology to objectively evaluate the similarity between the 3D Bellini mask and the faces depicted in the set of paintings. We need to find the most realistic portrait by using N landmarks.

    The main challenge is posed by the different domains of the two objects - a 3D mask and a 2D portrait - which led us to apply camera matrix estimation (Szeliski, 2010). A camera matrix refers to the geometrical 3D-2D transformation involved in the projection of a 3D object of the real world to the 2D image domain. Our idea was to find, for each portrait, the linear 3D-to-2D transformation M which minimized the distance between the 2D markers in the picture and the projected markers of the 3D mask. The distance is then normalized by considering the scale of the portrait, to finally obtain the degree of resemblance to the mask. This means the portrait which exhibits the minimum distances is potentially the most faithful.

    This evaluation can be done by applying the procedure reported below to each painting in our set:

    1) Fixing the origin in 3D and 2D space, and properly translating the markers;

    2) Solving a least squares problem to compute the transformation matrix M;

    3) Assessing the similarity between a 2D portrait and a 3D mask by using the related transformation matrix M.

    A detailed description of the three steps is reported in the next subsections.

    4.1. Fixing origin

    Since linear transformations do not include translation, we first registered the sets of 2D points (of the portraits) and 3D points (of the mask) by using the position of one of the landmarks as the origin. A point c, which should be approximately at the face centre, is selected as the origin. Letting Q = {q_0, q_1, …, q_{N−1}} be the set of 2D landmarks for a portrait, we translated the origin to point q_c to get the new set P = Q − q_c. Similarly, we translated the 3D mask’s landmarks S = {s_0, s_1, …,


  • Fig. 4. Pipeline to compute the dissimilarity score between a portrait and the 3D mask.

    Fig. 5. Gauge R&R analysis of the landmark annotation.


    s_{N−1}} to get the set R = S − s_c.
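    The centering step above can be sketched as follows (a minimal NumPy example; the paper's implementation details are not given for this step, and the function name is ours):

```python
import numpy as np

def center_landmarks(Q, S, c):
    """Translate the 2D landmark set Q (N x 2) and the 3D landmark
    set S (N x 3) so that landmark c becomes the origin of each set,
    i.e. P = Q - q_c and R = S - s_c."""
    P = Q - Q[c]
    R = S - S[c]
    return P, R
```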

    4.2. 3D landmark projection

    A proper comparison between 3D landmarks and 2D ones requires projecting the 3D points onto the 2D image domain. This can be done by finding a linear transformation that can be described with a 3 × 2 matrix M ∈ R^(3×2). Choosing a linear transformation was motivated by the fact that the projection we were looking for must never allow non-rigid deformation of the face. Namely, we could not allow the size of one eye to change without the other one doing so too; again, we could not allow the distance between the eyes to change more than the distance between the nose and mouth. In other words, we could not alter any facial features.

    To find the matrix M, we solve a least squares problem, whose solution is:

    M = (Rm^T Rm)^(−1) Rm^T Pm    (1)

    where Rm ∈ R^(N×3) is a matrix in which row i includes the coordinates of the 3D landmark r_i. Similarly, Pm ∈ R^(N×2) is a matrix in which row i includes the coordinates of the 2D landmark p_i. Rm^T is the transpose of Rm; the term (Rm^T Rm)^(−1) Rm^T is the pseudo-inverse matrix and serves as a generalization of the inverse matrix for non-square or singular ones.
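    Equation (1) is the normal-equations solution of the least squares problem min over M of ||Rm M − Pm||. In practice it can be obtained with a standard least squares solver rather than by forming the pseudo-inverse explicitly, which is numerically preferable (a NumPy sketch with our own naming; the paper's computation was done in MATLAB):

```python
import numpy as np

def fit_projection(Rm, Pm):
    """Least-squares 3x2 projection matrix M minimizing ||Rm @ M - Pm||,
    equivalent to M = (Rm^T Rm)^(-1) Rm^T Pm.
    Rm: N x 3 centered 3D landmarks; Pm: N x 2 centered 2D landmarks."""
    M, *_ = np.linalg.lstsq(Rm, Pm, rcond=None)
    return M  # shape (3, 2)
```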

    4.3. Similarity assessment

    Matrix M can now be used to project the 3D landmarks in R onto 2D space. Specifically, the projection p̂_i of point r_i is:

    p̂_i = r_i M    (2)

    Then, we can compute error E as:

    E = ( Σ_{i=1}^{N} ||p_i − p̂_i||² ) / ( Σ_{i=1}^{N} ||p_i − p̄||² )    (3)

    where ||·|| is the L2 norm (i.e., the Euclidean distance) and p̄ is the centroid (average point) of the expected coordinates p_i. The denominator in the formula serves as a normalization term to obtain a scale-invariant evaluation. Error E is positively correlated with the amount of non-rigid alteration of facial features which would be required for a perfect match. Hence, we expect that a faithful portrait of the person to whom the mask belongs results in a small value of E. In a nutshell, error E is used as a dissimilarity score. The pipeline to compute the dissimilarity between a 2D portrait and the 3D mask is summarized in Fig. 4.
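    The dissimilarity score of equations (2)-(3) can be sketched as follows (NumPy, with our own naming; Pm and Rm are the centered 2D and 3D landmark matrices):

```python
import numpy as np

def dissimilarity(Pm, Rm, M):
    """Scale-invariant dissimilarity E between the 2D landmarks Pm
    and the projected 3D landmarks Rm @ M (equations (2) and (3))."""
    P_hat = Rm @ M  # eq. (2): project each 3D landmark to 2D
    num = np.sum(np.linalg.norm(Pm - P_hat, axis=1) ** 2)
    den = np.sum(np.linalg.norm(Pm - Pm.mean(axis=0), axis=1) ** 2)
    return num / den
```

    A portrait that matches the mask perfectly under some linear projection yields E = 0; larger values indicate more residual, non-projective distortion.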

    5. Landmark annotation

    Pictures were manually labelled with the 16 landmarks previously described. These 16 reference points were taken from the 2D and 3D domains. In the former, the markers were placed using an application which we developed in Processing, called Image Marker Pro. This application has several features, like browsing, opening, and the saving of images in any common format. The core of the application is the labelling tool, which can mark and save landmarks on the image, ensuring an indexed order which corresponds to the 16 reference points. Finally, the labelled images were exported together with a list of the coordinates of the labelled points. Labelling the 3D mask model was done using MeshLab (Cignoni et al., 2008) by picking the 3D points and saving their coordinates to a file.

    To assess the quality of the annotations, we conducted a statistical analysis highlighting the repeatability of placing the landmarks multiple times, and the robustness of each landmark, relating to the ease with which a specific landmark is placed in the same location multiple times.

    A statistical analysis based on a Gauge Repeatability and Reproducibility (R&R) study was carried out to quantify measurement error and test the quality of the annotations (see Montgomery (2013) for further details about Gauge R&R analyses). To investigate the measurement error, W = 14 portraits were considered for each of the N = 16 landmarks. Multiple portraits improved the robustness of the annotation procedure compared to landmark variability. For each landmark i and portrait j, for i = 0, …, N − 1 and j = 0, …, W − 1, we asked O = 4 different operators to annotate the x and y position Z = 5 times. Overall, a total of 4480 landmarks was labelled. The multiple measurements x_{i,j,k,l} and y_{i,j,k,l}, for k = 0, …, O − 1 and l = 0, …, Z − 1, of the coordinates for each landmark were collected in random order to eliminate lurking effects due to fatigue. The performances along the two axes x and y were considered separately. For each landmark i, we calculated the total variability TV_i between the landmark positions on the W different portraits, and the total precision error GRR_i due to the annotation procedure by the O operators. Then, we calculated the ratio ε_i = GRR_i / TV_i. As suggested by Montgomery (2013), referring to industrial measurement systems, the annotation procedure can be considered fully acceptable when ε ≤ 0.1. A ratio 0.1 < ε ≤ 0.3 can be considered only partially acceptable. Fig. 5 shows the ε_i ratios for the N landmarks along the two axes x and y. The results show that the x coordinate is stable for each landmark (i.e., ε ≤ 0.1). This means it is relatively easy for the operators to place the same landmark in the same x position multiple times, whereas error ε along axis y is out of the acceptable range. Most of the landmarks show a partially acceptable

  • Fig. 6. Start window of Image Mark Pro.


    error (i.e., 0.1 < ε ≤ 0.3), while landmarks 4, 9, 10, 11 and 12 are very hard to identify along the y axis.

    The low robustness of such landmarks is not surprising. We believe that for landmark 4, this is due to the difficulty of detecting a key point on smooth skin with few geometric references. A similar problem affects landmarks 10 and 12. Finally, landmarks 9 and 11 are hard to identify since they are hidden in most of the portraits because of the three-quarter view.
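    A full Gauge R&R study estimates variance components via ANOVA (Montgomery, 2013). The ratio ε = GRR / TV can be approximated with a simplified variance decomposition such as the sketch below; this is our own simplification, not the authors' exact procedure, and the array layout is hypothetical:

```python
import numpy as np

def gauge_rr_ratio(x):
    """Simplified Gauge R&R ratio for one landmark coordinate.
    x has shape (W, O, Z): W portraits, O operators, Z repeated
    annotations. Returns epsilon = GRR / TV (as standard deviations)."""
    # Repeatability: spread of repeated trials around each
    # portrait-operator mean.
    repeatability = x.var(axis=2, ddof=1).mean()
    # Reproducibility: spread between operator means for each portrait.
    reproducibility = x.mean(axis=2).var(axis=1, ddof=1).mean()
    grr = np.sqrt(repeatability + reproducibility)
    # Total variability across all annotations.
    tv = np.sqrt(x.var(ddof=1))
    return grr / tv
```

    When the portrait-to-portrait variation dominates the annotation noise, the ratio falls in the acceptable range (ε ≤ 0.1, or partially acceptable up to 0.3).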

    Fig. 7. Two different annotations performed with Image Mark Pro on the same portrait. Note that the landmark positions are not exactly the same.

    Fig. 8. 3D-to-2D transformation error E for the 14 portraits.

    5.1. Image Marker Pro

    Image Marker Pro is a novel labelling tool for manually inserting landmarks and key-points on 2D images. Although this software was developed specifically to facilitate this study, it can also be applied in totally different fields such as medicine, engineering, industrial applications and so on. It provides a very simple and intuitive graphical interface to open an image and identify interesting points.

    The start window (see Fig. 6) allows the user to browse and select the image to annotate. Then, the image can be fitted to any screen size. Annotation is very simple, being mouse-driven. Markers are placed with the left button and removed with the right. Each marker is identified by an integer number. When the annotation is over, the user can save the work by generating the annotated image and a text file which includes all the marker coordinates. Fig. 7 shows an image annotated twice with Image Marker Pro. The developed tool application and its source code can be found at the following URL: https://iplab.dmi.unict.it/bellini/.



  • Fig. 9. 3D reconstruction of the possible face of Vincenzo Bellini. (a) 3D mesh of the mask; (b) Portrait 10 of Fig. 1; (c) Overlay between (a) and (b); (d) Projection of the portrait onto the mask; (e) Detailed view of the projected portrait (black pixels were added on the mesh boundary and in vertices that cannot be viewed from the selected viewpoint); (f) Rotated view of (e); (g) Wholly coloured mask, where the colour of black pixels was inferred from neighbours. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)


    6. Experimental setup

    A set of 14 portraits of Bellini was selected. Each was labelled 5 times by 4 different operators. This labelling strategy allows us to decrease the inter- and intra-operator error for the 2D markers. Following the three steps proposed in our methodology, reference point #10 was chosen as the new origin c for the system. Then, for each portrait and landmark, the point coordinates were averaged. The 3D-to-2D model transformation described in Section 4 was done in MATLAB. Then, we computed the error for each portrait by using equation (3).

    Error E for the 14 portraits is reported in Fig. 8. Portraits 10 and 14 have the lowest values, so they may be considered the most faithful replicas of the 3D mesh. Portraits 2, 9 and 11 have the highest values, due to lower matches with the 3D reference model. Ideally, error E should reach 0 when a perfect portrait of the 3D model occurs.
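    Putting the steps of Section 4 together, the per-portrait score behind a ranking like that of Fig. 8 can be sketched end-to-end (NumPy instead of the MATLAB used by the authors; the function name is ours):

```python
import numpy as np

def portrait_score(landmarks_2d, landmarks_3d, c=10):
    """Dissimilarity E for one portrait: center both landmark sets on
    landmark c (here #10, as in the paper), fit the 3x2 least-squares
    projection, and normalize the residual by the portrait's spread."""
    P = landmarks_2d - landmarks_2d[c]
    R = landmarks_3d - landmarks_3d[c]
    M, *_ = np.linalg.lstsq(R, P, rcond=None)
    residual = np.sum((P - R @ M) ** 2)
    spread = np.sum((P - P.mean(axis=0)) ** 2)
    return residual / spread
```

    Given a dict of per-portrait landmark arrays, the most faithful portrait is simply the one with the lowest score, e.g. `min(scores, key=scores.get)`.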

    7. Discussion

    This study looked at the resemblances between portraits (the most faithful) and a death mask (the most detailed) (Fatuzzo et al., 2016) attributed to the composer Vincenzo Bellini, thereby comparing historical-artistic evaluations supported by the numerical data obtained by means of an ad hoc application based on anthropometric landmarks and their projection from 3D to 2D.

This methodology highlighted similarities and differences between the 14 portraits, in many cases confirmed also by visual comparison, supporting the reliability of the subsequent historical-artistic evaluations. It should be noted that the degree of those differences rests on a fundamental source of error: the 16 landmarks were taken from a death mask made at least 36 h after Bellini's death, and these were compared with portraits of Bellini while he was alive. Notwithstanding some discrepancy in the absolute error value, this did not influence the comparisons, and therefore the evaluation of the relative error, especially since the death mask was the only term of comparison for each portrait.

So, referring to the portraits in Fig. 1 and the histogram in Fig. 8, the following historical/artistic evaluations can be compared. The highest degree of error (ca. 0.1) is found in portrait 9, in which, unlike the others, the composer is quite young, seated, and shown as a whole bust. Here the artist tried to enhance Bellini's solemnity according to the typical iconographic norms for a dignitary, to the detriment of the portrait's accuracy. Similarly, portraits 1, 2 and 5, which also show the whole bust typical of the great in pose, reveal smaller errors (on average 0.08) compared to portrait 9.

Portraits 6 and 7, which are very similar and symmetrical, were also analysed to test the reliability of the methodology. They showed similar error levels (ca. 0.07), which were on average high. Given the mismatched timelines of Bellini and the artist of portrait 6 (Adolfo Matarelli, 1832–1887), and the artistic style of portrait 7 (a lithograph by Girolamo Bozza), both portraits were presumably copied from a hitherto unknown artwork.

Analogously, the very similar portraits 4 and 8 (even down to clothing and hairstyle) revealed comparable error values (ca. 0.08), despite the forced expression in portrait 8 and its symmetry.

Portrait 11 differs from the others because of its unusual iconography. The foreground three-quarter profile is oriented at 120° with respect to the bust, which is barely visible. Through this imbalance the artist, Giuseppe Cammararo, highlighted an introspective aspect. Presumably the psychological interpretation prevailed to the detriment of the relative resemblance, as shown by its high degree of error (ca. 0.085).

Portraits 3 and 12 reveal the same degree of error (ca. 0.065) despite depicting different ages, hairstyles and clothing. This shows how anthropometric correspondence can resolve uncertainties or differences which may emerge from facial comparisons. The same holds for portraits 1 and 5 (ca. 0.08 error), which also differ in age.

The highest compatibility was between portraits 10 and 14, as shown by the histogram in Fig. 8. From a simple visual analysis, and given the incompatible timelines of the artists (D'Agata and Di Bartolo) with Bellini, both portraits would seem to derive from another hitherto unknown artwork, even though this aspect is marginal given they are being compared to a


death mask. It may be concluded that the anthropometric distances between landmarks on the death mask prove compatible with the morphology of the eyes, nose and mouth in portraits 10 and 14, which reveal a minimal error (ca. 0.05) compared with the other portraits. With a slightly higher error level (ca. 0.06), portrait 13 shares similarities with 14 in non-sampled zones like hairstyle and clothing, suggesting a historical/artistic connection between the two artworks. Finally, the portrait with the highest matching score (i.e., 10) is mapped onto the 3D mask model to give an idea of the original 3D face we expected Vincenzo Bellini to have in light of our study, as shown in Fig. 9.

    8. Conclusion

This study aimed to find the most realistic portrait within a set of 14. Specifically, 14 paintings of Vincenzo Bellini were compared for resemblance to a 3D model of his death mask. First, the paintings were manually labelled with a set of 16 landmarks, chosen from those proposed in the literature, by using an ad hoc developed application (Image Marker Pro). To assess the quality of the annotations, a statistical analysis was performed to evaluate the precision of the operators and the robustness of the landmarks. For each portrait, a linear 3D-to-2D transformation was computed to minimise the distance between the 2D landmarks in the picture and the projected landmarks of the 3D mask. These distances were then normalised by the scale of the portrait, yielding the final dissimilarity scores with the mask. The analytical results found two portraits particularly compatible with the 3D death mask model, allowing scholars to carry out historical-artistic assessments. Finally, the portrait of most resemblance was mapped onto the death mask, thus obtaining a 3D reconstruction of the possible face of Vincenzo Bellini. Future developments of this work will concern anthropometric studies of Bellini's skull, today held in the main cathedral of Catania. Moreover, we are planning to perform an accurate 3D reconstruction of Bellini's face based on the findings of this study.

    CRediT authorship contribution statement

Gaetano Sequenzia: Conceptualization, Methodology, Validation, Formal analysis, Investigation, Resources, Writing - original draft, Writing - review & editing, Supervision, Project administration. Dario Allegra: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data curation, Writing - original draft, Writing - review & editing, Supervision. Gabriele Fatuzzo: Conceptualization, Validation, Formal analysis, Investigation, Resources, Data curation, Writing - original draft. Filippo Luigi Maria Milotta: Software, Validation, Investigation, Writing - original draft, Visualization. Filippo Stanco: Conceptualization, Validation, Formal analysis, Visualization. Salvatore Massimo Oliveri: Conceptualization, Validation, Formal analysis, Visualization.

    Acknowledgement

The Authors would like to thank Prof. Giovanni Celano for his precious help with the statistical analysis of the landmark annotations.

    References

Allegra, D., Anthimopoulos, M., Dehais, J., Lu, Y., Stanco, F., Farinella, G.M., Mougiakakou, S., 2017. A multimedia database for automatic meal assessment systems. In: International Conference on Image Analysis and Processing, Lecture Notes in Computer Science, vol. 10590. Springer.

Caple, J., Stephan, C.N., 2016. A standardized nomenclature for craniofacial and facial anthropometry. Int. J. Leg. Med. 130 (3), 863–879.

Charlier, P., Froesch, P., Huynh-Charlier, I., Fort, A., Hurel, A., Jullien, F., 2014. Use of 3D surface scanning to match facial shapes against altered exhumed remains in a context of forensic individual identification. Forensic Sci. Med. Pathol. 10 (4), 654–661.

Charlier, P., Froesch, P., Benmoussa, N., Morin, S., Augias, A., Ubelmann, Y., et al., 2019. Computer-aided facial reconstruction of "mary-magdalene" relics following hair and skull analyses. Clin. Med. Insights Ear, Nose Throat 12, 1179550618821933.

Cignoni, P., Callieri, M., Corsini, M., Dellepiane, M., Ganovelli, F., Ranzuglia, G., 2008. MeshLab: an open-source mesh processing tool. In: Sixth Eurographics Italian Chapter Conference, pp. 129–136.

Farkas, L.G., 1994. Anthropometry of the Head and Face. Raven Pr.

Fatuzzo, G., Sequenzia, G., Oliveri, S.M., 2016. Virtual anthropology and rapid prototyping: a study of Vincenzo Bellini's death masks in relation to autopsy documentation. Dig. Appl. Archaeol. Cult. Heritage 3 (4), 117–125.

Gallo, G., Allegra, D., Atani, Y.G., Milotta, F.L.M., Stanco, F., Catanuto, G., 2016. Breast shape parametrization through planar projections. In: International Conference on Advanced Concepts for Intelligent Vision Systems, Lecture Notes in Computer Science, vol. 10016. Springer, pp. 135–146.

Gupta, S., Markey, M.K., Bovik, A.C., 2010. Anthropometric 3D face recognition. Int. J. Comput. Vis. 90 (3), 331–349.

Gupta, A., Mithun, N.C., Rudolph, C., Roy-Chowdhury, A.K., 2018. Deep learning based identity verification in renaissance portraits. In: 2018 IEEE International Conference on Multimedia and Expo (ICME). IEEE, pp. 1–6.

Hayes, S., Rheinberger, N., Powley, M., Rawnsley, T., Brown, L., Brown, M., et al., 2018. Variation and likeness in ambient artistic portraiture. Perception 47 (6), 585–607.

Johnson, C.R., Hendriks, E., Berezhnoy, I.J., Brevdo, E., Hughes, S.M., Daubechies, I., et al., 2008. Image processing for artist identification. IEEE Signal Process. Mag. 25 (4), 37–48.

Li, J., Yao, L., Hendriks, E., Wang, J.Z., 2011. Rhythmic brushstrokes distinguish van Gogh from his contemporaries: findings via automated brushstroke extraction. IEEE Trans. Pattern Anal. Mach. Intell. 34 (6), 1159–1176.

Liu, F., Wang, Z., 2019. PolishNet-2d and PolishNet-3d: deep learning-based workpiece recognition. IEEE Access 7, 127042–127054.

Liu, F., Zeng, D., Zhao, Q., Liu, X., 2016. Joint face alignment and 3D face reconstruction. In: European Conference on Computer Vision, Lecture Notes in Computer Science, vol. 9909. Springer.

Liu, F., Zhao, Q., Liu, X., Zeng, D., 2020. Joint face alignment and 3D face reconstruction with application to face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 42 (3), 664–678.

Lu, Y., Allegra, D., Anthimopoulos, M., Stanco, F., Farinella, G.M., Mougiakakou, S., 2018. A multi-task learning approach for meal assessment. In: Proceedings of the Joint Workshop on Multimedia for Cooking and Eating Activities and Multimedia Assisted Dietary Management.

Marcolin, F., Vezzetti, E., 2017. Novel descriptors for geometrical 3D face analysis. Multimed. Tool. Appl. 76 (12), 13805–13834.

Milotta, F.L.M., Allegra, D., Stanco, F., Farinella, G.M., 2015. An electronic travel aid to assist blind and visually impaired people to avoid obstacles. In: International Conference on Computer Analysis of Images and Patterns, Lecture Notes in Computer Science, vol. 9257. Springer.

Montgomery, D.C., 2013. Statistical Quality Control – A Modern Introduction, seventh ed. Wiley.

Munaro, M., Lewis, C., Chambers, D., Hvass, P., Menegatti, E., 2016. RGB-D human detection and tracking for industrial environments. In: Advances in Intelligent Systems and Computing, vol. 302. Springer.

Rudolph, C., Chowdhury, A.R., Srinivasan, R., Kohl, J., 2017. FACES: faces, art, and computerized evaluation systems – a feasibility study of the application of face recognition technology to works of portrait. Artibus Hist.: Art Anthol. (75), 265–291.

Short, L.J., Khambay, B., Ayoub, A., Erolin, C., Rynn, C., Wilkinson, C., 2014. Validation of a computer modelled forensic facial reconstruction technique using CT data from live subjects: a pilot study. Forensic Sci. Int. 237, 147-e1.

Srinivasan, R., Roy-Chowdhury, A., Rudolph, C., Kohl, J., 2013a. Recognizing the royals: leveraging computerized face recognition for identifying subjects in ancient artworks. In: Proceedings of the 21st ACM International Conference on Multimedia. ACM, pp. 581–584.

Srinivasan, R., Roy-Chowdhury, A., Rudolph, C., Kohl, J., 2013b. Quantitative modeling of artist styles in Renaissance face portraiture. In: Proceedings of the 2nd International Workshop on Historical Document Imaging and Processing. ACM, pp. 94–101.

Srinivasan, R., Rudolph, C., Roy-Chowdhury, A.K., 2015. Computerized face recognition in renaissance portrait art: a quantitative measure for identifying uncertain subjects in ancient portraits. IEEE Signal Process. Mag. 32 (4), 85–94.

Stork, D.G., 2009. Computer vision and computer graphics analysis of paintings and drawings: an introduction to the literature. In: International Conference on Computer Analysis of Images and Patterns. Springer, Berlin, Heidelberg, pp. 9–24.

Stork, D., Johnson, M., 2010. Recent Developments in Computer-Aided Analysis of Lighting in Realist Fine Art. Image Processing for Art Investigations. Museum of Modern Art.

Szeliski, R., 2010. Computer Vision: Algorithms and Applications. Springer Science & Business Media.

Tang, H., Wang, Q., Chen, H., 2019. Research on 3D human pose estimation using RGBD camera. In: IEEE International Conference on Electronics Information and Emergency Communication, pp. 538–541.

Tian, Y., 2014. RGB-D sensor-based computer vision assistive technology for visually impaired persons. In: Shao, L., Han, J., Kohli, P., Zhang, Z. (Eds.), Computer Vision and Machine Learning with RGB-D Sensors. Advances in Computer Vision and Pattern Recognition. Springer.

Tyler, C.W., Smith, W.A., Stork, D.G., 2012. In search of Leonardo: computer-based facial image analysis of Renaissance artworks for identifying Leonardo as subject. In: Human Vision and Electronic Imaging XVII, vol. 8291. International Society for Optics and Photonics, p. 82911D.

Wang, H., He, Z., He, Y., Chen, D., Huang, Y., 2019. Average-face-based virtual inpainting for severely damaged statues of Dazu Rock Carvings. J. Cult. Herit. 36, 40–50.

