LiveMail: Personalized Avatars for Mobile Entertainment
Miran Mosmondor, Ericsson Nikola Tesla, Krapinska 45, p.p. 93, HR-10 002 Zagreb
Tomislav Kosutic, KATE-KOM, Drvinje 109, HR-10 000 Zagreb
Igor S. Pandzic, Faculty of Electrical Engineering and Computing, Zagreb University, Unska 3, HR-10 000 Zagreb
LiveMail is a prototype system that allows mobile subscribers to communicate using personalized 3D face models created from images taken by their phone cameras. The user takes a snapshot of someone's face - a friend, a famous person, themselves, even a pet - using the mobile phone's camera. After a quick manipulation on the phone, a 3D model of that face is created and can be animated simply by typing in some text. Speech and the appropriate facial animation are created automatically by speech synthesis. Animations can be sent to others as real 3D animated messages or as short videos in MMS, and can be used as fun messages, greeting cards, etc. The system is based on a client/server communication model. The clients are mobile devices or web clients, so messages can be created, sent and received on the mobile phone or on a web page. The client has a user interface that allows the user to input a facial image and place a simple mask on it to mark the main features. The client then sends this data to a server that builds a personalized face model. The client also provides an interface that lets the user request the creation of animated messages using speech synthesis. Several versions of the client are planned: Symbian, midlet-based, web-based, WAP-based, etc. The server is responsible for sending messages adjusted to the capabilities of the receiving platform. The server, the Symbian client, the midlet-based client and the web client have been implemented as prototypes. We present the system architecture and the experience gained building LiveMail.
1. Introduction

Mobility and the Internet are the two most dynamic forces in communications technology today. In parallel with the fast worldwide growth of mobile subscriptions, the fixed Internet and its service offerings have grown at a rate far exceeding all expectations. The number of people connected to the Internet continues to increase, and GPRS and WCDMA mobile networks are enabling connectivity virtually everywhere, at any time, with any device. With advances in computer and networking technologies comes the challenge, for both developers and service providers, of offering new multimedia applications and end-user services in heterogeneous environments.
The goal of the project was to explore the potential of existing face animation technology for innovative and attractive services for the mobile market, exploiting in particular the advantages of technologies like MMS and GPRS. The new service allows customers to take pictures of people using the mobile phone camera and obtain a personalized 3D face model of the person in the picture through a simple manipulation on the mobile phone. In this paper we present the architecture of the LiveMail system. We describe how unique personalized virtual characters are created with our face adaptation procedure. We also describe clients implemented on different platforms, most interestingly on the mobile platform, since 3D graphics on mobile platforms is still in its early stages. Various network and face animation techniques were connected into one complex system, and we present the main performance issues of such a system. The system also uses the MPEG-4 FBA standard, which could ultimately enable video communication at extremely low
MobiSys ’05: The Third International Conference on Mobile Systems, Applications, and ServicesUSENIX Association 15
bandwidths, and the work presented in this paper brings us one step further in that direction.
The paper is organized as follows. In the next section we give a brief introduction to the standards and technologies used. We then present the overall system architecture, continuing with more details on the server and client implementations in the following sections. Finally, a system performance evaluation is given in Section 6.
2.1. 3D modelling

Creating animated human faces using computer graphics techniques has been a popular research topic for the last few decades, and such synthetic faces, or virtual humans, have recently reached a broader public through movies, computer games, and the world wide web. Current and future uses include a range of applications, such as human-computer interfaces, avatars, video communication, and virtual guides, salesmen, actors, and newsreaders.
There are various techniques for producing personalized 3D face models. One is to use a 3D modeling tool such as 3D Studio Max or Maya. However, manual construction of 3D models using such tools is often expensive and time-consuming, and does not always yield the desired model. Another way is to use specialized 3D scanners, which can produce face models of very high quality, but using them in a mobile environment is not practical. There are also methods that use two cameras placed at a certain angle and image-processing algorithms to create the 3D model. Other methods use three perspective images taken from different angles to adjust deformable contours on a generic head model.
Our approach to creating animatable personalized face models is based on adaptation of an existing generic face model. However, in order to achieve simplicity on a camera-equipped mobile device, our adaptation method uses a single picture as input.
2.2. Face animation

The created personalized face model can be animated using speech synthesis or audio analysis (lip synchronization). Our face animation system is based on the MPEG-4 standard on Face and Body Animation (FBA). This standard specifies a set of
Facial Animation Parameters (FAPs) used to control the animation of a face model. The FAPs are based on the study of minimal facial actions and are closely related to muscle actions. They represent a complete set of basic facial actions and therefore allow the representation of most natural facial expressions. The lips are particularly well defined, and it is possible to precisely specify the inner and outer lip contours. Exaggerated values permit the definition of actions that are normally not possible for humans but may be desirable for cartoon-like characters.
All the parameters involving translational movement are expressed in terms of Facial Animation Parameter Units (FAPU) (Figure 1). These units are defined to allow the interpretation of FAPs on any facial model in a consistent way, producing reasonable results in terms of expression and speech pronunciation. They correspond to fractions of distances between key facial features (e.g. eye distance), and the fractions used are chosen to allow sufficient precision.
Figure 1. A face model in its neutral state and defined FAP units (FAPU) (ISO/IEC IS 14496-2 Visual, 1999)
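As a sketch of how FAPU work in practice, the snippet below derives the units from key facial distances (divided by 1024, following the MPEG-4 FBA convention) and converts an integer FAP value into a model-space displacement. The function names and the concrete distances are hypothetical; only the FAPU/1024 convention comes from the standard.

```python
# Sketch: converting MPEG-4 FAP values to model-space displacements via FAPU.
# The 1024 divisor follows the MPEG-4 FBA convention; the concrete
# feature-point distances used here are made-up example values.

def fapu_from_face(es0, ens0, mns0, mw0, irisd0):
    """Derive Facial Animation Parameter Units from key facial distances
    measured on the model in its neutral state."""
    return {
        "ES":    es0 / 1024.0,     # eye separation unit
        "ENS":   ens0 / 1024.0,    # eye-nose separation unit
        "MNS":   mns0 / 1024.0,    # mouth-nose separation unit
        "MW":    mw0 / 1024.0,     # mouth width unit
        "IRISD": irisd0 / 1024.0,  # iris diameter unit
    }

def fap_displacement(fap_value, fapu, unit):
    """A FAP encodes displacement as an integer multiple of its FAPU, so
    the same FAP stream animates differently sized faces consistently."""
    return fap_value * fapu[unit]

fapu = fapu_from_face(es0=0.20, ens0=0.12, mns0=0.06, mw0=0.10, irisd0=0.02)
# e.g. a jaw-opening FAP expressed in MNS units:
dy = fap_displacement(512, fapu, "MNS")  # 512 * (0.06 / 1024) = 0.03
```

Because the displacement is relative to the receiving model's own FAPU, a larger face simply yields a proportionally larger displacement for the same FAP value.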
The FAP set contains two high-level FAPs, for selecting facial expressions and visemes, and 66 low-level FAPs. The low-level FAPs are expressed as movements of feature points in the face, and MPEG-4 defines 84 such points (Figure 2). The feature points not affected by FAPs are used to control the static shape of the face. The viseme parameter allows visemes to be rendered on the face without expressing them in terms of other parameters, or enhances the result of other parameters, ensuring correct rendering of visemes. A viseme is the visual correlate of a phoneme: it specifies the shape of the mouth and tongue for a minimal sound unit of speech. MPEG-4 defines a standard set of 14 static visemes that are clearly distinguishable. For example, the phonemes /p/, /b/ and /m/ correspond to one viseme. Importantly for visualization, the mouth shape of a speaking human is influenced not only by the current phoneme, but also by the previous and the next one. In MPEG-4, transitions from one viseme to the next are
defined by blending only two visemes with a weighting factor. The expression parameter allows the definition of high-level facial expressions.
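The two-viseme transition above can be sketched as a simple linear blend: with only one weighting factor, coarticulation between the previous and current phoneme is approximated without storing per-pair transition shapes. The vertex data below is a toy example, not taken from the standard.

```python
# Sketch of an MPEG-4-style viseme transition: the mouth shape is a blend
# of exactly two visemes controlled by a single weighting factor.
# The two "visemes" here are hypothetical per-vertex offsets.

def blend_visemes(viseme1, viseme2, blend):
    """blend = 0 gives viseme1, blend = 1 gives viseme2.
    Each viseme is a list of (dx, dy, dz) mouth-region vertex offsets."""
    return [tuple(a + blend * (b - a) for a, b in zip(v1, v2))
            for v1, v2 in zip(viseme1, viseme2)]

# two toy mouth shapes: /m/ (closed lips) and /a/ (open jaw)
vis_m = [(0.0, 0.00, 0.0), (0.0, 0.00, 0.0)]
vis_a = [(0.0, -0.02, 0.0), (0.0, 0.02, 0.0)]

halfway = blend_visemes(vis_m, vis_a, 0.5)   # mid-transition mouth shape
```

Sweeping the blend factor from 0 to 1 over the duration of the phoneme transition produces the mouth movement.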
The FAPs can be efficiently compressed and included in a Face and Body Animation (FBA) bitstream for low-bitrate storage or transmission. An FBA bitstream can be decoded and interpreted by any MPEG-4-compliant face animation system, and a synthetic, animated face can then be visualized.
2.3. 3D graphics on mobile devices

An important aspect of our face animation system is 3D graphics on mobile phones. The last few years have seen dramatic improvements in how much computation and communication power can be packed into such small devices. Despite these improvements, mobile terminals are still clearly less capable than desktop computers in many ways: they run at a lower speed, the displays are smaller and have lower resolution, there is less memory for running and storing programs, and the battery lasts only a short time.
Figure 2. Face feature points (ISO/IEC IS 14496-2 Visual, 1999)
Rendering 3D graphics on handheld devices is still a very complex task because of the computational power required to achieve usable performance. With the introduction of color displays and more powerful processors, mobile phones are becoming capable of rendering 3D graphics at interactive frame rates. The first attempts to implement 3D graphics accelerators in mobile phones have already been made: Mitsubishi Electric Corp. announced its first 3D graphics LSI core for mobile phones, called Z3D, in March 2003, and other manufacturers like Fuetrek, Sanshin Electric, Imagination Technologies and ATI published their 3D hardware solutions for mobile devices a few months later.
Besides hardware solutions, another important factor for 3D graphics on mobile devices is the availability of open-standard, well-performing programming interfaces (APIs) supported by handset manufacturers, operators and developers alike. These are OpenGL ES (OpenGL for Embedded Systems) and the Java Mobile 3D Graphics API (also known as JSR 184), both of which have emerged in the last several months.
3. System architecture

The basic functions of the LiveMail service are the simple creation of a personalized face model and the transmission and display of such a virtual character on various platforms. The creation of a personalized face model consists of identifying the characteristic face lines in the taken picture and adjusting the generic face model to those lines. The most important face lines are the size and position of the face, eyes, nose and mouth. Speech animation of the character is created from input text. More details on each module's implementation are given in the following sections, while this section describes the system architecture.

Personalization of virtual characters and creation of animated messages with synchronized speech and lips is a time-consuming process that requires more computational power than the phone provides. Our system architecture is defined by this limitation. The capabilities of mobile devices have improved in the last few years, but they are still clearly not up to such demanding computation. Thus, our system is not implemented on one platform; rather, it is divided into several independent modules based on a client/server communication model. The basic task of the server is to perform computationally expensive processes such as personalizing the virtual character and creating animated messages with synchronized speech and lips. The client side is responsible for displaying the animated message of the personalized virtual character and for handling user requests for the creation of new personalized virtual characters and their animated messages through a proper user interface (Figure 3). In this way LiveMail clients are implemented on
various platforms: Symbian-based mobile devices, mobile devices with Java support (Java 2 Micro Edition), and WAP and web interfaces.
Figure 3. Simplified use-case scenario
The client application consists of a user interface through which users can create new personalized virtual characters, send animated messages, preview created messages and view received animated messages. When creating a personalized character, the interface allows the user to input a facial image and place a simple mask on it to mark the main features. After the main face features are selected, the client sends this data to the server, which builds a personalized face model (Figure 4).
Figure 4. Personalized 3D face model creation
When creating an animated message, the user selects a virtual character, inputs the text and addresses the receiver. The client application then sends a request for creation of the animated message to the server, which synthesizes the speech and creates matching facial animation using the text-to-speech framework. The animated message is then adjusted to the capabilities of the receiving platform. For mobile devices that cannot display even medium-quality 3D animation, the animated message is converted to a short video or animated GIF and sent by MMS (Figure 5).

Figure 5. Creating an animated message through the client user interface
4. The server

The server is a combination of a light HTTP server and an application server (Figure 6). The HTTP server provides clients with the user interface and receives requests, while the application server processes client requests: it creates new personalized 3D face models and animated messages. The user interface is dynamically generated using XSL transformations from an XML database each time a client makes a request. The database holds information about user accounts, their 3D face models and the contents of animated messages. The LiveMail server is a multithreaded application: the light HTTP server can receive many client requests simultaneously and pass them to the application server for processing. The application server consists of several modules assigned to specific tasks, such as 3D face model personalization and animated message creation. During 3D face model personalization and animated message creation there are resources that cannot run in a multithreaded environment and therefore must be shared among modules. Microsoft's text-to-speech engine, which is used for speech synthesis and as the basis of 3D model animation, is an example of such a shared resource. To use it, all application server modules need to be synchronized.
Figure 6. Simplified server architecture
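The shared-resource synchronization described above can be sketched as a lock around the single engine instance: per-request work runs concurrently across threads, but synthesis calls are serialized. The `TTSEngine` class below is a stand-in for illustration, not the actual Microsoft TTS API.

```python
# Sketch of the shared-resource pattern: one text-to-speech engine
# instance serialized behind a lock, while other per-request work runs
# in parallel threads. TTSEngine is a hypothetical stand-in.
import threading

class TTSEngine:
    def synthesize(self, text):
        return b"wav-bytes-for:" + text.encode()

tts = TTSEngine()
tts_lock = threading.Lock()   # only one thread may use the engine at a time
results = {}

def handle_request(user, text):
    # per-request work (parsing, database lookups) can run in parallel...
    with tts_lock:            # ...but speech synthesis is serialized
        audio = tts.synthesize(text)
    results[user] = audio

threads = [threading.Thread(target=handle_request, args=(u, t))
           for u, t in [("alice", "hello"), ("bob", "hi")]]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

With this structure, adding more worker threads speeds up everything except the synthesis step itself, which matches the serialization behavior reported in the performance evaluation below.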
The server's primary task is virtual character adaptation. The client's request for new virtual character creation holds the input parameters for the adaptation process: the picture of the person whose 3D model is to be created and the characteristic facial points in that picture (the face mask). The server also has a generic 3D face model that is used in the adaptation. Based on these inputs, the server deforms and textures the generic model so that it becomes similar to the face in the picture, and thus produces a new 3D model ready for animation. The model is stored on the server in VRML format for later use.
The virtual character adaptation algorithm starts by mapping the characteristic facial points from the facial picture to the corresponding characteristic points of the generic 3D face model. This initial mapping is followed by three main stages of adaptation.
The first stage is normalization. The highest and lowest points of the generic 3D face model are adjusted according to the characteristic facial points from the facial picture. We translate the generic 3D face model to the highest point from the facial picture (point 11.4 in Figure 2). The size of the translated model does not match the size of the facial picture mask, so we have to scale it: we calculate the ratio between the generic model and the facial picture mask and move every point of the generic model by the corresponding ratio. We distinguish vertical and horizontal scaling. In horizontal scaling we consider the horizontal distance of every point from the highest point of the facial picture, relative to the face's axis of symmetry. Vertical scaling is simpler, as no axis of symmetry is involved.
The second stage is processing, in which we distinguish texture processing and model processing. From the N existing points we create a net of triangles that covers the whole normalized space; it is important that the net is uniform beyond the face. Based on the known points and triangles, the interpolator algorithm can determine the coordinates of any new point in that space using interpolation in barycentric coordinates. We forward the characteristic points received in the client's request to the interpolator, which (based on the triangle net) determines the locations of the other points needed to create the new model.
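The barycentric interpolation step can be sketched as follows: given a triangle of control points whose displacements are known, any point inside it receives the barycentric-weighted combination of the three corner displacements. This is a minimal 2D illustration of the technique, not the server's actual implementation.

```python
# Minimal sketch of interpolation in barycentric coordinates, as used in
# the adaptation stage. The triangle and displacements are toy data.

def barycentric(p, a, b, c):
    """Barycentric coordinates (w1, w2, w3) of 2D point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    w2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return w1, w2, 1.0 - w1 - w2

def interpolate(p, tri, displacements):
    """Displace point p by the weighted sum of the corner displacements."""
    w1, w2, w3 = barycentric(p, *tri)
    d1, d2, d3 = displacements
    return tuple(w1 * u + w2 * v + w3 * w for u, v, w in zip(d1, d2, d3))

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
disp = ((0.0, 0.0), (2.0, 0.0), (0.0, 2.0))   # known corner displacements
res = interpolate((0.25, 0.25), tri, disp)    # displacement for an interior point
```

Repeating this for every generic-model vertex, using the triangle that contains it, moves the whole mesh consistently with the sparse set of marked feature points.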
The third stage of the algorithm is renormalization. It is practically the same algorithm as in the first stage, except that we map the model from the normalized space back to the space it occupied before the algorithm started.
5. The client

The animation itself is created on the server, which provides users with transparent access, meaning that various client types can be used. After the personalized face model with the proper animation is created, it is sent back to the client, where it can be previewed. Multi-platform delivery, and the capability to implement support for virtually any platform, is one of the contributions of this system. Our strategy is to use a bare-minimum face animation player core that can easily be ported to any platform supporting 3D graphics.
The face animation player is essentially an MPEG-4 FBA decoder. When the MPEG-4 Facial Animation Parameters (FAPs) are decoded, the player needs to apply them to a face model. Our choice for the facial animation method is interpolation from key positions, essentially the same as the morph target approach widely used in computer animation and as the MPEG-4 Face Animation Tables (FAT) approach. FATs define how a model is spatially deformed as a function of the amplitude of the FAPs. Interpolation was the earliest approach to facial animation and has been used extensively. We prefer it to procedural approaches and to more complex muscle-based models because it is very simple to implement, and therefore easy to port to various platforms; it is modest in CPU time consumption; and the use of key positions (morph targets) is close to the methodology used by computer animators and can be easily adopted.
The player works as follows. Each FAP (both low- and high-level) is defined as a key position of the face, or morph target. Each morph target is described by the relative position of each vertex with respect to its position in the neutral face, as well as the relative rotation and translation of each transform node in the scene graph of the face. The morph target is defined for a particular value of the FAP; the positions of vertices and transforms for other values of the FAP are interpolated from the neutral face and the morph target. This can easily be extended to several morph targets per FAP with a piecewise linear interpolation function, but current implementations show simple linear interpolation to be sufficient in all situations encountered so far. The vertex and transform movements of the low-level FAPs are added together to produce the final facial animation frames. In the case of high-level FAPs, the movements are blended by averaging rather than added together.
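The vertex part of this scheme can be sketched in a few lines: each FAP has a morph target defined at a reference amplitude, vertex offsets are scaled linearly for other amplitudes, and the offsets of all active low-level FAPs are summed. The FAP names, reference amplitudes and vertex data below are hypothetical.

```python
# Sketch of the player's key-position (morph target) interpolation for
# low-level FAPs: linear scaling from the neutral face, offsets summed.
# Transform-node rotation/translation handling is omitted for brevity.

def apply_faps(neutral, morph_targets, fap_values):
    """neutral: list of (x, y, z) vertices of the neutral face.
    morph_targets: {fap_name: (ref_amplitude, [per-vertex (dx, dy, dz)])}
    fap_values: {fap_name: current amplitude}"""
    frame = [list(v) for v in neutral]
    for fap, amp in fap_values.items():
        ref_amp, offsets = morph_targets[fap]
        t = amp / ref_amp                 # linear interpolation factor
        for vert, off in zip(frame, offsets):
            for i in range(3):
                vert[i] += t * off[i]     # low-level FAP offsets add together
    return [tuple(v) for v in frame]

neutral = [(0.0, 0.0, 0.0)]               # a single toy vertex
targets = {"open_jaw":    (1024, [(0.0, -0.04, 0.0)]),
           "stretch_lip": (1024, [(0.01, 0.0, 0.0)])}
frame = apply_faps(neutral, targets, {"open_jaw": 512, "stretch_lip": 512})
```

Piecewise linear interpolation with several morph targets per FAP would replace the single `t * off[i]` scaling with a lookup of the two bracketing key positions; as noted above, the simple linear form has been sufficient so far.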
Due to its simplicity and low requirements, the faceanimation player is easy to implement on a variety of
platforms using various programming languages. Additionally, for clients that are not powerful enough to render 3D animations, the animations can be pre-rendered on the server and sent to the clients as MMS messages containing short videos or animated GIF images. In the next sections we describe the following client implementations: the Symbian client, the Java applet-based web client, and a generic mobile phone client built around J2ME, MMS and WAP. The first two are full 3D clients, while the last supports only pre-rendered messages.
5.1. Symbian client

Our mobile client is implemented on the Symbian platform as a standalone C++ application. After taking a photo with the camera, the user adjusts a mask outlining the key parts of the face (Figure 7). The mask is used to define 26 feature points on the face that are then sent, together with the picture, to the server for face adaptation, as described previously.
Figure 7. Symbian user interface and mask adjustment
After the personalized face model is created, it is sent back to the user, where it can be previewed (Figure 8). The face animation player on the Symbian platform is based on DieselEngine, a collection of C++ libraries that help build applications with 3D content on various devices. DieselEngine has a low-level API (Application Programming Interface) similar to Microsoft DirectX, so higher-level modules had to be implemented. The most important is a VRML parser used to convert the 3D animatable face model from VRML format to the Diesel3D scene format (DSC). Other modules enable interaction with the face model, such as navigation, picking and centering.
Figure 8. 3D face model preview on Symbian platform
5.2. Java applet-based web client

A parallel web service is offered as an interface to the system that gives users access to all the functionality: creating new personalized virtual characters and composing messages that can be sent as e-mail. The system keeps a database of all the created characters and sent messages for each user, identified by e-mail address and password.
As another way of creating new personalized virtual characters, a Java applet is used with the same functional interface described earlier. Compared to the mobile input modules, the Java applet allows the feature points to be defined more precisely, since the mask is adjusted at the higher resolution of a desktop computer display. Also, the cost of data transfer, significantly cheaper than on mobile devices, makes it possible to use higher-resolution portrait pictures, producing better-looking characters.
Figure 9. Mask adjustment on Java applet-based web client
New messages can be made with any previously created character. The generated animation is stored on the server and a URL to the LiveMail is sent to the specified e-mail address; the HTML page at that URL is produced dynamically. The player used to view LiveMails is a Java applet based on the Shout3D
rendering engine. Since it is a pure Java implementation and requires no plug-ins, LiveMail messages can be viewed on any computer with a Java Virtual Machine installed.
5.3. Generic mobile client built around J2ME, MMS and WAP
The face animation player is easy to implement on a variety of platforms using various programming languages. However, for clients that are not powerful enough to render 3D animations we offer an alternative: the animations can be pre-rendered on the server and sent to the clients as MMS messages containing short videos or animated GIF images.
Figure 10. Composing LiveMail message on web client
A LiveMail WAP service is also provided for mobile phones. It provides the functionality of creating and sending LiveMail with previously generated personalized characters.
The user interface through which the personalized face model is created is also implemented on the Java 2 Micro Edition (J2ME) platform. However, this platform alone does not define access to native multimedia services such as camera manipulation and picture access, so an additional J2ME package, the Mobile Media API (MMAPI), was used.
6. System performance evaluation

In this section a system performance evaluation is presented. First of all, the size of a create-new-personalized-virtual-character request depends greatly on the size of the face image; the other request data (user name, virtual character name, face mask parameters) is always approximately 1 KB. With an average picture of 120x160 pixels, the request size is about 15 KB. A request to create an animated message carries little data and is always smaller than 1 KB.
6.1. Server

The described adaptation algorithm is very time-consuming. Our measurements show that 98% of the entire process is spent on the adaptation algorithm; the remaining 2% is spent on model storage and preview images for the client user interface. With a generic 3D face model of approximately 200 polygons, the adaptation algorithm takes 7.16 seconds on an AMD Athlon XP 2800+ (Figure 11). Each adaptation started by a client's request runs in a separate thread, so multiprocessor systems can handle simultaneous client requests much faster.
Figure 11. Time to create a new virtual character with respect to the number of simultaneous client connections (1, 2, 5, 10), measured on an AMD Athlon XP 2800+ with 512 MB DDR RAM (PCMark04 score 3125)
The server's second most demanding task is the creation of animated messages. Its execution time depends on the length of the message text (Figure 12). Also, the Microsoft text-to-speech engine cannot process simultaneous text-to-speech conversions, so client requests are handled one at a time, while all others wait.
Figure 12. Time to create an animated message with respect to the number of words in the message (5-25 words, average word length 6 characters), measured on an AMD Athlon XP 2800+ with 512 MB DDR RAM (PCMark04 score 3125)
6.2. Client

On the client side, the bandwidth required for displaying an animated message on the Symbian client is approximately 0.3 kbit/s
for the face animation, which is considerably less than sending raw video in any format. The reason is that the FBA bitstream is built on model-based coding, as described previously. By analogy with voice, it uses the same principle as a GSM codec: only parameters are transmitted, and the decoding side feeds them to a model that reproduces the sound, or in our case the face animation.
In addition to the face animation, displaying an animated message on the Symbian client requires 13 kbit/s for speech and an approximately 50 KB download for an average face model. The web client requires an additional 150 KB download for the applet.
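A quick back-of-the-envelope check of these figures: the sustained stream is the animation plus speech rates, while the face model is a one-time download whose transfer time depends on the link. The 40 kbit/s GPRS rate assumed below is illustrative, not measured in this work.

```python
# Back-of-the-envelope check of the bandwidth figures quoted above:
# FBA animation (~0.3 kbit/s) plus speech (~13 kbit/s) streamed, and a
# one-time ~50 KB face model download. The GPRS rate is an assumption.

anim_kbps = 0.3
speech_kbps = 13.0
stream_kbps = anim_kbps + speech_kbps        # sustained stream rate

model_bytes = 50 * 1024                      # average face model
gprs_kbps = 40.0                             # assumed usable GPRS downlink
model_download_s = model_bytes * 8 / (gprs_kbps * 1000)

print(f"stream: {stream_kbps:.1f} kbit/s, "
      f"model download: {model_download_s:.2f} s")
```

So the continuous stream stays around 13.3 kbit/s, orders of magnitude below raw video, and even over a modest GPRS link the one-time model download takes on the order of ten seconds.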
The Symbian client implementation was tested on a Sony Ericsson P800 mobile device with various static face models. Interactive frame rates were achieved with models containing up to several hundred polygons. The generic face model used in our LiveMail system has approximately two hundred polygons; its performance is shown in Figure 13. After the animation starts (in this case at the 21st second), the frame rate drops to an average of 10 frames per second (FPS), which is still well above what is considered the lower boundary for interactive frame rates on mobile devices.
Figure 13. Generic face model performance on the SE P800 mobile device (frame rate over a 40-second interval)
The web client showed performance of 24-60 FPS with textured and non-textured face models of up to 3700 polygons on a PIII/1000. This performance is satisfactory for today's mobile PC user connecting to the Internet with, for example, GPRS. More details on this implementation and its performance can be found in the literature.
7. Conclusions

In this paper we have introduced a system that can be used as a pure entertainment application. Users deliver personalized, attractive content using simple manipulations on their phones. They create their own, fully personalized content and send it to other people. By engaging in a creative process - taking a picture, producing a 3D face from it, composing the message - users have more fun, and the ways they use the application are limited only by their imagination.
LiveMail is expected to appeal to a younger customer base and to promote services like GPRS and MMS, directly boosting revenues from these services by increasing their usage. Due to the highly visual and innovative nature of the application, there is considerable marketing potential. The 3D faces can be delivered through various marketing channels, including the web and TV, and used for branding purposes.
Besides the entertainment aspects, there has been a great deal of research work. Barriers were crossed with the face animation player on the mobile platform and with 3D face model personalization on the server. We have connected various network technologies and face animation techniques in one complex system and presented the experience gained building such a system. The system also uses the MPEG-4 FBA standard, which could ultimately enable video communication at extremely low bandwidths. Although the emphasis was on entertainment, the work presented in this paper could bring us one step closer to that goal.
8. Acknowledgments

We would like to acknowledge Visage Technologies AB, Linköping, Sweden, for providing the underlying face animation technology used for the system described in this paper.
References

F.I. Parke, K. Waters, Computer Facial Animation, A.K. Peters Ltd, 1996, ISBN 1-56881-014-8.

Igor S. Pandzic, Robert Forchheimer (editors), MPEG-4 Facial Animation - The Standard, Implementations and Applications, John Wiley & Sons, 2002, ISBN 0-470-84465-5.

M. Escher, I. S. Pandzic, N. Magnenat-Thalmann, Facial Deformations for MPEG-4, Proc. Computer Animation 98, Philadelphia, USA, pp. 138-145, IEEE Computer Society Press, 1998.

F. Lavagetto, R. Pockaj, The Facial Animation Engine: towards a high-level interface for the design of MPEG-4 compliant animated faces, IEEE Trans. on Circuits and Systems for Video Technology, Vol. 9, No. 2, March 1999.

ISO/IEC 14496 - MPEG-4 International Standard, Moving Picture Experts Group, http://www.chiariglione.org/mpeg/standards/mpeg-4/mpeg-4.htm

3D Arts, DieselEngine SDK, http://www.3darts.fi/mobile/de.htm

T. Fuchs, J. Haber, H.-P. Seidel, MIMIC - A Language for Specifying Facial Animations, Proceedings of WSCG 2004, 2-6 Feb 2004, pp. 71-78.

Z. Liu, Z. Zhang, C. Jacobs, M. Cohen, Rapid Modeling of Animated Faces From Video, Proceedings of the Third International Conference on Visual Computing (Visual 2000), pp. 58-67, September 2000, Mexico City.

H. Gupta, A. Roy-Chowdhury, R. Chellappa, Contour-based 3D Face Modeling From A Monocular Video, British Machine Vision Conference, 2004.

C. Pelachaud, N. Badler, M. Steedman, Generating Facial Expressions for Speech, Cognitive Science, 20(1), pp. 1-46, 1996.

Igor S. Pandzic, Jörgen Ahlberg, Mariusz Wzorek, Piotr Rudol, Miran Mosmondor, Faces Everywhere: Towards Ubiquitous Production and Delivery of Face Animation, Proceedings of the 2nd International Conference on Mobile and Ubiquitous Multimedia, Norrköping, Sweden, 2003.

Igor S. Pandzic, Life on the Web, Software Focus Journal, John Wiley & Sons, 2001, 2(2):52-59.

P. Hong, Z. Wen, T. S. Huang, Real-time Speech-driven Face Animation, in I. S. Pandzic, R. Forchheimer, editors, MPEG-4 Facial Animation - The Standard, Implementation and Applications, John Wiley & Sons Ltd, 2002.

W. Lee, N. Magnenat-Thalmann, Fast Head Modeling for Animation, Image and Vision Computing, Volume 18, Number 4, pp. 355-364, Elsevier, March 2000.

Igor S. Pandzic, Facial Motion Cloning, Graphical Models journal.