University of Pisa and Scuola Superiore Sant’Anna
Corso di Laurea Magistrale in Informatica e Networking Master Program in Computer Science and Networking
MASTER THESIS PROPOSALS
This document contains proposals for Master theses in Computer Science and Networking. Most of them describe research or application areas rather than fixed topics; some topics may have been chosen in the past but could still be available. Students are invited to contact the supervisors directly.
Paolo Ferragina
Energy-aware computing
In a recent study we have proposed a new methodology to estimate the energy consumption of an
algorithm, and we have successfully validated it on several simple algorithms and architectures. This
is a key issue nowadays because “the cost of power and cooling is likely to exceed that of hardware”
[Google, 2007]. The main goal of this master thesis is therefore to deepen the study of this
methodology by analyzing more sophisticated algorithms that process large datasets and thus work
on many memory levels. This study will also investigate the impact that solid-state disks (SSDs) and GPUs
can have on the design of those algorithms and on their energy profiles.
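As a toy illustration of the kind of cost model involved, the sketch below estimates an algorithm's energy as a weighted sum of its accesses to each memory level. The per-access energy figures are made-up placeholders, not values from the cited study:

```python
# Illustrative energy cost per access, in nanojoules (assumed values,
# not measured figures from the methodology described above).
ENERGY_NJ = {"L1": 0.5, "L2": 2.0, "RAM": 20.0, "SSD": 1000.0}

def energy_estimate(access_counts):
    """Total energy (nJ) given a dict of per-level access counts."""
    return sum(ENERGY_NJ[level] * n for level, n in access_counts.items())

# Example: an algorithm that mostly hits L1 but streams data from RAM.
profile = {"L1": 1_000_000, "L2": 50_000, "RAM": 10_000, "SSD": 100}
print(energy_estimate(profile))  # 900000.0 nJ
```

On such a model, the dominant term immediately shows which memory level drives the energy profile of a given algorithm.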
Fabrizio Di Pasquale, Marco Vanneschi
FPGA-based Implementation of Pulse Coding Techniques for Distributed Optical Fiber Sensors
This thesis will be mainly focused on the software implementation of pulse coding techniques, specifically
Simplex coding, in order to improve the performance of distributed optical fiber sensors.
Optical fiber sensors are attracting a significant interest for their many fields of applications, ranging
from civil and geo-technical engineering to the oil & gas industry, from energy management to
railway, highway and structural health monitoring. The use of optical pulse coding techniques allows
one to extend the sensing range of the sensors while keeping meter or sub-meter scale spatial
resolutions.
In order to boost the performance of pulse coding in real applications, coding and decoding
algorithms must be carefully designed and implemented, optimizing the required resource allocation and
minimizing processing overheads, which could otherwise cancel out the provided signal-to-noise
ratio enhancement.
To this end this thesis will address the study, development and implementation of coding algorithms,
for instance Simplex coding, on an FPGA-based architecture aimed at laser triggering.
During the design stage, accurate algorithm optimization and performance simulation steps are
critical points due to the limited available resources in the FPGA together with the high expected
processing rate for the coded pulse generation (exceeding 100 MHz).
Fabrizio Di Pasquale, Marco Vanneschi
Multi-Core Approach for Real-Time Decoding in Optical Fiber Sensor Systems
This thesis will deal with the development, testing and subsequent implementation of a decoding
algorithm on a multi-core architecture to provide real-time detection capabilities for an
optical fiber sensing system.
Optical fiber sensors are attracting a significant interest for their many fields of applications, ranging
from civil and geo-technical engineering to the oil & gas industry, from energy management to
railway, highway and structural health monitoring. The use of optical pulse coding techniques allows
one to extend the sensing range of the sensors while keeping meter or sub-meter scale spatial
resolutions.
In optical fiber sensor systems employing optical pulse coding, the decoding process applied to
coded traces constitutes a fundamental step in the reconstruction of the physical fiber parameters,
and is critical in providing real-time sensing capabilities. The ideal decoding process typically
involves application of different linear-algebra operations (such as matrix inversions, scalar products
and so forth) on a large data stream resulting from analog-to-digital sampling within stringent
temporal constraints (typically a fraction of a second) and with limited available computational
resources. Presently, due to the amount of involved calculations in decoding, the large processing
time (> 20 sec) hinders an acceptable sensing performance. In this context, the thesis will be aimed at
developing and testing a decoding algorithm (based for instance on Simplex codes) under a multi-
core processor architecture, enabling msec-order data throughput and real-time sensing capabilities.
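As a minimal illustration of the linear-algebra core of decoding, the sketch below recovers single-pulse traces from Simplex-coded acquisitions via y = S x, x = S⁻¹ y. The order-3 S-matrix and the synthetic traces are illustrative; a real sensor uses much larger codes and noisy, averaged acquisitions:

```python
import numpy as np

# Order-3 Simplex matrix: cyclic shifts of the pattern [1, 1, 0].
S = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)

def decode(coded_traces):
    """Recover single-pulse traces from Simplex-coded acquisitions."""
    return np.linalg.solve(S, coded_traces)

# Synthetic single-pulse traces (rows: code index, cols: fiber position).
x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = S @ x                          # what the instrument actually measures
print(np.allclose(decode(y), x))   # True
```

In practice the inverse of S would be precomputed once, so the per-acquisition cost reduces to a matrix-vector product over the sampled data stream.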
Marco Vanneschi, Piero Castoldi
On-chip optical interconnection structures for multi/manycore architectures
The rapid development of multi/manycore technologies offers the opportunity for highly parallel
architectures implemented on a single chip. While the first, low-parallelism multicore products have
been based on simple interconnection structures (single bus, very simple crossbar), the emerging
highly parallel architectures will require complex, limited-degree interconnection networks. This
thesis studies this trend according to the general theory of interconnection structures for parallel
machines, and investigates some solutions in terms of performance, cost, fault-tolerance, and run-
time support to shared-memory and/or message passing programming mechanisms.
Marco Vanneschi, Stefano Giordano
Computational and Cost Model for High-performance Pervasive Computing
High-performance Pervasive Computing is a new paradigm aiming to exploit the potentials of
heterogeneous, ubiquitous and mobile computing and communication infrastructures in order to
achieve high-performance especially for real-time, critical applications, e.g. emergency management,
homeland security.
We consider distributed platforms characterized by general-purpose processing nodes (e.g. servers,
clouds), but also specialized/embedded processing nodes (e.g. wearable devices, sensors, smart-
phones), and different heterogeneous wired and/or wireless networks. All these resources, even those
until recently considered limited in processing power and capacity, are used to actively take part in
the distributed high-performance computation.
The computational model provides mechanisms for the explicit definition of the application control
logic, which decides and manages adaptive reconfigurations in response to various events. An
associated cost model is defined to formalize and carry out the most suitable control strategies for
achieving the desired QoS level (e.g. response time, energy saving, precision of computed results,
and other execution metrics and their proper combinations).
The applications feature the full integration of communication and processing aspects, in such a way
as to overcome the classical “black-box” or “best-effort” approach in which the two layers are
characterized by specific and independent control actions concerning only their local information.
The integrated approach to autonomic applications is an unavoidable feature in order to target the
desired QoS levels both for high-performance distributed processing and information flows.
This thesis investigates and experiments with a version of the computational and cost model for the most
critical mechanisms on real-life high-performance pervasive applications.
Marco Vanneschi
High-performance implementation of signal processing applications for multicore architectures - in collaboration with SELEX SISTEMI INTEGRATI, Rome
In the context of research programmes and University of Pisa – Selex SI collaborations, this thesis
deals with the efficient and scalable implementations of the most critical parts of signal processing
applications, especially in radar environments. Parallel versions of numerical and non-numerical
algorithms, for which high-performance and real-time requirements are a must, are investigated,
implemented according to the parallel programming models of the research group, and evaluated.
Complex real-life applications, consisting of the integration of several algorithmic modules, are
studied and evaluated.
Marco Vanneschi
Theses on high-performance systems and tools for advanced financial processing at GBG Lab
in collaboration with LIST SpA – GBG Lab, Pisa
Data Stream Processing and Complex Event Processing are general computational paradigms
characterizing advanced financial processing applications (brokerage, market-making, algo-trading,
financial markets products). From a modeling viewpoint, these paradigms consider stream-based
computations in which complex patterns of data and events are processed on-line and “on the fly”
(i.e. information is not stored in persistent data structures; instead, it flows in continuous
streams). New areas of research are stimulated by this paradigm, including algorithms, parallel
models and fault tolerance. At GBG Lab we are working on new high-performance systems and tools
for Data Stream Processing and Complex Event Processing, exploiting all the current and future
technologies for multi-/many-core components.
A series of theses are available at GBG Lab on these topics.
Marco Vanneschi, Marco Danelutto
Parallel programming model and cost model for shared-memory multicore architectures
One of the main trends in multi/manycore technology consists of highly-parallel on-chip architectures
with shared memory and complex memory hierarchies. The efficient exploitation of such
architectures requires the ability to define an accurate cost model for the architecture, in particular
for the caching hierarchies. Based on this cost model, a high-level parallel programming model
should be able to effectively deal with the shared memory paradigm at low overhead. This thesis
investigates these problems starting from the group experiences on high-level parallel programming,
and taking into account existing shared memory multicore products and their trends.
Marco Vanneschi, Marco Danelutto
New programming model for highly-parallel architectures including GPU subsystems
Though GPUs have recently received considerable attention in high-performance applications, the
programmability of systems including GPU SIMD coprocessors and SISD/MIMD architectures is
largely an open research and technological problem. Typical issues are the high-level view of the
different architectural styles, and the load balancing and communication optimization problems. This
thesis investigates and experiments with solutions for the parallel programming models, starting from the
group experiences on high-level parallel programming, and taking into account existing GPU
products and their trends.
Marco Danelutto
Autonomic management of structured parallel computations
Behavioural skeletons were introduced some years ago to study the possibilities offered by co-design
of parallelism exploitation patterns and autonomic management of non-functional concerns in
component frameworks. The main goal of this thesis is to investigate the feasibility of using
behavioural skeletons in the field of high-performance network processing. In particular, we want to
investigate reactive and proactive policies suitable for handling typical hot spots in network
management related to peaks in the (monitored/managed) network traffic. The thesis may
involve work on the behavioural skeleton prototypes currently available.
Marco Danelutto
Macro data flow computing models for multi/many cores
Data flow has been considered a viable computing model to support efficient execution of highly
parallel computations since the 1980s, although research on data flow architectures did not survive the
impressive development achieved in commodity processors. We propose to investigate the
possibility of implementing macro data flow interpreters targeting currently available multicores via
FastFlow, the fine-grain parallel library developed at the Computer Science Departments of Pisa and Torino.
The candidate will develop simple prototypes supporting macro data flow computations and will
then evaluate alternative implementation mechanisms and policies. “Embedded” fault tolerance will
be considered a primary non-functional concern. The possibility of using different “task parallel”
libraries, such as OpenMP or TBB, will be considered in case FastFlow fails to support macro data
flow for some reason.
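The firing rule at the heart of a macro data flow interpreter can be sketched as follows. This is a toy Python illustration, not FastFlow: each macro instruction fires as soon as all of its input tokens are available, and its result becomes a token for its successors:

```python
from concurrent.futures import ThreadPoolExecutor

def run_graph(graph, inputs):
    """graph: {name: (fn, [input names])}; inputs: {name: value}.
    Returns the dict of all computed tokens."""
    tokens = dict(inputs)
    pending = dict(graph)
    with ThreadPoolExecutor() as pool:
        while pending:
            # Fireable instructions: all input tokens are present.
            ready = [n for n, (_, deps) in pending.items()
                     if all(d in tokens for d in deps)]
            if not ready:
                raise ValueError("cyclic or underspecified graph")
            futures = {n: pool.submit(pending[n][0],
                                      *[tokens[d] for d in pending[n][1]])
                       for n in ready}
            for n, fut in futures.items():
                tokens[n] = fut.result()   # result becomes a new token
                del pending[n]
    return tokens

# Example: (a + b) * (a - b); the two macro instructions run in parallel.
g = {"sum":  (lambda a, b: a + b, ["a", "b"]),
     "diff": (lambda a, b: a - b, ["a", "b"]),
     "prod": (lambda s, d: s * d, ["sum", "diff"])}
print(run_graph(g, {"a": 5, "b": 3})["prod"])  # 16
```

A real interpreter would replace the thread pool with FastFlow farm/pipeline nodes and stream tokens instead of materializing the whole graph.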
Nicola Tonellotto
Multi-criteria Job Scheduling for Cloud Computing Platforms
This thesis aims at designing and evaluating a multi-criteria job scheduling policy for scheduling a
continuous stream of batch jobs on large-scale cloud computing platforms. The scheduling policy
will aim to meet a set of Quality of Service (QoS) requirements requested by both the
submitted jobs and the installations (providers). Typical QoS requirements are the time by which the user
wants to receive the results and the optimal exploitation of hardware and software resources.
Nicola Tonellotto
Landmark Recognition via Parallel Clustering
The goal of this thesis is to design and implement solutions to recognize the most visited places (i.e.
landmarks) of a city, given a collection of photos shot by tourists visiting that city. This can
be achieved by clustering those photos, i.e. grouping similar ones together, on the basis of their
visual descriptors. The expected outcome of this thesis is parallel software (e.g.
exploiting map-reduce) that translates a set of user photos into a sequence of visited landmarks.
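The map-reduce flavour of the task can be sketched as follows. This is a toy illustration: the hypothetical 2-D descriptors and the coarse quantization step stand in for real visual similarity and a real clustering algorithm:

```python
from collections import defaultdict

def map_photo(photo):
    """Map a photo to a coarse bucket of its (assumed) 2-D descriptor."""
    x, y = photo["descriptor"]
    return (round(x), round(y))         # coarse quantization = bucket key

def find_landmarks(photos, min_size=2):
    buckets = defaultdict(list)
    for p in photos:                    # "map" phase + shuffle by key
        buckets[map_photo(p)].append(p["id"])
    # "reduce" phase: keep buckets large enough to count as landmarks.
    return {k: v for k, v in buckets.items() if len(v) >= min_size}

photos = [{"id": 1, "descriptor": (0.1, 0.2)},
          {"id": 2, "descriptor": (0.2, 0.1)},
          {"id": 3, "descriptor": (5.0, 5.1)}]
print(find_landmarks(photos))  # {(0, 0): [1, 2]}
```

In a genuine map-reduce deployment the map and reduce steps would be distributed over workers, and the bucketing would be replaced by clustering on high-dimensional visual descriptors.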
Nicola Tonellotto, Marco Danelutto
Managing Terabytes using Algorithmic Skeletons
The aim of this thesis is to design and evaluate algorithms to index huge amounts of data, exploiting
technologies beyond the MapReduce approach. This study will be devoted to analysing existing
algorithms and implementing new solutions exploiting cloud- and skeleton-based technologies
implemented in Java.
Nicola Tonellotto, Stefano Giordano
Performance Analysis of Reliable Multicast Delivery in Mobile Clouds
The aim of this thesis is to evaluate the performance of reliable multicast file delivery in next-
generation mobile cloud scenarios. This study will be devoted to the design and implementation of a
simulator for reliable multicast protocols (e.g. NORM) in C++, taking into account the disruptive
paradigm of land mobile computer networks.
Nicola Tonellotto, Stefano Giordano
Traffic Scheduler for Private/Public Communications in Clouds
The aim of this thesis is to design a traffic scheduler for flow control of multicast and unicast
delivery of data, in batch and stream mode. This study will include the analysis of existing solutions
for the dissemination of private data in distributed environments, the design of a solution supporting
multicast delivery of data in these environments and a proof-of-concept implementation and
evaluation.
Linda Pagli
Distributed maintenance of a Spanning Tree
Low cost and high reliability of a network are conflicting parameters. Network survivability can be strengthened by increasing connectivity, but this also increases the network cost. The minimum-cost network is a spanning tree of the network graph, but such a network will not survive even a single edge failure; hence, to enhance survivability, some redundancy must be introduced. The problem of spanning tree maintenance in the presence of faults in a distributed environment, using only local knowledge, will be considered. Starting from known 1-fault-tolerant spanning tree solutions, we want to develop efficient distributed protocols able to maintain the spanning tree in the presence of k−1 consecutive failures, with minimal extra storage.
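For intuition, the centralized version of the single-failure case can be sketched as follows: when a tree edge fails, find a replacement ("swap") edge of the graph that reconnects the two tree components. The thesis targets distributed protocols with local knowledge only; the names and the tiny graph here are illustrative:

```python
def component(adj, start, dead_edge):
    """Nodes reachable from `start` in the tree, avoiding `dead_edge`."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if {u, v} != set(dead_edge) and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def swap_edge(graph_edges, tree_adj, dead_edge):
    """Return a non-tree edge crossing the cut created by the failure."""
    side = component(tree_adj, dead_edge[0], dead_edge)
    for u, v in graph_edges:
        if (u in side) != (v in side):   # edge crosses the two components
            return (u, v)
    return None                          # no redundancy available here

tree = {0: [1], 1: [0, 2], 2: [1]}       # spanning tree: path 0-1-2
extra = [(0, 2)]                         # redundant graph edge
print(swap_edge(extra, tree, (1, 2)))    # (0, 2)
```

A distributed protocol must reach the same decision with each node seeing only its incident edges, which is precisely where the difficulty (and the extra-storage trade-off) lies.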
Giuseppe Attardi
Audio Graffiti
Social services that allow people to share information are gaining great popularity. Adding
localization to these services allows providing access to information of interest which is dependent
on the user location. The goal of this master thesis is to participate in the design and development of
Audio Graffiti, a service for sharing audio messages created on mobile devices which are virtually
placed next to monuments in the city of Florence. The project is funded by the Comune di Firenze
and will exploit a wireless infrastructure being built by the municipality. Several aspects will be
addressed, including the development of an iPhone app to access the service, novel localization
techniques, a distributed system for data collection and transmission, and anti-spam techniques.
Piero Castoldi, Isabella Cerutti, Nicola Andriolli
Low-energy architectures and scheduling in optical interconnects/switches
While energy efficiency has been one of the main design requirements for battery-run mobile computers, non-portable devices and their interconnections have so far been exempted from this requirement. Indeed, the ultimate goals of interconnection networks have always been low latency and high throughput. Unfortunately, electronic interconnection networks drain a considerable amount of power at peak utilization, while a large amount of power is wasted at low utilization levels. Optical communications have already demonstrated their capability for ultra-high-rate transmission; this thesis targets the design of a scalable optical interconnection architecture able to achieve both high throughput at peak utilization and power consumption proportional to the actual utilization.
Piero Castoldi, Isabella Cerutti, Marco Di Natale
Applications of telecommunication and information technologies to the electric distribution grid (smart grid) in domestic or vehicular settings
The next-generation electricity grid, known as the “smart grid” or “intelligent grid”, is expected to address the major shortcomings of the existing grid. In essence, the smart grid needs to provide the electricity company with full visibility and pervasive control over its assets and services. To allow this, communication and data management play an important role in balancing and managing energy production, consumption and storage. This thesis will study architectures and techniques for creating a smart energy grid, choosing suitable scenarios such as (i) domestic or industrial customers or (ii) electric vehicular scenarios.
Piero Castoldi, Isabella Cerutti
FPGA-based implementation of energy-saving control protocols for passive optical networks
In the literature it has been shown that allowing Optical Network Units (ONUs) of Passive Optical Networks (PONs) to dynamically switch to stand-by when idle (i.e., sleep mode) significantly reduces the energy consumption of the network customer edge. The aim of this thesis is to implement a testbed to experimentally assess the aforementioned benefits. The testbed will involve implementing, after the ONU Clock and Data Recovery circuit, the protocol to control the ONU and its sleep mode in a Field Programmable Gate Array (FPGA). Requirements for this thesis are good programming skills and an understanding of combinatorial networks and network protocols.
Laura Ricci, Massimo Coppola
Consistency Models for Distributed Multiuser Virtual Environments
Massively Multiuser Environments (MME) are virtual worlds where multiple participants share the
same virtual environment and interact with it and among themselves via avatars, which are virtual
representations of users. Even though most commercial MMEs (World of Warcraft, Second Life, ...) exploit a
client-server architecture, the investigation of a distributed MME architecture integrating P2P and
clouds is currently an active research area. The replication of the virtual world over a set of nodes
requires the definition of a proper consistency model. Strong consistency models adopted in
client/server MME are not suitable for distributed MME because of their implementation overhead.
This thesis will investigate existing optimistic consistency models proposed in the literature in order
to propose a proper consistency model for distributed MME. The benefits of the proposed solution
will be assessed through the development of a prototype.
Laura Ricci, Massimo Coppola
Locality Aware Service Mapping for Distributed Virtual Environments
Several P2P applications require the dynamic mapping of a set of services to peers so that constraints
on hardware capabilities, security and stability of peers are satisfied. For instance, in a
Distributed Virtual Environment (DVE) where the workload is paired with the regions of the virtual
world, a dynamic election of the 'best peer' for the management of a region is required. This thesis
will investigate the problem of defining a locality-aware mapping of the DVE's regions to the
peers of a heterogeneous network. The main aim of the locality-aware mapping is the minimization
of the latencies between the peer providing the service and the set of peers accessing it. The
candidate will exploit Network Coordinate Systems (NCSs) as a tool to predict network latencies in a
distributed fashion. The thesis will also evaluate different distributed paradigms (Voronoi networks,
gossip P2P networks) to select an optimal mapping.
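A simplified Vivaldi-style update, one possible NCS building block, can be sketched as follows. The damping factor and the 2-D space are arbitrary illustrative choices, and real Vivaldi also maintains per-node error estimates to weight each update:

```python
import math

def vivaldi_step(xi, xj, rtt, delta=0.25):
    """Return xi moved by a fraction of the RTT prediction error w.r.t. xj."""
    dx = [a - b for a, b in zip(xi, xj)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist == 0.0:                      # coincident points: no direction
        return list(xi)
    err = rtt - dist                     # > 0 means "too close": push away
    unit = [d / dist for d in dx]
    return [a + delta * err * u for a, u in zip(xi, unit)]

# Two nodes with a 10 ms RTT, initially 2 units apart, converge toward it.
xi, xj = [0.0, 0.0], [2.0, 0.0]
for _ in range(50):
    xi = vivaldi_step(xi, xj, rtt=10.0)
print(round(math.dist(xi, xj), 2))       # 10.0
```

Once coordinates have converged, any peer can predict its latency to any other peer from their coordinates alone, without a direct measurement, which is what makes locality-aware mapping feasible at scale.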
Massimo Coppola
QoS control and resource selection in Federated Clouds
QoS control over virtual resources is one of the features and active research issues of today's Cloud
Computing platforms. Federated Clouds, where resources from different Cloud providers can be
merged into a single pool, are an emerging evolution. The thesis will investigate algorithms for
matching application constraints and needs when allocating resources from federated Clouds, taking
into account the computation, communication, trustworthiness and security features provided by different
Cloud providers.
Massimo Coppola
Software Virtual Machines integration in Cloud platforms
Most of the emerging Cloud Computing platforms leverage processor-level virtual machines,
providing the parallel programmer with the abstraction of many freshly-installed isolated systems. The
thesis will survey existing approaches which provide richer functionalities and API to the programmer,
and then explore the integration of state-of-the-art process-level virtual machines (CLI - mono) with
the XtreemOS distributed operating system.
Massimo Coppola
Distributed JIT compilation for multicore cpus.
Starting from current experiments in building a CLI bytecode virtual machine which is able to
dynamically offload the task of JIT compilation to other linked virtual machines, the thesis will
explore the different settings where the technique may lead to better efficiency, higher cache
utilization and/or smaller memory footprint of the overall system, with main focus on homogeneous
and heterogeneous multicore CPUs.
Stefano Giordano, Vittorio Miori
Learning from experience to anticipate inhabitants’ needs in an invisible way
Embedded intelligence provides a vision of the Internet of Things oriented to the Ambient Intelligence
paradigm, which can be defined as the ability to collect and analyze the digital traces left by people
when they interact with the environment and with intelligent devices, in order to acquire knowledge
about life and human behavior.
The thesis work should apply these concepts within the domestic environment, where devices
(sensors and actuators) are already "things" with their own intelligence.
Following the principles of semantic information processing (Web 3.0), an application based on
machine learning techniques (e.g. data mining) will be implemented. The result will be
an intelligent universal ecosystem that learns from the behavior and habits of people and is
able to adapt itself to the environmental context and to anticipate user needs.
The developed system could be aimed at improving comfort inside homes, and it could try to
anticipate or prevent potential health hazards (especially for the elderly, disabled or sick).
Theses proposed by Paolo Pagano, Matteo Petracca, and Marco Di Natale
All theses (apart from thesis proposal C.1) will be supervised by Dr. Paolo Pagano, Dr. Matteo Petracca, and
prof. Marco Di Natale.
Thesis proposal C.1 will be supervised by Dr. Paolo Pagano, Dr. Matteo Petracca, and prof. Ernesto
Ciaramella.
N.B. The hardware set to be used in the following proposals is described in the following on-line resources:
http://rtn.sssup.it/index.php/hardware/seed-eye
http://www.ipermob.org/files/documents/OO3/OO3-3-5v1.0.pdf
A) Computer vision in embedded systems
WSNs are usually deployed as a set of embedded devices, equipped with
constrained resources (like computing capabilities and resident memory)
and interconnected by a low-rate, unreliable wireless network.
Multimedia sensing is a challenging perspective to give “Eyes and Ears” to sensor devices and create
added value for distributed applications in ad-hoc networks.
Thesis A.1)
One of the basic services provided by a Camera Sensor Network is notifying end-users or
other machines about the occurrence of (pre-defined) events. A simple application consists in
detecting the appearance of an entity in the foreground of an image frame, having extracted the
background from a comparison with previous frames.
This thesis consists in the design and implementation of a background modeling algorithm for
embedded systems; its performance will be rated in a real testbed against state-of-the-art techniques
(from the literature and previous work on the same subject) with respect to processing time, sensitivity,
and probability of raising false alarms.
For reference:
L. Tessens, M. Morbee, W. Philips, R. Kleihorst, H. Aghajan, "Efficient approximate foreground
detection for low-resource devices", in Proc. Third ACM/IEEE International Conference on
Distributed Smart Cameras (ICDSC 2009), pp. 1-8, 2009.
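A minimal baseline for the background model is an exponential moving average with per-pixel thresholding, sketched below. This is an illustrative NumPy version; an embedded implementation would likely use fixed-point arithmetic and in-place updates:

```python
import numpy as np

def update(background, frame, alpha=0.05, threshold=30.0):
    """Return (new background, boolean foreground mask).

    A pixel is foreground when it deviates from the learned background
    by more than `threshold`; the background slowly tracks the scene.
    """
    mask = np.abs(frame - background) > threshold
    background = (1 - alpha) * background + alpha * frame
    return background, mask

bg = np.zeros((4, 4))                 # learned background (here: all black)
frame = np.zeros((4, 4))
frame[1:3, 1:3] = 255.0               # a bright object enters the scene
bg, fg = update(bg, frame)
print(int(fg.sum()))                  # 4 foreground pixels detected
```

The `alpha` and `threshold` values are arbitrary here; tuning them against processing time, sensitivity and false-alarm rate is exactly the evaluation the thesis calls for.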
Because of the severe constraints on computing resources, every application based on multimedia streaming poses
a set of conceptual and implementation problems related to information compression and algorithm
optimization.
Thesis A.2)
This thesis consists in the design and implementation of a compression algorithm for embedded
systems; a comparative analysis will be performed on a real IEEE 802.15.4 WSN testbed of state-of-
the-art techniques such as JPEG, JPEG-LS and JPEG 2000, considering the local processing time (at the sender
and the receiver node) and the network bandwidth utilization.
Thesis A.3)
The dogma of signal processing maintains that a signal must be sampled at a rate at least twice its
highest frequency in order to be represented without error. However, in practice, we often compress
the data soon after sensing, trading off signal representation complexity (bits) for some error
(consider JPEG image compression in digital cameras, for example). Clearly, this is wasteful of
valuable sensing resources. Over the past few years, a new theory of "compressive sensing" has
begun to emerge, in which the signal is sampled (and simultaneously compressed) at a greatly
reduced rate (see on-line resources at: http://dsp.rice.edu/cs).
This thesis consists in the design and implementation of compressive sampling techniques for
optimizing the total energy dissipated in a distributed multimedia application for WSN.
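The greedy reconstruction side of compressive sensing can be sketched with Orthogonal Matching Pursuit, a standard algorithm in this area. The tiny hand-picked 6×8 dictionary below keeps the example deterministic; real systems use random measurement matrices with many fewer rows than columns:

```python
import numpy as np

def omp(Phi, y, k):
    """Recover a k-sparse x from underdetermined measurements y = Phi @ x."""
    residual, support = y.astype(float), []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the dictionary column most correlated with the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Re-fit the selected columns by least squares, update residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# 6 measurements, 8 unknowns: identity atoms plus two extra unit columns.
extra = np.array([[1, 1], [1, -1], [1, 1],
                  [1, -1], [1, 1], [1, -1]]) / np.sqrt(6)
Phi = np.hstack([np.eye(6), extra])
x_true = np.zeros(8)
x_true[[2, 4]] = [1.0, -2.0]          # 2-sparse ground truth
x_hat = omp(Phi, Phi @ x_true, k=2)
print(np.allclose(x_hat, x_true))     # True
```

The energy question the thesis raises is then where to spend cycles: the sampling side becomes almost free, while reconstruction cost like the loop above can be pushed to a mains-powered sink node.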
To extend the local storage resources and reduce the acquisition overhead of a sensing node, it is possible to
couple a microcontroller-based platform with an FPGA.
Thesis A.4)
This thesis consists in the design and implementation of a background modeling algorithm for
embedded systems by configuring an FPGA. A performance comparison (in terms of time overhead
and feasibility of advanced logic) with a “pure-C” implementation will be used to rate the validity of
this approach.
A very challenging application is that of tracking a moving object by means of a set of cameras hosting a
distributed collaborative application.
Thesis A.5)
This thesis consists in the design and implementation of a calibration technique aimed at aligning the
sensing peripherals after an imprecise installation phase.
For reference:
C. Stauffer and K. Tieu, "Automated multi-camera planar tracking correspondence modeling,"
in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition
(CVPR 2003), vol. 1, pp. I-259–I-266, 18-20 June 2003.
doi: 10.1109/CVPR.2003.1211362.
B) Localization techniques
In the “Smart Ambients” domain a prominent application is that of locating and tracking moving objects.
Because the GNSS signal is unavailable in indoor scenarios, usual trends consider the deployment of a
WSN to set up a mesh of geo-referenced anchors and propose a diversified set of algorithms (usually
classified as “range-based” and “range-free”) as localization techniques.
Thesis B.1)
This thesis consists in the design and implementation of a sensor fusion service aimed at estimating a
statistical variable from its analytical dependence on simple sensor data. The experimental validation
activity will be coupled with a simulation study to isolate the error sources limiting the service
accuracy.
A competing and complementary approach is that of relying on inertial sensors to estimate the motion and
quantify the displacement of the mobile entity.
Thesis B.2)
This thesis consists in the modeling of the motion of a mobile entity by real-time processing of the
data retrieved by inertial sensors (like accelerometers and gyroscopes) integrated as electronic
components in embedded devices. The candidate will be asked to design and implement a
synchronization service to re-align the estimated position to the ground truth retrieved by a device of
different technology (e.g. RFID or GNSS signal transponder).
C) Networking
Vehicular Ad-hoc NETworks (VANET) are a special instance of nomadic networks where the mobile
entities (i.e. cars) are expected to exchange information (vehicle to vehicle -- V2V) via the wireless channel.
VANET technologies are considered as the basis of Intelligent Transport Systems; yet another challenge is
that of interconnecting vehicular equipment with the road-side network (vehicle to infrastructure -- V2I).
Thesis C.1)
Although Radio Frequency is the preferred technology for enabling V2I and V2V communication
(see for instance the M/453 mandate by the European Commission to ETSI and CEN), alternative
communication means are being promoted by the scientific community. A notable example is
offered by Visible Light Communication.
At the lab, the implementation of the functionality of ITS stations (standardized by ETSI) in tiny
devices (like WSN motes) communicating in compliance with the IEEE 802.15.4 standard is in progress.
This thesis will extend the physical layer of the ITS stack to permit the encoding of application-layer
messages using VLC. The experimental validation activity will focus on the metrics relevant
for safety-critical applications (i.e. overhead in sending and receiving).
Wireless Multimedia Sensor Networks (WMSNs) can be considered the next generation of sensor networks, in
which multimedia capabilities, both video and audio, are enabled on tiny mote devices. From a network
point of view, the main challenge is QoS, to be reached through simple or
complex data protection techniques (cross-layer approaches).
Thesis C.2)
Speech communications in WMSNs have recently been proposed to support emergency situations. In
such a context the network is required to change its functionality at runtime in order to support the
new service. Moreover, a network reorganization is required to support QoS.
The thesis will define and implement a bandwidth allocation protocol targeted at speech
communications in WMSNs. The main goal of the protocol will be to find a trade-off
between QoS and the required transmission bandwidth by using a speech coder with multiple bitrates
(e.g., G.726 at 16, 24, 32, 40 kbit/s).
Thesis C.3)
We propose a thesis in which the candidate is expected to port the GSM AMR speech coder to a
microcontroller architecture. A comprehensive performance evaluation will be carried out in a real WMSN
scenario considering multiple data protection techniques.
The Internet of Things (IoT) vision has recently drawn the attention of the research community thanks to the
wide diffusion of the Internet to new, miniaturized, and low-cost smart objects. The main idea in the IoT
concept is to interconnect different kinds of common objects, each one addressable for exchanging data
through a single world-wide network. In this regard a significant and promising trend is given by the
integration of the Wireless Sensor Networks (WSNs) with the Internet.
Thesis C.4)
WSNs consist of low-cost autonomous sensor devices which interact with each other in a wireless
distributed system. Each sensor has very limited battery capacity, limited processor capability and
limited storage capacity. The multicast paradigm reduces the communication costs for applications that
send the same data to multiple recipients: instead of sending via multiple unicasts, multicasting
minimizes the link bandwidth consumption, sender and router processing, and delivery delay.
The thesis will design and implement a multicast protocol for Wireless Sensor Networks. The main
goal is to define native multicast support for 6LoWPAN networks (IPv6 over Low-power Wireless
Personal Area Networks). The experimental validation activity will be performed in a real scenario.
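The saving that motivates multicast can be illustrated with a back-of-the-envelope count of link transmissions on a small routing tree. This is a toy illustration; a real 6LoWPAN protocol must also cope with lossy links, duty cycling and node mobility:

```python
def hops_to_root(parent, node):
    """Number of tree links on the path from `node` up to the root."""
    h = 0
    while parent[node] is not None:
        node = parent[node]
        h += 1
    return h

def unicast_cost(parent, recipients):
    """Unicast: the packet traverses the root-to-node path per recipient."""
    return sum(hops_to_root(parent, r) for r in recipients)

def multicast_cost(parent, recipients):
    """Multicast: each tree link carries the packet at most once."""
    links = set()
    for r in recipients:
        node = r
        while parent[node] is not None:
            links.add((parent[node], node))
            node = parent[node]
    return len(links)

# A small routing tree rooted at 0:  0 -> 1 -> {2, 3}
parent = {0: None, 1: 0, 2: 1, 3: 1}
print(unicast_cost(parent, [2, 3]))    # 4 link transmissions
print(multicast_cost(parent, [2, 3]))  # 3 link transmissions
```

The gap widens with deeper trees and more recipients, which is why native multicast support matters for battery-constrained 6LoWPAN nodes.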
D) Abstraction of Wireless Sensor Networks
As applications become more and more interconnected and interdependent, the number of objects, users and
devices tends to increase. This poses the problem of the scalability of the communication and object
management algorithms, and increases the complexity of administration. Ubiquitous Computing is a vision
of the near future, in which an increasing number of devices embedded in various physical objects will be
participating in a global information network.
To set up a common ground of abstraction, a middleware layer should hide the heterogeneity of the network
and the complexity of services and applications.
A prominent objective is that of implementing a code execution service at the node level, by providing a
virtual machine capable of executing scripts or bytecode.
Thesis D.1)
This thesis consists in the design and implementation of a network service for the rapid prototyping
of functions and primitives.
Leveraging the previous work on the same subject (i.e. the PyMite implementation on the
microcontroller board), the candidate is asked to apply tools and techniques to a real-world case
study (e.g. distributed vision algorithms).
Thesis D.2)
When dealing with a heterogeneous set of resources (notably sensor devices at the collection layer,
storage, and computation units), the usual trend is to set up a virtual architecture capable of
providing “high-level” services by interoperating the active devices integrated therein.
This thesis will consider the implementation of a set of collection layer services in a “grid” middleware (e.g.
the gLite platform developed within the European EGEE project and largely used by CERN and other public
research institutions).