what if annotations were reusable: a preliminary discussion

Post on 01-Nov-2014




DESCRIPTION

Paper presented at ICWL 2009: Manouselis, N., & Vuorikari, R. (2009). What if annotations were reusable: a preliminary discussion. In M. Spaniol et al. (Eds.), Advances in Web-Based Learning - ICWL 2009, Lecture Notes in Computer Science (Vol. 5686, pp. 255–264). Berlin Heidelberg: Springer-Verlag.

TRANSCRIPT

What if annotations were reusable: a preliminary discussion (ICWL 2009)

Nikos Manouselis, Greek Research & Technology Network

Riina Vuorikari, European Schoolnet

Outline

• Problem statement

• A possible solution: a conceptual model and pilot

• Further discussion points

Problem scenario 1

The same learning resource, whose metadata has been federated, is tagged on different portals: LeMill, LRE, and Koolielu

Problem scenario 2


The same movie is found and rated in three different application contexts: a movie recommender system, an e-commerce site, and an educational portal

Problem description

• Implicit and explicit feedback is hard to acquire (e.g. LRE)
  – Ratio search:bookmark 5.5:1
  – Ratio play:bookmark 3.2:1
  – Ratio play:rate 2.7:1

• This feedback is needed to better guide users (e.g. recommendations, social navigation)

• Social information can make users more efficient! (Vuorikari and Koper, 2009)

An idea!

A structured way to represent different types of user feedback from different contexts could prove of particular value.

Hypothetical questions?

• Can we represent and store user feedback in a structured, interoperable and reusable format?

• Is it safe to assume that the user feedback can be used in another application domain, e.g. a new recommender system in a different application context?

Capturing user feedback

To be reusable, the user feedback should:
• be in a structured and interoperable format
• reflect the annotation scheme (e.g. tag, rating, multi-rating)
• incorporate some information about the context in which it has been collected
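As an illustration of the three requirements above, a minimal structured feedback record might look as follows. This is a hypothetical sketch, not the schema the paper proposes; all field names are assumptions:

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical record for reusable user feedback (illustrative field names).
@dataclass
class FeedbackRecord:
    user_id: str          # who annotated
    item_id: str          # the annotated resource
    annotation_type: str  # the annotation scheme: "tag", "rating", "multi-rating"
    value: object         # e.g. a tag string or a numeric rating
    context: dict = field(default_factory=dict)  # where the feedback was collected

# The same movie annotated in two different application contexts:
r1 = FeedbackRecord("u42", "movie:123", "rating", 4,
                    {"portal": "movie-recommender", "scale": "1-5"})
r2 = FeedbackRecord("u42", "movie:123", "tag", "documentary",
                    {"portal": "educational-portal"})

# Serialising to JSON gives a structured, interoperable representation
# that another application could consume.
print(json.dumps(asdict(r1)))
```

The record deliberately carries the annotation scheme and collection context alongside the value, so a consumer in a new context can decide how (or whether) to reuse it.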

Capturing user feedback

Different frameworks exist:
• Contextualized Attention Metadata (CAM) Framework
• Attention Profiling Markup Language (APML)
• User Labor Markup Language (ULML)
• Microformats, e.g. Google support

Vuorikari, R. & Berendt, B. (2009). Study on contexts in tracking usage and attention metadata in multilingual technology enhanced learning.

Capturing user feedback (e.g. CAM)

Capturing user feedback (example of microformats)

Pilot in Organic.Edunet

• A registry of annotation schemes for learning resources

• To store annotation schemes, ratings and tags from different environments on learning resources

• To be used to create better services for users (e.g. recommendation, social navigation)
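One practical step such a registry would need before reusing ratings across environments is mapping different rating scales onto a common range. A minimal sketch, using assumed scales rather than anything described for the Organic.Edunet pilot:

```python
# Hypothetical illustration: ratings collected under different annotation
# schemes (different scales) are mapped onto a common [0, 1] range so a
# service in a new application context can compare them directly.

def normalize_rating(value: float, scale_min: float, scale_max: float) -> float:
    """Map a rating from its original scale onto [0, 1]."""
    return (value - scale_min) / (scale_max - scale_min)

# A 4-out-of-5 from one portal and an 8-out-of-10 from another
# become directly comparable:
print(normalize_rating(4, 1, 5))   # 0.75
print(normalize_rating(8, 1, 10))  # ≈ 0.778
```

This linear rescaling is only one possible choice; the paper leaves open how feedback from one context should actually be interpreted in another.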


Conceptual work in Aspect

• IMS LODE Information for Learning Object Exchange (ILOX)
  – different facets:
    • for LOM
    • for folksonomies
    • annotations
    • ratings
    • ...

Lots of issues...

• How to identify the same item in many different contexts (no persistent IDs)?

• Is it necessary to encode the information about the user who annotated?
  – Tags are interesting BECAUSE of the (user, item, tag) triple

• Do users really find an annotation from another context useful?

• ...

Thanks for your attention!

Comments? Questions?

http://aspect-project.org/
http://lreforschools.eun.org

www.organic-edunet.eu
