USING TRANSPARENCY IN VISUALIZATION
by
Billy Chi-kai Cheung
B.Comm., University of Alberta, 2001
THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE
In the School of Interactive Arts and Technology
Faculty of Communication, Art and Technology
© Billy Chi-kai Cheung 2011
SIMON FRASER UNIVERSITY
Fall 2011
All rights reserved. However, in accordance with the Copyright Act of Canada,
this work may be reproduced, without authorization, under the conditions for Fair Dealing. Therefore, limited reproduction of this work for the purposes of private
study, research, criticism, review and news reporting is likely to be in accordance with the law, particularly if cited appropriately.
APPROVAL
Name: Billy Chi-kai Cheung
Degree: Master of Science
Title of Thesis: Using Transparency in Visualization
Examining Committee:
Chair: Halil Erhan, Assistant Professor, School of Interactive Arts and Technology
Lyn Bartram, Senior Supervisor; Assistant Professor, School of Interactive Arts and Technology
Maureen Stone, Supervisor; Adjunct Professor, School of Interactive Arts and Technology
Tom Calvert, External Examiner; Professor Emeritus, School of Interactive Arts and Technology
Date Defended/Approved: ______________________________________
Declaration of Partial Copyright Licence
The author, whose copyright is declared on the title page of this work, has granted to Simon Fraser University the right to lend this thesis, project or extended essay to users of the Simon Fraser University Library, and to make partial or single copies only for such users or in response to a request from the library of any other university, or other educational institution, on its own behalf or for one of its users.
The author has further granted permission to Simon Fraser University to keep or make a digital copy for use in its circulating collection (currently available to the public at the “Institutional Repository” link of the SFU Library website <www.lib.sfu.ca> at: <http://ir.lib.sfu.ca/handle/1892/112>) and, without changing the content, to translate the thesis/project or extended essays, if technically possible, to any medium or format for the purpose of preservation of the digital work.
The author has further agreed that permission for multiple copying of this work for scholarly purposes may be granted by either the author or the Dean of Graduate Studies.
It is understood that copying or publication of this work for financial gain shall not be allowed without the author’s written permission.
Permission for public performance, or limited permission for private scholarly use, of any multimedia materials forming part of this work, may have been granted by the author. This information may be found on the separately catalogued multimedia material and in the signed Partial Copyright Licence.
While licensing SFU to permit the above uses, the author retains copyright in the thesis, project or extended essays, including the right to change the work for subsequent purposes, including editing and publishing the work in whole or in part, and licensing other parties, as the author may desire.
The original Partial Copyright Licence attesting to these terms, and signed by this author, may be found in the original bound copy of this work, retained in the Simon Fraser University Archive.
Simon Fraser University Library Burnaby, BC, Canada
STATEMENT OF ETHICS APPROVAL
The author, whose name appears on the title page of this work, has obtained, for the research described in this work, either:
(a) Human research ethics approval from the Simon Fraser University Office of Research Ethics,
or
(b) Advance approval of the animal care protocol from the University Animal Care Committee of Simon Fraser University;
or has conducted the research
(c) as a co-investigator, collaborator or research assistant in a research project approved in advance,
or
(d) as a member of a course approved in advance for minimal risk human research, by the Office of Research Ethics.
A copy of the approval letter has been filed at the Theses Office of the University Library at the time of submission of this thesis or project.
The original application for approval and letter of approval are filed with the relevant offices. Inquiries may be directed to those authorities.
ABSTRACT
Over the last two decades, there has been a growing number of applications of
transparency in visualization. Transparency is a visual feature that provides
solutions to certain fundamental visualization problems. Currently, there is
insufficient research regarding the benefits and the limitations of using
transparency in visualization. The lack of research on this topic becomes more
apparent when we compare it with the amount of research done on applying
colour in visualization.
This thesis attempts to connect the research in perceptual transparency with the
use of transparency in visualization. The first part of this thesis reviews prior
research in perceptual transparency, and different types of existing visualizations
are analyzed using that research. The final part of this study applies
transparency to a grid structure; the study builds on previous research on the
Just Attendable Difference (JAD) for reference structures, examining the factors
of grid colour, image type, and density of the data structure.
Keywords: Visualization; Transparency; Perception
ACKNOWLEDGEMENTS
This has been a long, fulfilling journey. I'd like to express my gratitude to the
many great people I've met and who have helped me along the way.
Lyn, you are the best supervisor a graduate student could hope for. Thank you
for giving me so much freedom, patience, and unconditional support.
Maureen, thank you for being my greatest listener. I am especially appreciative of
all the extra effort you've given me as we bounced ideas back and forth.
I want to thank my workplace, Athabasca University, for giving me the flexibility
and support to pursue this study. Thank you to all my colleagues and fellow
students at SIAT. You've made this such an enjoyable journey that I wish it didn't
have to end.
Thank you, Sam, for your kindness and supervision during my internship at
Siemens. Thank you, Peter, for being a great host and life teacher during my stay
in Princeton (and my apologies for the mess I left). Thank you, Uncle Dai, Auntie
Mei-ling, and the Cheng family for your generous support during my stay in Vancouver.
Finally, I want to thank my parents, my brother, my sisters, and Xu for your love.
TABLE OF CONTENTS
Approval
Abstract
Acknowledgements
Table of Contents
List of Figures
List of Tables
1: Introduction and Research Questions
  1.1 Motivation
    1.1.1 Ubiquitous Use of Transparency
    1.1.2 Lack of Knowledge About Applying Transparency
    1.1.3 Need for Research of Transparency as an Element in Generic Visualization
  1.2 Terminologies
  1.3 Research Question
  1.4 Scope of this Thesis
  1.5 Thesis Outline
2: Background in Perceptual Transparency
  2.1 Metelli’s Model for Transparency
  2.2 Subtopics in Perceptual Transparency
  2.3 Balanced and Imbalanced Transparency
  2.4 Achromatic Luminance Relationship
  2.5 Chromatic Transparency
  2.6 Figural Condition and X-Junctions
  2.7 Ordering of Layers and X-Junctions
  2.8 4-regions vs. 6-regions Transparency
  2.9 Illusory Perceptual Transparency
  2.10 Other Cues
  2.11 Chapter Summary
3: Transparency in Visualization and Design
    3.1.1 Applying Transparency in Visual Communication
    3.1.2 Applying Transparency in Interface Design
  3.2 Using Transparency to Show Occlusion
    3.2.1 Occlusion to Show Compositing with Plots
    3.2.2 Occlusion to Show Layers
    3.2.3 Transparency and Cluttering
  3.3 To Enable Transformations of Techniques
    3.3.1 To Cluster Plots into Layers
    3.3.2 To Transform and Enable Direct Comparison
  3.4 To Represent Values or Meanings
    3.4.1 Representing Orders by Plot Sequence
    3.4.2 Varying Alpha Values Across Layer
    3.4.3 Using the Symbolic Meaning of Transparency
4: Study in Just Attendable Difference
  4.1 Background
  4.2 Study Design and New Independent Variables
    4.2.1 Choice of Grid Colour
    4.2.2 Choice of Image Type and Density
  4.3 Instructions to Participants
  4.4 Pilot Run, Adjustment and Final Design
  4.5 Hypotheses
  4.6 Overall Results
    4.6.1 Image Type
    4.6.2 Image Type and Density
  4.7 Grid Colour
  4.8 Density
  4.9 Range between Faint Task and Strong Task
  4.10 Range between Sparse and Dense Analysis
  4.11 Discussion
    4.11.1 Summary
    4.11.2 Cautionary Result in Starting Alpha at 0.5
    4.11.3 Density and Cluttering
5: Summary
  5.1 Contributions
Appendices
  Appendix A: Printed Instructions Given to Participants
Reference List
LIST OF FIGURES
Figure 1: Bipartite background and the overlay coded as ABPQ regions and the implied t and α values (Metelli 1974)
Figure 2: Sub-topics in perceptual transparency
Figure 3: DKL colour space: the vertical axis is achromatic; the centre of the space (G) is neutral grey; the LM axis represents the Long and Medium wavelength sensitive cones (from +red to -blue-green); the S axis represents the Short wavelength sensitive cones (from -yellow-green to +purple); colours on any horizontal plane carry the same level of luminance. Retrieved from www.psychopy.org/general/colours.html
Figure 4: Different figural conditions and their effects on perceptual transparency (Singh & Hoffman, 1998)
Figure 5: In the first experiment, occlusion of the X-junction is eliminated by adding an annulus. The change is shown from (a) to (b). This experiment is also applicable to the effect of adding a border to the transparent layer on top. In a follow-up experiment (c), the X-junctions are occluded only by small dots (Kasrai & Kingdom, 2002)
Figure 6: In Experiment 2, kinks are added to abrupt changes in the continuity of regions (Kasrai & Kingdom, 2002)
Figure 7: In Experiment 3, a clover-like layer is applied to isolate the effects of X-junctions and layer continuity (Kasrai & Kingdom, 2002)
Figure 8: The illusory effect that the dots are of different colours is stronger (the two dots seem more different) if the overlaying band is straight and if the classical simultaneous contrast rule holds (Logvinenko et al. 2005)
Figure 9: Ordering determined by the achromatic order. Another theory focuses on the polarity change for contours. The first set shows a bi-stable transparency with a non-reversing junction. The second set shows a unique transparency with a single reversing junction. The third set shows no transparency with a double reversing junction (Anderson 2001)
Figure 10: The Munker-White illusion, and the corresponding T-junctions that arise in this image (Anderson, 1997)
Figure 11: Contour lines induce phenomenal transparency and layer ordering (Grieco & Roncato, 2005)
Figure 12: Fuchs Transparency (Masin 1998)
Figure 13: Examples of simple usages of transparency: (a) overprinting illustration by Martin Fewell; (b) multiple exposure photography by Liad Cohen; (c) highlighting and shadowing feature in Mac OS X
Figure 14: A simple model of attention splitting between foreground and background when applying transparency to interface design (Harrison et al. 1996)
Figure 15: (a) Simple use of transparency to fix occlusion and to maintain plot integrity (one panel of the visualization created by Tim Ellis; retrieved from www.tableausoftware.com/public/gallery/topic/Business-and-Real-Estate in June 2011). (b) An example of a bubble plot with no transparency applied; screen capture of one panel from the Euro Explore Demo, retrieved from www.ncomva.se/flash/explorer/euro/ in Sept 2011
Figure 16: Making data objects transparent as a solution to over-plotting (Few, 2008)
Figure 17: One-dimension scatter plot
Figure 18: (a) Horizon graph (Reijner, 2008). (b) A mock-up of the same graph using transparency and overlay
Figure 19: To enable association of location. (a) Flood inundation mapping using GIS (source: water-and-earth.com). (b) A continuous quantitative data overlay on top of a geographical map (source: GeoIQ software, fortiusone.com). (c) A similar application of data overlay with transparency in eye-tracking software (source: crazyegg.com)
Figure 20: Study of grid transparency as a reference layer (Bartram & Stone, 2010)
Figure 21: Visual cluttering caused by uncontrolled order for transparent plots. The Z-axis orders for plots carry no specific meaning. Retrieved from www.spiegel.de/flash/flash-24861.html
Figure 22: From unordered, to ordered with bias, to controlled order. The first row shows clusters of circles with the same amount of region being overlaid. Circles within a cluster carry the same alpha value. The second row shows an additional circle at the middle of the cluster with occlusion from the other circles. The second cluster with a high alpha value gives the biased impression that the middle circle is on top. White borders are added to each circle in the third row. The second cluster with fewer transparent circles gives the most detailed order and effect
Figure 23: Bubble maps with controlled order to reduce cluttering. Retrieved from www.nytimes.com/interactive/2009/04/07/us/20090407-immigration-occupation.html
Figure 24: Non-exclusive clustering of plots into planes (Collins, Penn, & Carpendale, 2009)
Figure 25: Simple parallel coordinate plots with transparency (Wegman & Luo, 1996)
Figure 26: Parallel Sets, designed by Kosara, Bendix and Hauser (2006)
Figure 27: Using transparency (left) and not using transparency (right) in Parallel Sets. The parallel set on the left is created with the Parallel Sets software from eagereyes.org. The figure on the right is a hypothetical recreation of the set with opaque colours
Figure 28: Clustering plots into shapes and ordering them for comparison (Wang, Giesen, McDonnell, Zolliker & Mueller, 2008). The one on the left has the blue layer on top; the one on the right has the red layer on top
Figure 29: Stacking with Z-index as an alternative to small multiples or a simple stacked graph (source: gapminder.org)
Figure 30: Example of a polar bar plot from MatPlotLib (source: matplotlib.sourceforge.net/examples/pylab_examples/polar_bar.html)
Figure 31: “Interactive graphic: Japan’s deadly seismic history” created by Peter Aldhous. Map data © OpenStreetMap (and) contributors, CC-BY-SA. Screens captured from www.newscientist.com/blogs/shortsharpscience/2011/03/interactive-graphic-japans-dea.html on Jun 23rd, 2011
Figure 32: Hotmap tool using translucency to encode the frequency of queries on different locations (Fisher, 2007)
Figure 33: Using stacking of transparent layers to create a 12-step scale
Figure 34: Correa, Chan and Ma (2009) use transparency and size to encode two measures of uncertainty. In (a), plots with higher uncertainty are shown with higher transparency; this approach hides the uncertain plots. In (b), the same data set is used, but plots with certainty are shown with higher transparency in order to highlight the uncertain plots
Figure 35: Screen capture showing the training session of the prior JAD study (Bartram & Stone, 2010)
Figure 36: Four image types for the data layer and their two variations in density
Figure 37: One of the training screens with a red grid, showing the experimental setup
Figure 38: Mean and error plot of alpha for (a) Sparse and (b) Dense conditions
Figure 39: Error bar plot for image type on the X-axis, separated into four quadrants by Task and Density. When running ANOVA on individual quadrants, the mean alpha is significantly different only in the lower-left quadrant (Faint task with Dense condition)
Figure 40: Interaction effect between Density and Image Type for the Faint task
Figure 41: Box plot showing how the distribution of alpha is affected by Grid Colour, Density, and Task
Figure 42: The slope of the black line shows the change of mean alpha from Dense to Sparse. Other than the big difference between Abstract Dense and Abstract Sparse, the differences for other Image Types between the two levels of Density are small and consistent
Figure 43: Range of alpha between Strong and Faint tasks
Figure 44: Range of mean alpha between Dense and Sparse conditions
Figure 45: Examining the image type, task, and grid colour for the range of alpha between sparse and dense plots
Figure 46: Error bar plot, and the suggested range of alpha for grid design
LIST OF TABLES
Table 1: Mean alpha for all 24 conditions
Table 2: ANOVA results with all 24 conditions
Table 3: ANOVA results with all conditions except the Abstract-Dense condition
Table 4: ANOVA results without the Abstract image type (Dense and Sparse)
Table 5: One-way ANOVA tests on Image Type, with four separate cases of Density and Task
Table 6: One-way ANOVA tests on Density, with six separate cases of Image Type and Task
1: INTRODUCTION AND RESEARCH QUESTIONS
Transparency is used ubiquitously in visualization. We do not yet
know enough about how best to apply transparency in visualization.
1.1 Motivation
1.1.1 Ubiquitous Use of Transparency
Transparency is a visual feature that provides solutions to certain
fundamental visualization problems. Fixing occlusion of plots is one obvious
reason that transparency is being used in visualization. Occlusion often implies
that parts of plot details are hidden. On a different scale, transparency enables
overlaying of planes for direct comparison. In many cases where data are
stacked, we can find transparency in use.
A quick survey of the papers published at the 2010 IEEE Information
Visualization Conference shows that 20 out of 36 papers reported the use of
transparency to varying degrees. Among these 20 papers, two are surveys of
existing applications and give no further details; four use transparency but do
not mention how it is used; six use transparency and mention the word between
one and five times within the paper. The final four papers give detailed
explanations of the alpha-value settings or the formulas used to calculate
opacity. Transparency is being
applied to: plot lines, parallel plots, visual connections between instances, shape
overlays, and map overlays.
1.1.2 Lack of Knowledge About Applying Transparency
Designers have been using overlaying techniques since long before the
era of desktop publishing. For example, composing photos in dark rooms
requires overlaying images. Overprinting is an example of overlaying flat spot
colours. Since the layer function was introduced in Adobe Photoshop 3.0,
overlaying has become more controllable for image composing needs.
For visualization designers, the benefits of overlaying transparent layers
are found at various levels. Overlaying allows for direct comparison of details.
Stacking up plots in two-and-a-half dimensions (2.5D) maximizes the use of the
screen’s real estate. Transparency also allows designers to show occluded
regions.
Designers know there are limitations when applying transparency to
visualization. There are a limited number of layers one can overlay. There are the
issues of false colour and fragments of shapes that can result from each
additional object. Even seasoned designers usually need to adjust the level of
transparency and the host colour multiple times before finding a satisfactory
result. In the first few attempts, the resulting transparency colour can either be
too faint and not distinguishable, or the overall result may produce a murky mess.
Designers resort to trial and error to create a more satisfactory result, or else
may use other ways to visualize the same set of data or information without
transparency. These descriptions highlight one challenge to applying
transparency: transparency is a coefficient of a host colour and is always
interacting with other objects on a canvas. If there is a chief designer and a junior
assistant working together, it is not likely the chief designer can order the junior
assistant to use a certain percentage of transparency with a certain colour
without seeing the result. It is not even possible for the chief designer to tell the
level of transparency when it is applied.
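As a minimal illustrative sketch (not a calculation from this thesis), the limit on how many layers one can usefully overlay can be made concrete: if every layer shares the same alpha, the combined opacity over the background grows as 1 − (1 − α)^n, so even a modest alpha saturates quickly toward a fully opaque, murky result.

```python
# Hypothetical sketch: combined opacity of n stacked layers that all
# share the same alpha, using the standard repeated-coverage formula.

def effective_opacity(alpha: float, n: int) -> float:
    """Opacity seen over the background after stacking n identical layers."""
    return 1.0 - (1.0 - alpha) ** n

for n in (1, 2, 4, 8):
    print(n, round(effective_opacity(0.3, n), 3))
# 1 0.3
# 2 0.51
# 4 0.76
# 8 0.942
```

This is one reason controlling the number of layers and their z-order matters as much as the choice of alpha value itself.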
Moving beyond visualizations with static data, the complexity of using
transparency increases in two directions. In visualizations with dynamic data,
manual adjustment is not always available. Things can work nicely with one
dataset and break with another. The move from static to dynamic
visualization supports the need to study the limitations of transparency.
Scalability and application for dynamic settings remain two of the core
visualization research challenges (Chen, 2005). Furthermore, when we consider
visualization platforms like Tableau™ and Prefuse (prefuse.org) we need to
study transparency from another perspective. On these platforms, the roles of
designer and user are blurred. The user will need to decide whether
transparency is suitable for certain data types or dimensions. The ultimate goal is
for the system to suggest whether or not transparency should be used for a
particular data dimension. Before we reach that level of automation, we need to
identify the common uses and the benefits in using transparency. Not only do we
need to learn more about how to use transparency in specific techniques, we
need to develop knowledge that is generic and applicable to different scenarios.
If we look at how often another visual element, colour, is studied in
visualization, we can safely say we know relatively little about how transparency
is applied in this domain. Theories and principles of using colour properly in
visualization date back 25 years (Ware 1988) (Levkowitz & Herman, 1992).
Today, using colour in visualization is still an actively discussed topic (Silva,
Santos & Madeira, 2011). Transparency, although it is being used more and
more in visualization, does not receive the same attention from researchers.
Over years of usage in visualization, researchers have come to agree that
using rainbow colours for continuous data is a bad idea (Rogowitz & Treinish,
1998) (Coninx, Bonneau, Droulez, & Thibault, 2011); this agreement
demonstrates a shift from applying hue computationally to applying it
perceptually. In today’s computer languages, from C++ to CSS, transparency is
the alpha value attached to an RGB colour. If language provides the building
blocks of knowledge, it is not surprising that applications of transparency started
computationally.
However, there has been little research on how transparency is perceived
in an applied setting; in other words, we need to learn how transparency is
seen from the user's perspective. Beyond user perception, transparency is
usually a coefficient applied to a host RGB colour, and the accuracy of the RGB
colour itself suffers from a lack of calibration: the colour actually shown depends
on the gamma setting of the monitor (Stone 2003).
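The gamma dependence noted above can be made concrete with a small sketch. This example is not from the thesis; it assumes a simplified power-law display with gamma 2.2, and shows that a 50/50 alpha blend computed on gamma-encoded values differs from one computed in linear light, which is why an uncalibrated monitor changes how blended colours appear.

```python
# Sketch (not from the thesis): why the monitor's gamma setting matters
# when colours are blended with an alpha coefficient. A 50/50 blend
# computed on gamma-encoded values differs from one computed in linear
# light. Gamma 2.2 is an assumed, simplified display model.

ALPHA = 0.5
GAMMA = 2.2

def blend_encoded(a, b, alpha=ALPHA):
    """Blend directly on gamma-encoded values (what naive RGB code does)."""
    return alpha * a + (1 - alpha) * b

def blend_linear(a, b, alpha=ALPHA, gamma=GAMMA):
    """Decode to linear light, blend, then re-encode for display."""
    lin = alpha * a ** gamma + (1 - alpha) * b ** gamma
    return lin ** (1 / gamma)

black, white = 0.0, 1.0
print(blend_encoded(black, white))  # 0.5 on the encoded scale
print(blend_linear(black, white))   # about 0.73: a visibly lighter grey
```

On a display whose actual gamma differs from the assumed 2.2, the same encoded blend lands on yet another perceived grey, which is the calibration problem Stone (2003) points to.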
1.1.3 Need for Research of Transparency as an Element in Generic Visualization
Despite the relatively small amount of research available, a quick keyword
search for transparency and visualization returns two main groups of research
papers: the first group concerns the development of algorithms for 3D renderings of
enclosed objects; the second concerns using transparency as part of a
specific visualization technique. There are fewer research papers covering the
use of transparency as an element in generic visualization. Again, compared with
the amount of research on applying colour to generic visualization, there
is a shortage of research, and thus of knowledge, on this topic. Recently, Few
(2008) summarized nine rules for choosing colour for visualization in various
generic scenarios. A comparable set of rules for applying transparency in generic
visualization would be more beneficial than rules for applying it in a specific
technique.
1.2 Terminologies
There are other terms related to transparency, for example,
translucency, opacity, and alpha. Alpha refers to a value in the colour code that
ranges from 0.0 to 1.0, where 0.0 represents a fully transparent colour and 1.0
represents a fully opaque colour. A low alpha value means a more transparent
colour.
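As a concrete sketch of this definition, the standard "over" compositing rule (a textbook formula, not something defined in this thesis) shows how alpha acts as a mixing coefficient between a colour and whatever lies beneath it:

```python
# Sketch of the standard "over" compositing rule that gives alpha its
# meaning in code (a simplified, un-premultiplied model): alpha = 0.0 is
# fully transparent, alpha = 1.0 is fully opaque.

def over(top, bottom, alpha):
    """Composite an RGB colour `top` with opacity `alpha` over `bottom`.

    Colours are (r, g, b) tuples with channels in [0.0, 1.0].
    """
    return tuple(alpha * t + (1 - alpha) * b for t, b in zip(top, bottom))

red, white = (1.0, 0.0, 0.0), (1.0, 1.0, 1.0)
print(over(red, white, 1.0))  # (1.0, 0.0, 0.0): opaque, bottom hidden
print(over(red, white, 0.0))  # (1.0, 1.0, 1.0): transparent, bottom shows
print(over(red, white, 0.3))  # a pale pink: mostly the white background
```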
1.3 Research Question
How is transparency already being used in visualization? Can we
generalize some properties in the application of transparency? What are the
factors that affect the use of transparency in generic visualization?
A more open and general, if not overly ambitious, question is: can we
understand the application of transparency in visualization to the extent that we
understand the application of colour in visualization?
To answer these questions, we need to start with a background study on
perceptual transparency. The topic has been studied extensively in the fields of
perceptual psychology, neuropsychology, and vision. These background or
foundational pieces of knowledge are essential to the application of transparency
in visualization; there are many variables in perceptual transparency and in the
factors that can lead us to see things as transparent or not. To examine the
research question from the other end, we need to see examples of existing uses
of transparency. A survey and taxonomy of these visualization techniques is
needed. A study on factors will be a first step to understanding the application
empirically.
1.4 Scope of this Thesis
The techniques studied in this thesis are limited to 2D or 2.5D
visualization. Also, this thesis does not include the study of transparency in
combination with motion, animation, and interaction. Combining these elements
no doubt improves perceptual transparency, enhances the integrity of overlaying
plots, and extends the applications in visualization. However, research
focusing on transparency may be confounded by these additional factors. In other
words, this thesis studies only the use of transparency in non-interactive
display visualization.
1.5 Thesis Outline
This thesis is organized into five chapters. The motivation and research
questions are listed in Chapter 1. In order to understand transparency, a
background on the research in perceptual transparency is provided in Chapter 2.
These research findings provide the foundational knowledge to further study the
application of transparency in visualization. More importantly, this chapter
outlines the major and minor factors through which we have come to perceive
transparency. Chapter 3 applies the properties of perceptual transparency to a
series of functions and measures specific to visualization. Existing
applications are used to illustrate different functions of applying transparency. As
we will see, the goals of applying transparency in visualization go beyond the
simple showing and seeing of plots.
In order to pinpoint some of the factors commonly found in visualization
and how they affect the use of transparency, an empirical study was designed
and carried out. The study extends prior research on Just Attendable
Difference (JAD) (Bartram & Stone, 2010). The framework of this study is closely
related to the study of transparency in visualization, as the goal is to identify an
effective range for alpha in a dynamic setting. The result from the JAD research
also concerns the use of transparency in the reference layer and the factors found
on the data layer. In this new JAD study, the colour of the reference structure and
the type and density of the data structure are examined.
The last chapter summarizes the new and old ideas mentioned in this
thesis. Finally, the last section provides a list of topics that are relevant but
excluded from this thesis, and directions that should be studied further.
2: BACKGROUND IN PERCEPTUAL TRANSPARENCY
Transparency is a property of many physical substances. For example,
Adelson and Anandan (1990) categorize dark filters, specular reflections, puffs of
smoke, gauze curtains, and cast shadows as physical phenomena that produce
transparency. For humans, transparency is a perceptual phenomenon: it is the
fusion of colour and the arrangement of shapes that gives non-illusory
perceptual transparency. Since researchers began using a physical device, the
episcotister, to explain colour mixing, and the algebraic blending model
based on Talbot's Law to predict the existence of perceptual transparency
(Metelli, 1974), the topic has been re-examined repeatedly by scientists from the
realms of perception, cognition, neuropsychology, and vision.
2.1 Metelli’s Model for Transparency
The foundational research and model proposed by Metelli (1974) uses a
balanced, achromatic, four-region setting for perceptual transparency. The research
primarily examines the luminance relationship between the ABPQ regions
(Figure 1), the implicit t and alpha value (α), and the stratification of
surfaces. Regions A and B form a bipartite background. Regions P and Q form the
layer on top of the background. When transparency is perceived, the areas p and
q split and appear to consist of two surfaces. Although the model was later questioned
regarding its use of reflectance rather than luminance values for regions, and
its assumption of balanced transparency, it has been validated many times
as highly accurate by other researchers.
Figure 1: Bipartite background and the overlay coded as ABPQ regions and the implied t and α value (Metelli 1974)
2.2 Subtopics in Perceptual Transparency
There are many sub-topics within the research on perceptual
transparency. The rest of this chapter examines some of these sub-topics. Figure
2 tries to capture how one topic relates to another. Most of the studies try
to develop a model that can predict the conditions required before we see
something as transparent.
Figure 2: Sub-topics in perceptual transparency
2.3 Balanced and Imbalanced Transparency
In Metelli's original algebraic model, the calculation of p and q assumes
the same α value. This means the two regions, p and q, are under the same
coefficient of transparency, a case known as balanced transparency. Without this
assumption, the unknowns t and α cannot be deduced from the two formulae (1)
and (2) alone; this case is known as imbalanced transparency. Imbalanced
transparency also implies that the figural condition is not fulfilled, as there is a
change of opacity right along the border between p and q. The Metelli
equations, modified for imbalanced transparency, are:
p = α a + (1 – α) t (1)

q = α’ b + (1 – α’) t (2)

if α = α’

α = (p – t) / (a – t) = (q – t) / (b – t) = (p – q) / (a – b) (3)
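Numerically, the balanced model can be inverted: given the four region luminances, α follows from equation (3) and t by back-substitution into equation (1). The sketch below uses invented luminance values for illustration:

```python
# A numeric sketch of Metelli's balanced model (equations 1-3): given
# the four region luminances a, b, p, q, recover alpha and t.
# The example luminances are invented for illustration.

def metelli_solve(a, b, p, q):
    """Return (alpha, t) from the balanced equations
    p = alpha*a + (1 - alpha)*t  and  q = alpha*b + (1 - alpha)*t."""
    alpha = (p - q) / (a - b)           # equation (3)
    t = (p - alpha * a) / (1 - alpha)   # back-substituted from equation (1)
    return alpha, t

# Forward check: pick alpha = 0.4, t = 0.2 on a background a = 0.9, b = 0.1.
a, b, alpha, t = 0.9, 0.1, 0.4, 0.2
p = alpha * a + (1 - alpha) * t
q = alpha * b + (1 - alpha) * t
print(metelli_solve(a, b, p, q))  # recovers (0.4, 0.2) up to rounding
```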
Beck, Prazdny & Ivry (1984) published criticisms of the original
Metelli model and questioned the validity of equation (3). More specifically, the
experimental results in Beck's paper showed that α could be negative or
greater than 1. Beck et al. further suggested that if figural cues strongly suggest
transparency, then contradictory indications from the pattern of intensities might
be overridden. They also argued that the assumption α = α’ is invalid. Metelli,
Da Pos & Cavedon (1985) responded to the argument that the original
model is incompatible with imbalanced transparency.
Further research on imbalanced transparency has been done. Fukuda &
Masin (1994) ran an experiment asking subjects to weight the transparency
between p and q using two ratings that added up to 100. The result supported
that, even when transparency is not balanced, transparency is still reported by
subjects. Tommasi (1999) developed a new formula that can predict whether
perceptual transparency exists, balanced or imbalanced. The
experiment uses a bipartite background, with both balanced and imbalanced stimuli.
Subjects see a change in one of the ABPQ regions and are asked to rate
the transparency from 1 to 99 for different regions. A model of the expected α’ is
calculated as formula (4), which takes into account the change in one of
the regions. The experiment's results show that formula (4) is substantially
valid.
α’ = cPQ / cAB (4)
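As a numeric illustration only: the thesis does not spell out here how the contrasts cPQ and cAB are computed, so the sketch below assumes they are Michelson contrasts of the overlay pair (P, Q) and the background pair (A, B); the luminances are invented.

```python
# A hedged sketch of formula (4). ASSUMPTION: c_PQ and c_AB are taken
# here to be Michelson contrasts of the overlay pair and the background
# pair; the source does not define them at this point in the text.

def michelson(l1, l2):
    """Michelson contrast between two luminances (assumed definition)."""
    return abs(l1 - l2) / (l1 + l2)

def expected_alpha(a, b, p, q):
    """Formula (4): alpha' = c_PQ / c_AB."""
    return michelson(p, q) / michelson(a, b)

# Invented luminances: an overlay that reduces the background contrast.
print(expected_alpha(a=0.9, b=0.1, p=0.5, q=0.3))
```

Under this reading, a lower contrast ratio between the overlay pair and the background pair corresponds to a more transparent layer.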
The differences in experiment design make it difficult to judge which
formula better predicts perceptual transparency. For example, Fukuda & Masin's
(1994) design asks the subject to rate the transparency for P and Q
separately; subjects probably tend to give different ratings to regions P and
Q because of the visible difference in luminance. In Tommasi's experiment, only one
of P and Q is changed, and the unchanged side has little or no
effect on the experiment. It is not the figural setting or luminance ratio that
supports the imbalanced argument, but the change in luminance that gives a
sample of cues suggesting perceptual transparency.
While different models of perceptual transparency have empirical results
to support their arguments, it is safe to say that the alpha values of these models are
not as absolute as they seem. This is especially true when subjects are asked to
rate the alpha value directly or indirectly. Also, a four-region set-up may not
provide enough cues, or luminance samples for the occluded and un-occluded
regions, to firmly suggest the existence of perceptual transparency.
2.4 Achromatic Luminance Relationship
Since the proposal of Metelli’s original algebraic model, there has been
more research done to validate its accuracy, and newer models have improved
the prediction of the presence of perceptual transparency. Masin (2006) reviewed
different models of perceptual transparency. In the paper, Masin first
examined the models of transparency from Metelli, Singh & Anderson,
Fukuda & Masin, and Tommasi, as mentioned above, and then developed a new
model (formula 5). The experiment shows that, other than |a – b|, which is
non-essential to the presence of perceptual transparency, all the other luminance
contrasts between adjoining areas contribute to perceptual transparency.
These three luminance contrasts are |p – q|, |a – p|, and |q – b|.
τ = κ · 2|p – q| (2|p – q| + |a – p| + |q – b|) / ((|a – p| + |p – q|) (|q – b| + |p – q|)) (5)
2.5 Chromatic Transparency
There has been relatively less research done on chromatic perceptual
transparency. It was natural to think that the theories derived from achromatic
luminance could be applied to chromatic transparency. Research focused on
chromatic transparency confirmed that the convergence, or additive model
Metelli suggests is highly relevant to chromatic transparency (Chen & D’Zmura,
1998; Da Pos, 1999).
In one experiment (Chen & D'Zmura, 1998), researchers asked the
subject to adjust half of the occluded region, equivalent to region P in an ABPQ
setting, to create a transparent layer. Consistent with the convergence model, the
observer's choice of colour lies along a region rather than a point in colour
space. However, there were a number of exceptional cases. One observation
from the experiment was that subjects avoided giving the changeable region
complementary hues for its top and bottom halves. This colour
opponency was first reported by Da Pos et al. back in 1989: the transparent
overlay must share hue characteristics. The general additive model works well,
except that when two colours oppose each other a slightly
different way of mixing colour must be modelled.
As colour requires a three-dimensional space to be described, the choice
of colour space is an important first step for research in chromatic transparency.
Experiments run by Colantoni, D'Zmura, Knoblauch & Laget (1997) show that a
contrast-reduction parameter is needed to fit the colour-matching data. Using
the DKL (Derrington, Krauskopf, Lennie) colour space (Figure 3), a model that
takes into account the reduction of contrast, as well as the shift of colours, is
developed from Metelli's original convergence model.
b = (1 – α) a + α s (6)
Six models using DKL space are tested: affine, convergence, general
convergence, linear, diagonal, and the translation models. Another five models
using LMS (Long Medium Short) cone excitations are developed and tested:
affine, convergence, linear, translation, and von Kries scaling (D’Zmura, Rinner &
Gegenfurtner, 2000).
The affine model provides the best fit, with the least residual error
between the subjects' adjustments and the model's calculations. This holds for
both the DKL and LMS colour spaces. The affine model takes both the colour shift
and the contrast reduction into account.
Figure 3: DKL colour space: the vertical axis is achromatic; the centre of the space (G) is neutral grey; the LM axis represents the long- and medium-wavelength-sensitive cones (from +red to -blue-green); the S axis represents the short-wavelength-sensitive cones (from -yellow-green to +purple); colours on any horizontal plane carry the same level of luminance. Retrieved from www.psychopy.org/general/colours.html
An affine transformation or an affinity between two vector spaces consists
of a linear transformation followed by a translation:
b = Ma + t (7)
where a is a three-dimensional vector of reference-colour
coordinates; M is a 3 × 3 matrix that describes a linear
transformation; t is a three-dimensional translation vector.
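A small numeric sketch of equation (7) follows. The matrix M and vector t below are invented for illustration; in the cited studies they are fitted to observers' colour-matching data, with M capturing the contrast reduction and t the colour shift.

```python
# A numeric sketch of the affine colour model b = M a + t (equation 7).
# The values of M and t are invented for illustration; in the cited
# studies they are fitted to colour-matching data.

M = [[0.6, 0.0, 0.0],   # a diagonal M models per-channel contrast reduction
     [0.0, 0.6, 0.0],
     [0.0, 0.0, 0.5]]
t = [0.2, 0.2, 0.1]     # translation: a shift toward the filter's own colour

def through_filter(a):
    """Apply b = M a + t to a reference colour given as a 3-vector."""
    Ma = [sum(M[i][j] * a[j] for j in range(3)) for i in range(3)]
    return [m + ti for m, ti in zip(Ma, t)]

a = [0.9, 0.1, 0.4]
print(through_filter(a))  # the colour seen behind the simulated filter
```

Setting t to the zero vector and M to a scalar multiple of the identity recovers the simpler convergence model of equation (6).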
Faul and Ekroll (2002), on the other hand, suggest that a filter-like
subtractive model, which refers to situations where a structured opaque surface is
seen through a light-transmitting object, can predict chromatic transparency
better. Faul and Ekroll also suggest that separating the calculation of luminance
mixing from chromatic mixing improves the model's ability to predict chromatic
transparency. Using complex simulations to create the light-filter effect, Faul and
Ekroll's model successfully produces results that preserve the contrast direction
and contrast reduction. However, there are also a few critical cases in which the
filter model does not produce chromatic transparency (Da Pos, 1999).
2.6 Figural Condition and X-Junctions
Although identified by Metelli (1974) as one of the two main conditions
enabling perceptual transparency, the figural condition has received less heated
debate than the luminance ratio. In the classical four-region ABPQ setting,
the phenomenal segmentation of the homogeneous filter layer
from the background is effortless. Imbalanced transparency, at the other
extreme, is impossible to detect unless more occluded and occluding
regions with at least one different luminance ratio are provided.
Some researchers examine the figural condition by altering the border
between P and Q. Singh and Hoffman (1998) identify and examine the role of
genericity and the minima rule (Figure 4).
Figure 4: Different figural conditions and their effects on perceptual transparency (Singh & Hoffman, 1998).
Another type of examination of figural conditions is to modify the
X-junctions and observe the effect on perceptual transparency. An X-junction is an
important monocular cue for transparency: it is the region where the edge of the
transparent layer meets a change of colour in the background, that is,
the small region where all four abpq regions meet (Figure 5a). Kasrai and
Kingdom (2002) conducted a series of experiments in this direction. The first
experiment hid the X-junctions by occluding them with an annulus (Figure 5b).
Although the local X-junctions had been destroyed, the pairing of collinear
contours was still possible, and the width of the annulus probably had an effect. To
find out whether the result was caused by the removal of the X-junctions rather
than by the shape or size of the annulus, another experiment covered
the three X-junctions with small circles only (Figure 5c). The result was
similar: the removal of X-junctions significantly reduced the ability to judge
transparency. Also revealed was that performance for stimuli containing
polarity reversals (where no transparency should be perceived) was better
than for stimuli with no polarity reversal (where transparency should be perceived).
Figure 5: In the first experiment, the X-junctions are occluded by adding an annulus; the change is shown from (a) to (b). This experiment also speaks to the effect of adding a border to the transparent layer on top. In a follow-up experiment (c), the X-junctions are occluded only by small dots (Kasrai & Kingdom, 2002).
The authors try to explain the improved result when kinks are present. The
segmentation of regions can be caused by reflectance changes between different
surfaces, or by illumination changes from factors like shadow, light, or transparent
filters. In the latter case, the contours are more likely to be straight or smoothly
varying (having high continuity along contours). "X and ψ junctions are salient
properties of transparent stimuli. Jagged contours with sudden changes in
direction are more likely to be attributed to reflectance changes than to changes
due to a transparent filter."
Figure 6: In Experiment 2, kinks are added, introducing abrupt changes in the continuity of regions (Kasrai & Kingdom, 2002).
Figure 7: In Experiment 3, a clover-like layer is applied to isolate the effect of X-junctions and layer continuity (Kasrai & Kingdom, 2002).
Logvinenko, Adelson, Ross & Somers (2005) also offer an interesting
study of edge interpretation. The paper examines the snake pattern
and the effect of different types of overlaying bands (Figure 8). For our research,
this paper can be viewed as addressing the integrity of the pattern at the back, rather
than the transparency layer on top. The paper suggests that we have a tendency to
treat darker overlaying bands as shadows. By adding curves, which our vision
instinctively treats as an over-paint pattern (reflectance) rather than shadow,
the shadow-related illusory effect is weakened.
Figure 8: The illusory effect that the dots are of different colours is stronger (the two dots seem more different) if the overlaying band is straight and if the classical simultaneous-contrast rule holds (Logvinenko et al. 2005).
2.7 Ordering of Layers and X-Junctions
The ordering of transparent layers is one of the few commonly accepted
theories in the field of perceptual transparency. The arrangement of the four
ABPQ colours affects the perceived ordering. Studies have identified that the order is
determined by the luminance arrangement around the X-junction. In Figure 9, the
first order among the four regions gives a bi-stable transparency; the second
order gives unique transparency; and the third order gives no perceptual
transparency.
Figure 9: Ordering is determined by the achromatic order. Another theory focuses on the polarity change of contours. The first set shows a bi-stable transparency with a non-reversing junction. The second set shows a unique transparency with a single reversing junction. The third set shows no transparency with a double reversing junction (Anderson 2001).
2.8 4-regions vs. 6-regions Transparency
The rationale for using two more regions than Metelli's 4-regions layout
is to reduce the range of possible α and t combinations to a single one. These
6-regions experiments usually hold five of the regions static and ask the subject to
predict the luminance of the sixth. The 6-regions layout is also used in
research on colour and figural conditions in transparency (Fulvio, Singh &
Maloney, 2006), as shown in Figures 5, 6 and 7.
2.9 Illusory Perceptual Transparency
Some researchers focus on illusory perceptual transparency, which is
characterized by the lack of physical occlusion. Perceptual transparency is
reported when the arrangement of colours follows a mesh-like pattern, showing
colour from bottom layers within the holes of the top layer, but without any
interaction or luminance change between the layers. Figure 10 shows the
classic Munker-White illusion. The illusion originally concerns the different
brightness perceived for the same grey colour in the top and bottom regions.
Another illusory effect is the two non-existent horizontal bands, which are
perceived in front of the black vertical bars.
According to Anderson (1997): “When two aligned contours undergo a
discontinuous change in the magnitude of contrast, but preserve contrast
polarity, the lower contrast regions are decomposed into two causal layers.
Figure 10: The Munker-White illusion, and the corresponding T-junctions that arise in this image (Anderson, 1997).
“This follows from the simple fact that transparent surfaces (or an
illumination change caused by a shadow) can only reduce the contrast of an
underlying contour; the contrast polarity of edges must be preserved.”
Grieco and Roncato (2005) push the effects further by adding a perimeter
to a cross shape to induce phenomenal transparency. The phenomenal layer
ordering is also controllable by using a different contour. Grieco and Roncato
coined this phenomenon transparency of the contoured surface. In Figure 11, all
the filled colours in the square in the middle are the same. The only difference is
the added white contour in (b) and black contour in (c). As a result, the white
contour induces an illusory cross on top of the square (b), whereas the black
contour gives illusory transparency to the square and makes it appear on top
of the cross. Another finding from the same paper is that only a very thin
contour can induce the illusion of transparent layers.
Figure 11: Contour lines induce phenomenal transparency and layer ordering (Grieco & Roncato, 2005).
There is more research showing how perceptual transparency can occur
without the presence of any junctions. The Fuchs phenomenon was documented
back in 1923: a three-surface setting that gives perceptual
transparency (Masin 1984). As illustrated in Figure 12, transparency is perceived
when the middle square changes colour from n to z simultaneously
with the appearance of the ellipse. Masin (1998) found that perceptual
transparency is stronger when the luminance difference |y – z|
decreases as the luminance difference |x – y| increases.
Figure 12: Fuchs Transparency (Masin 1998).
2.10 Other Cues
There are research papers studying the application of blurriness,
translucence, motion, and time to enhance perceptual transparency. However,
these visual cues act more as enhancements: the luminance
ratio and figural conditions need to be fulfilled first, and they have a much stronger
effect on the presence of perceptual transparency.
2.11 Chapter Summary
This chapter reviewed some of the key concepts in perceptual
transparency research. The common goal of these models is to predict whether
we will see things as overlapping layers rather than as a number of independent
colour patches when certain factors are present. After the review, it is clear that there
are many different scenarios in which transparency can be perceived. No
single factor alone, but a combination of multiple factors, is needed to give
perceptual transparency.
In the next chapter, we will see the application of transparency in
visualization. The primary goals for applying transparency in these designs
differ. In general, transparency is used in visualization to maintain plot details
and integrity. Whether we see things as transparent or not is less important in
most cases; whether we see plots or shapes as one piece is. A majority of the
examples use Metelli's model for colour blending. Designers have also
figured out the use of borders to enhance plot integrity. We can also explain
some strategies used in clutter reduction using the theories related to
X-junctions.
3: TRANSPARENCY IN VISUALIZATION AND DESIGN
This chapter examines the applications of transparency in visualization,
using the knowledge of perceptual transparency outlined in the previous chapter
as its foundation. The functions of transparency applied in visualization are
organized into a taxonomy.
Figure 13: Examples of simple usages of transparency: (a) overprinting illustration by Martin Fewell; (b) multiple exposure photography by Liad Cohen; (c) highlighting and shadowing feature in Mac OS X.
Visualization was once a subset of practice under graphic design. Before
examining how transparency is used in visualization, it is useful to briefly review
how these elements are used in the less restrictive and more expressive practices
within graphic design.
3.1.1 Applying Transparency in Visual Communication
Before the age of computer graphics, transparency was usually applied in
the overprinting technique used by the print industry and in the multiple-exposure
technique used by photographers. In moving pictures, video montage
and superimposition are also examples of the use of transparency. Another simple
use of transparency is highlighting or shadowing, the purpose of which is to
emphasize certain areas and de-emphasize the rest. In the age of computer
graphics, these applications of transparency have become more controllable and can
be used precisely in software for photo retouching, illustration, and video
authoring.
3.1.2 Applying Transparency in Interface Design
There are many examples of transparency use in interface design. In most
point-and-click GUIs, there is a need to show a contextual menu according to the
item being clicked, and to maintain the spatial relation
between the menu and the selected item. See Through Tools
(Bier, Stone, Pier, Buxton & DeRose, 1993) are one early example of applying
transparency to interface design; these transparent tools are movable and
organized into a virtual layer. Harrison et al. (1995; 1996) studied the use of
transparency in interface design. The focus of these experiments was to discover
the optimal transparency for a simple overlaid menu, as illustrated by the model
in Figure 14. The studies highlight the need to balance focused and divided
attention amongst layers.
Figure 14: A simple model of attention splitting between foreground and background when applying transparency to interface design (Harrison et al. 1996).
These applications in interface design provide some foundations for
analysing the use of transparency in visualization, for example the role of
transparency in facilitating attention switching. However, the well-defined order of
appearance of the menu, the user-controlled motion, and the strong learning
effects make these research findings largely specific to the domain of interface
design, with fewer applications in the visualization domain.
Similar to the use of transparency in interface design, the main usage of
transparency in visualization is to address occlusion. This is the first level of
using transparency. Designers apply transparency to solve the problem of
occlusion. The users, on the other hand, rely on the presence of transparency to
see individual plots or layers.
In Section 3.2, different examples of using transparency to fix occlusion
are discussed; Section 3.2.1 specifically looks at showing occlusions with plots,
and Section 3.2.2 focuses on showing occlusion with layers. Plots are defined as
a collection of graphic symbols representing a data set; with our focus on 2D
visualization, a plot is a representation binding two values to the X- and Y-axes.
A layer, on the other hand, is a representation of an area or collection of areas.
Another difference between occlusion with plots and occlusion with layers is that
plot-based visualization may or may not cause occlusion depending on the
values, whereas layer-based visualization must result in occlusion. Section
3.2.3 examines a side effect of overlaying transparent objects: when plots or
layers are stacked on top of each other, a visual order is produced. These
unnecessary visual orders between plots can be a source of clutter. Strategies
for suppressing the visual order are discussed in that section.
Transparency allows overlay, which in turn allows certain visualization
techniques to become useful. This rationale for using transparency is one level
above using it to show occlusion. As occlusion is no longer an issue when
transparency is applied, some visualization techniques can now be used where,
without transparency, alternative techniques would be necessary. In other words,
transparency and overlay enable the transformation from one visualization
technique to another.
One transformation is to convert a sequence of plots into layers, which
provide continuity and can show patterns explicitly. Pros and cons of this type of
transformation are discussed in Section 3.3.1. Another type of transformation is
to stack up layers for direct comparison. More specifically, stacked (along the
Y-axis) graphs and small multiples of visualizations can be transformed into an
overlay graph with the use of transparency. Section 3.3.2 gives examples and
analysis of this type of transformation.
Theoretically, the alpha value in transparency can be used to encode
value. At this third level, transparency and the results of its application can be
used to represent values and meanings. In Section 3.4.1, we will revisit the visual
order produced by the use of transparency and examine a hypothetical case of
using visual order for time sequencing.
Some visual representations or data encodings may be done more effectively
by transparency than by an alternative visual cue. Similar to the cultural practice
of using the colour red to represent higher temperatures, transparency has some
inherent properties that carry special meaning. Section 3.4.3 explores the types of
data that can be represented symbolically by transparency. A sub-section further
discusses the possibility of varying and mapping the alpha value to data values.
We can categorize different types of usage in the following structure:
i. To Reduce Occlusions
   a. To reduce occlusions for plots
   b. To reduce occlusions for layering
   c. To reduce cluttering from occlusions

ii. To Enable Transformations of Techniques
   a. To cluster plots into layers
   b. To stack and allow direct comparison

iii. To Represent Value and Meanings
   a. To represent order by plot sequence
   b. To represent values by varying alpha
   c. To encode values with the semiotics of transparency
3.2 Using Transparency to Show Occlusion
Occlusion occurs when proximity is used for encoding data. It can happen
in 1D, 2D, or 3D. It is an issue when the plot in front blocks the plot at
the back for no functional reason; in most cases, occlusion is undesirable. Figure
15a shows how a simple application of transparency helps to maintain the
integrity of bubble plots when occlusions are present. With transparency applied,
the centres of the bubbles are visible. Figure 15b shows an example where no
transparency is used and occlusions occur. The bubbles, especially those with the
same colour coding, merge into a flat shape despite the use of a
border, and it is unknown whether any plots are fully covered.
In the next three sub-sections, two types of occlusion are examined: the
first sub-section examines occlusion with plots; the second sub-section examines
occlusion with layers. The third sub-section discusses the ordering of results from
overlays of either plots or layers.
Figure 15: (a) Simple use of transparency to fix occlusion and to maintain plot integrity (one panel of the visualization created by Tim Ellis. Retrieved from www.tableausoftware.com/ public/gallery/topic/Business-and-Real-Estate in June 2011). (b) An example of bubble plot with no transparency applied, screen capture of one panel from the Euro Explore Demo, retrieved from www.ncomva.se/flash/explorer/euro/ on Sept 2011.
3.2.1 Occlusion to Show Compositing with Plots
The issue of occlusion is amplified when there is a large amount of data. This problem is also known as over-plotting. Few (2008) suggests six solutions to the over-plotting problem; using transparency is one of them. Figure 16 shows an example of over-plotting with transparency applied. The result of using transparency to fix occlusion is that clusters appear as more saturated areas of plots. With a generic, low alpha value applied to the host colour of the plots, overlapped objects can be seen. In this case, there is only one effective layer of data and the analysis of
2.5D and layer ordering is not relevant. It is the build-up of (false) colours that makes this simple application useful. The build-up of plots can be viewed as the visual mode for the data. When multiple, semi-transparent, generic plots are overlaid, the order among plots is always bi-stable. This works well for the example in Figure 16 because no order should be added and all plots should have the same visual priority in this visualization.
Figure 16: Making data objects transparent as a solution to over-plotting (Few, 2008)
In the example above, a very low alpha value (highly transparent) is used.
The advantage of using a low alpha value is that more overlays can be seen. However, the plots with no overlapping are very light in colour and have low contrast against the white background. If a higher alpha value (less transparent) is used, the un-occluded plots are more visible while the clustered areas become
opaque faster. A visual comparison for different alpha values will be presented in
Section 3.6.
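The colour build-up described above follows the standard 'over' compositing operator, applied per colour channel. The following sketch is our illustration of the arithmetic, not code from any of the cited tools; the colours and alpha value are arbitrary examples:

```python
def over(src, alpha, dst):
    """Composite a semi-transparent source colour over an opaque
    destination, per RGB channel (values in 0..1)."""
    return tuple(alpha * s + (1 - alpha) * d for s, d in zip(src, dst))

white = (1.0, 1.0, 1.0)
red, blue = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)
alpha = 0.4  # a generic, fairly low alpha value

red_plot = over(red, alpha, white)     # a lone red plot on the background
overlap = over(blue, alpha, red_plot)  # a blue plot drawn over it
print(red_plot)  # roughly (1.0, 0.6, 0.6): a light red
print(overlap)   # roughly (0.6, 0.36, 0.76): a purplish 'false colour'
```

Note that with two different hues the junction colour depends on draw order; with identical plots, both orders give the same colour, which is why the order appears bi-stable.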
3.2.1.1 Cases with the Absence of X-Junctions
In the highly over-plotted case, the X-junctions are of less value, as seeing
the overall distribution is relatively more important than seeing the details inside
the cluster. The order of the plots is not visible in this case. However, for visualizations that require showing the details of each object, the presence of X-junctions is helpful, if not essential, for seeing individual plots.
Figure 17: One-dimensional scatter plot
Figure 17 shows a 1D scatter plot created with Tableau to illustrate one issue with using transparency in plots. In this example, there are many occurrences of perfectly overlapped plots, that is, overlapping without X-junctions. Values 20 and 15 both have multiple plots overlaid. Without any junctions, it is difficult to judge the number of occurrences at these values. Without the presence of junctions as a visual cue, the change in saturation is the only cue that can hint at the number of occlusions. This single visual cue is weak in conveying the data; the change in contrast is low when a low alpha value is used with a small number of overlaps. Conversely, the contrast is also low when a relatively high
alpha value is used with a large number of overlaps. The chance of suffering from perfect occlusion is higher if the data are all integers, or if the values are grouped into bins.
Figure 18 shows a horizon graph suggested by Reijner (2008). Although
the author of this technique did not use alpha but saturation for the overlaying, a
horizon graph is a good candidate for applying transparency. It is potentially
perceived as overlapping with transparency, even though an opaque colour scale
is used in this example. By definition, all the overlaps are fully occluded and no junctions are produced. The depth of the 2.5D stacking used in this technique carries the meaning of the multiplier.
Figure 18: (a) Horizon graph Reijner (2008). (b) A mock-up of the same graph using transparency and overlay
3.2.2 Occlusion to Show Layers
Cartographers and modern GIS use map overlay to show location-based
data. Marks in the form of contour lines, hachures, shading, or tinting are placed
on top of a map to give additional geographic or demographic data (ESRI, 2011).
By definition, map overlay involves overlapping of more than one layer. A simple
example may have one data layer and a map background. A more complex
example may have multiple overlaid data layers and a map background.
Transparency is required in order to show both the data layer(s) and the
background underneath.
Figure 19 shows three samples of transparency used in map overlay.
Figure 19a shows two data layers, red and purple, on top of a map. The junctions
and the blending of two simple colours help keep individual polygons intact. This
particular example uses a generic alpha value at a medium level for the data
layer. The resulting plot is less transparent and reduces the strength of the false
colour created. More importantly, the use of a medium alpha value highlights the
data layer rather than the reference layer. The opposite approach, where a low
alpha value is applied, would instead enhance the concept of union for the
overlapped regions. In the example, the choice of hue further enhances the
visual priority of the data layer over the reference layer.
Figure 19: To enable association of location. (a) Flood inundation mapping using GIS (source: water-and-earth.com). (b) A continuous quantitative data overlay on top of a geographical map (source: GeoIQ software fortiusone.com). (c) A similar application of data overlay with transparency in eye-tracking software (source: crazyegg.com).
Figure 19b shows a layer of continuous quantitative values on top of a
map. Hue and transparency are used together in a replicated manner to encode
one dimension of data. Finally, Figure 19c illustrates a different kind of reference layer. In this case, the screen real estate of a web page is combined with the eye-movement data layer.
Figure 20: Study of grid transparency as a reference layer (Bartram & Stone, 2010).
Figure 20 shows a different example of using transparency in
visualization. In this case, transparency is applied to the grid to blend it with the
image below. Note that if it were just a light gray, it would sometimes be lighter
than the underlying map, sometimes darker. The transparency makes it always
darker. The map is the data layer and the grid is the reference layer. The fine-
tuning of visual strength or priority for the reference layer is a topic in some prior
research (Bartram & Stone, 2010). As a reference structure, an overly strong grid
interferes with the data structure; an overly light grid makes it difficult, if not
impossible, to switch the attention to where it is needed. There are a number of
factors that may or may not affect how transparent the reference layer should be.
A detailed examination of the prior research, and an extension of that research,
is described in Chapter 4 of this thesis.
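The observation that a transparent grid always darkens, while an opaque grey grid flips between lighter and darker, can be checked with simple compositing arithmetic. This is a sketch of our own (the luminance values are arbitrary), not code from the study:

```python
def composite_luminance(grid_l, alpha, bg_l):
    """Luminance of a grid line with luminance grid_l, drawn at the
    given alpha over a background of luminance bg_l (0 = black, 1 = white)."""
    return alpha * grid_l + (1 - alpha) * bg_l

backgrounds = [0.2, 0.5, 0.9]  # dark, mid, and light map regions

# An opaque mid-grey grid (luminance 0.5) flips polarity across regions:
# lighter than dark regions, darker than light ones.
print([round(composite_luminance(0.5, 1.0, bg) - bg, 2) for bg in backgrounds])

# A semi-transparent black grid darkens every region consistently.
assert all(composite_luminance(0.0, 0.3, bg) < bg for bg in backgrounds)
```

The black grid at alpha 0.3 always yields 0.7 times the background luminance, so its contrast against the background scales with the map rather than reversing polarity.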
3.2.3 Transparency and Cluttering
Transparency preserves the details for occluded plots. The showing of
additional details by utilizing the z-axis is usually beneficial. However, applying
transparency can also contribute to the problem of cluttering by making more
detail visible. False colour and false shapes are two artefacts that worsen the
cluttering issue. Conversely, the example in Figure 17 illustrates the problem caused by the lack of false shapes when perfect overlapping happens.
In this section, we examine cluttering reduction strategies related to the
application of transparency. Cluttering reduction techniques that remove plot details are not discussed in this section, as they contradict the purpose of applying transparency. Rosenholtz et al. (2005) define clutter as the state in
which excess items lead to degradation of performance at some task. Applying
and fine-tuning opacity itself is one of the clutter reduction techniques (Ellis &
Dix, 2007).
Figure 21: Visual cluttering caused by uncontrolled order for transparent plots. The Z-axis orders for plots carry no specific meaning. Retrieved from www.spiegel.de/flash/flash-24861.html
One artefact resulting from the use of transparency is the bias in ordering
when overlaying more than two layers. When we overlay a number of
transparent objects, certain alpha values and occlusions cause an object to
appear on top of others. Perceptual transparency research points out that
overlapping two shapes with the same colour and transparency gives a bi-stable
order. However, the rule does not hold when more than two objects are
overlapping. In Figure 21, there are instances of transparent bubble plots overlapping each other. Some bubbles appear visually on top of others and create
a stacking order. The order does not carry any meaning but introduces excessive
details that contribute to the overall cluttering.
To a certain extent, the perception of ordering in transparent plots is
controllable. Research in perceptual transparency identifies rules in luminance
ratios that suggest either a bi-stable layer order or a unique order for layers
(Kitaoka, 2005). These findings are useful when applying transparency in
visualization. When data layers are from the same group, they should receive the
same amount of attention. Without the bi-stable ‘treatment,’ the layer appearing
on top receives more attention and may skew the interpretation of the data.
In Figure 22, circles within a cluster are identical and carry the same
alpha value. The first row shows no particular order effect. In the second row, a
circle is added at the middle. The middle circle in the second cluster appears to
be in front of the rest. In the third row, a white border is added to each circle.
Borders enhance the perceptual transparency. The order becomes apparent for
the middle cluster in the third row.
Research on Fuchs's transparency suggests that only the border and the figural condition are needed to produce perceptual transparency. The transparency of the contoured surface also explains the increase in perceptual transparency with contour alone (Grieco & Roncato, 2005). The different examples
in Figure 22 illustrate that the alpha value, the figural configuration, and the
application of border are all factors that affect the visibility of the order.
Figure 22: From unordered, to order with bias, to controlled order. The first row shows clusters of circles with the same amount of overlapped region. Circles within a cluster carry the same alpha value. The second row shows an additional circle at the middle of each cluster, occluded by the other circles. The second cluster, with a high alpha value, gives the biased impression that the middle circle is on top. White borders are added to each circle in the third row. The second cluster, with its less transparent circles, gives the clearest order effect.
3.2.3.1 Manipulating Order for Legibility
There are examples showing that imposing an order on plots helps to reduce visual cluttering and improve the integrity of plots. Figure 21 shows a bubble map with black borders and without any particular order. The occlusions make some of the bubbles behind less obvious than others. The uncontrolled visual order also introduces unnecessary variability among plots that carries no meaning.
Figure 23 is an example of a bubble map where the order of plots is controlled. The bigger plots are drawn first. This strategy causes the smaller plots to appear on top and become easier to see. By manipulating the order, the overall visualization minimizes partial overlays. Although this imposed ordering does not carry meaning, the approach reduces visual cluttering.
Figure 23: Bubble maps with controlled order to reduce cluttering. Retrieved from www.nytimes.com/interactive/2009/04/07/us/20090407-immigration-occupation.html
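The draw-order strategy in Figure 23 amounts to sorting the bubbles by size before issuing the draw calls. A minimal sketch, assuming hypothetical bubble data and a placeholder draw routine:

```python
# Hypothetical bubble data: (x, y, radius)
bubbles = [(2, 1, 5), (1, 2, 30), (0, 0, 12)]

# Draw the biggest bubbles first so the smaller ones land on top and
# remain visible despite partial occlusion.
draw_order = sorted(bubbles, key=lambda b: b[2], reverse=True)

for x, y, r in draw_order:
    # draw_circle(x, y, r, alpha=0.5)  # placeholder for the real draw call
    pass

print([r for _, _, r in draw_order])  # radii in descending order: [30, 12, 5]
```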
3.3 To Enable Transformations of Techniques
In this section, we will examine several common visualization techniques
and how applying transparency enables a change of the design and provides
benefits.
3.3.1 To Cluster Plots into Layers
In Section 3.2.1, we see how transparency is used to show composition of
plots in an over-plotted graph. When the plot density is high, the plots become a
continuous layer. Assuming that more than one colour is used for encoding
categorical data, the forming of a layer per colour makes it easy to aggregate and
compare between groups. To extend this idea, we can apply clustering to a
scatter plot even if the plots are not dense. There are many visual and statistical methods for clustering. The simplest way could be to expand the plot
size and allow them to merge. Figure 24 shows an example of clustering plots
into planes. Clustering is done with a contour discovery algorithm, which
connects groups of plots while minimizing the visual overlay. With transparency,
the resulting clustered areas do not have to be exclusive to one another. The
coloured planes provide a redundant cue to colour-coded plots, and provide an
explicit plane to group data together. Furthermore, plots can also belong to more
than one cluster.
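Figure 24 uses a contour discovery algorithm; as a much simpler stand-in for the 'expand and merge' idea mentioned above, the sketch below groups plots whose expanded circles overlap, using a basic union-find. The point data and radius are hypothetical, and unlike the actual technique, this basic version assigns each plot to exactly one cluster:

```python
import math

def cluster_by_overlap(points, radius):
    """Group plots whose expanded circles (of the given radius) overlap.
    A union-find sketch of 'expand the plots and let them merge'."""
    parent = list(range(len(points)))

    def find(i):
        # Find the cluster root, with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Two circles of equal radius overlap when centres are within 2 * radius.
    for i, (x1, y1) in enumerate(points):
        for j, (x2, y2) in enumerate(points[i + 1:], start=i + 1):
            if math.hypot(x1 - x2, y1 - y2) <= 2 * radius:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(points[i])
    return list(clusters.values())

pts = [(0, 0), (1, 0), (10, 10), (11, 10), (30, 0)]
print(cluster_by_overlap(pts, 1.0))  # three clusters
```

With transparency applied to the resulting cluster planes, overlapping clusters would remain legible even where their regions intersect.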
Figure 24: Non-exclusive clustering of plots into planes (Collins, Penn, & Carpendale, 2009).
Figure 25: Simple parallel coordinates plots with transparency (Wegman and Luo, 1996)
Figure 25 illustrates another example of using transparency to show the
build-up of plots, in this case, a parallel coordinates plot. Line crossings are
inevitable in any meaningful use of parallel coordinates. Reducing the over-
plotting becomes a new challenge when using this technique. With a massive
data set, and enabled by a state-of-the-art SGI hardware add-on, Wegman and
Luo (1998) apply transparency to the blending and overlapping of lines. The user
can adjust the generic alpha setting to get an optimal result. From the
screenshots we can see a low alpha value is used to allow the building up of
lines. While parallel coordinates had been used without transparency before, the application of transparency allows the build-up of plotlines to be seen.
Parallel sets are similar to parallel coordinates, except the relatively newer
technique is designed to visualize categorical data. Instead of drawing plotlines
from a point on one axis to another, parallel sets use ribbons, or bands of colour,
to display the connection between axes. Each band can be viewed as a flat
layout of the number of plotlines from one category. In other words, the width of
the band along the axis gives an impression of how many data items are
visualized. The result is a dramatically less-cluttered visualization (Figure 26).
Figure 26: Parallel Set designed by Kosara, Bendix and Hauser (2006).
This technique would not be possible without the application of some
level of transparency to the plot bands. In Figure 27, (a) shows a simple parallel
set created with the Parallel Set 2.0 software (eagereyes.org); and (b) shows
how the set would look if opaque colours were used. Seeing the occlusion and
the overlaid connections is critical in a parallel set, and this is accomplished by
the use of transparency.
Figure 27: Using transparency (left) and without using transparency (right) in parallel set. The parallel set on the left is created with Parallel Set Software from eagereyes.org. The figure on the right is a hypothetical recreation of the set with opaque colours.
Figure 28 shows another approach for improving the visualization of
parallel coordinates. The technique was designed by McDonnell and Mueller
(2008). The main feature for this improved version of parallel coordinates is the
use of edge-bundling through splines. Plotlines are clustered into three planes,
and the result is a huge reduction in visual cluttering. Again, the planes
receive transparency treatment so that they can be seen intact. In related
research, the same group of researchers further tuned the colour blending to give
the three layers a particular visual order (Wang, Giesen, McDonnell, Zolliker &
Mueller, 2008).
Some interesting techniques have come out of their research. The authors
suggest varying the alpha value to enhance the ordering of layers. Layers that should be on top are made less transparent. This paper also suggests an
adjustment to reduce false colour (Figure 28) by reducing the saturation of the
occluded area at the back. The saturation of the top layer is boosted, which reduces the chance that the mixed colour will become greyish.
Figure 28: Clustering plots into shapes and order for comparison (Wang, Giesen, McDonnell, Zolliker & Mueller, 2008). The one on left has the blue layer on top. The one on the right has the red layer on top.
With these improvements to the parallel coordinates technique, visual cluttering is greatly reduced by clustering plotlines into planes. And with the use of transparency, occlusion is no longer an issue. In some sense, transparency enables this type of transformation of technique.
3.3.2 To Transform and Enable Direct Comparison
Tufte (1990) suggests the use of small multiples to visualize higher
dimensions. With transparency, multiple plots can be stacked together along the
Z-index for direct comparison. By combining the different locations into one, it is
easier to compare the details between them.
Stack graph is another visualization technique that can be transformed by
utilizing the same axis and baseline to highlight the differences between
categorical data. A typical stack graph uses colour to show the different components of a total. Emphasis is put on the total, not on the comparison of individual components.
One example of this was found on Gapminder.org and is shown in Figure 29. This visualization can be done in either small multiples or as a stack graph. It
serves as a good example of how transparency can be used, and the issues that
arise when plots or planes are stacked on the Z-index for comparison.
The benefit is clear: when multiple plots share the same plotting space,
comparisons can be done directly. This is especially true when comparisons of
details are needed. If done with small multiples, only one axis, or none, can be shared among graphs. For example, if time is used for the X-axis, putting multiple
graphs in a column allows for comparison. Adding grid lines to each set of small
multiples can further aid the comparison. However, the Y-axis cannot be
compared directly. The case is worse when small multiples are arranged in a 2D
matrix. There is no axis shared or aligned directly for graphs that are in neither
the same column nor row.
The data in Figure 29 can also be displayed in a stack graph. A stack graph is good for showing the total and its composition. However, changes within one
category are obscured when the baseline is not flat, but affected by the slope of
another data group. With curvy and uncommon baselines, it is difficult to see the
trend, distribution, and even point estimates for plots that are stacked. By
stacking transparency layers on the Z-axis instead of the Y-axis, a common, flat
baseline can be maintained.
However, there are limits to this transformation of techniques enabled by transparency. There are also execution issues in the example in
Figure 29. First, this graph can be done with only lines. Second, the layers are
not transparent enough.
Figure 29: Stacking with Z-index as an alternative for small multiples or simple stack graph (source: gapminder.org).
In addition, stacking up layers along the Z-axis has certain potential
issues. If one layer is fully occluded, whether from the front or from behind, the colour coding for that layer becomes meaningless. Second, stacking up layers with similar distributions, which is the more common case, creates a
large portion of overlapping. Only the small un-occluded fringes can be seen in their original colour code. In other words, unless different groups of data have different central tendencies, the majority of plots will overlap. To illustrate the two issues, we can look at another visualization technique that, by design, does not suffer from either issue. Figure 30 is a polar-bar graph; there is less chance that large portions of all plots will overlap in the same area. This is not to say that we can transform a small multiple into a polar-bar graph, but this graph illustrates one example of stacking up plots along the Z-index. It is unlikely to suffer from the problem of a majority of plots overlapping due to its design.
Figure 30: Example of a polar bar plot from MatPlotLib (source: matplotlib.sourceforge.net/examples/pylab_examples/polar_bar.html).
3.4 To Represent Values or Meanings
3.4.1 Representing Orders by Plot Sequence
As we have seen from the previous section, it is possible to show order
when borders are added. Plot order has the potential to present an additional
data dimension. We have tried to see if there are existing examples that make
use of this technique to visually encode the order. Assuming the plot
representing a more recent time is placed on top of the canvas, using a
transparent plot with borders can naturally show the plotting sequence. This
could convey an additional dimension in the visualization. Unfortunately, we
could not find a good example of transparency overlay with a specific order, but we
can borrow an example to illustrate the point. Figure 31 depicts the potential
usage of showing plot order. Although the plot order is not maintained in this
example, it shows how plot ordering could be perceived. We speculate that the order of
transparency plots with borders may be able to visually convey additional data
dimensions such as time sequences.
3.4.2 Varying Alpha Values Across Layer
All the examples discussed in the previous sub-sections apply a single
alpha value to the plots. In the previous example, showing compositing with
transparency plots (Figure 16), the heavily plotted area is saturated and opaque.
When the density of plots increases, the details of individual plots become less visible, and the aggregate distribution of all plots provides more value in the
visualization. The resulting alpha value in the aggregate scenario is no longer
fixed, but continuously changing due to various amounts of stacking.
Figure 31: “Interactive graphic: Japan’s deadly seismic history” created by Peter Aldhous. Map data © OpenStreetMap (and) contributors, CC-BY-SA. Screens captured from www.newscientist.com/blogs/shortsharpscience/2011/03/interactive-graphic-japans-dea.html on Jun 23rd, 2011.
If we overlay a dense scatter plot with transparency in Figure 16 on top of
another layer—a map, for example—the visualization becomes one similar to
Figure 32 (Fisher, 2007). Not only does this illustrate a convergence from plot to
layer, but the example also shows the use of varying transparency to encode
data. A different set of criteria is needed to evaluate the use of transparency in
this type of scenario. In Figure 32, Fisher demonstrates the Hotmap Tool, which applies translucency and hue according to the number of queries per location. For
the heavily queried area, the data layer is opaque. One potential issue when
using varying levels of transparency in an overlay setting is that the details from
the background will be occluded. Prior knowledge about the map will be useful, if
not necessary, to interpret the visualization. In the example, viewers familiar with
the map of Seattle may be able to tell that the brightest spot is where the Seattle
Space Needle is located.
Figure 32: Hotmap Tool using translucency to encode the frequency of queries on different locations (Fisher, 2007).
It should be noted that the visualization in Figure 32, which has only one
data layer and one map as the background, could be done without using varying
transparency. Pang (2008) suggested using saturation or value in HSV colour
space as an alternative to transparency for encoding value.
In Figure 33, we created four 12-step scales by repeatedly stacking up the
same transparency block to illustrate the overall transparency after multiple
overlays. In the right column, a graph shows the change of L* from the first to
the twelfth step. For the first row, an alpha value of 0.1 with 100% black is used.
The second row uses an alpha value of 0.2 with 100% black. The third row uses
an alpha value of 0.3 with 100% black. At this level of alpha, the colour saturates quickly and the steps between the seventh and twelfth are visually identical. Also, the black numbers in the background are no longer visible. In the
last row, a medium alpha (0.4) and a medium grey are used. The colour
becomes saturated and opaque at the seventh step.
Figure 33: Using stacking of transparent blocks to create a 12-step scale.
The scales in Figure 33 illustrate that when using a very low alpha value
for stacking, the decrease in the lightness component (L*) loosely follows a linear
perceptual change. When a higher alpha value (more opaque) is used for
stacking, the decrease in L* follows a diminishing change.
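The diminishing change follows directly from repeated 'over' compositing: after n stacked copies of a block with alpha value a, the cumulative opacity is 1 − (1 − a)^n. The sketch below reproduces this arithmetic; it is our illustration, not the code used to generate Figure 33:

```python
def stacked_opacity(alpha, n):
    """Cumulative opacity after stacking the same semi-transparent
    block n times (repeated 'over' compositing over the background)."""
    return 1 - (1 - alpha) ** n

for alpha in (0.1, 0.2, 0.3):
    steps = [round(stacked_opacity(alpha, n), 2) for n in (1, 7, 12)]
    print(alpha, steps)
# At alpha = 0.3, the 7th and 12th steps are both close to fully opaque
# (about 0.92 and 0.99), which is why those steps look visually identical.
```

At a very low alpha, the early terms of the geometric decay are nearly linear, matching the loosely linear change in L* observed for the first row of the scale.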
3.4.3 Using the Symbolic Meaning of Transparency
The semiotic implications of transparency can be a strong reason to apply
transparency as a feature independent of its host colour. Transparency can be a
natural choice for representing data types like uncertainty, distance, focus, elapsed
time, etc. For symbolic application, transparency can also be a choice for
visualizing concepts like inclusion, importance, fringe, and solidness. The Venn
diagram is an example of visualizing the concept of inclusion. The different
regions carry binary types of data regarding the union. For other types of data—
for example, uncertainty—the value could vary from zero to one. Transparency
can be used like other common visual cues in visualization to map continuous
values. Correa, Chan and Ma (2009) point out two opposite approaches for
visualizing uncertainty with transparency in Figure 34. The first approach (a) is to
map a low alpha value to high-uncertainty data. This works well when data with high certainty are to be highlighted. The opposite approach (b) is to map a high alpha value to high-uncertainty data. This approach helps with the discovery of uncertainty and with formulating questions about its distribution.
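The two mappings in Figure 34 can be sketched as a single function of the uncertainty value. The linear form below is our simplification for illustration, not a formula from Correa, Chan and Ma (2009):

```python
def alpha_for_uncertainty(u, highlight="certain"):
    """Map uncertainty u in [0, 1] to an alpha value.

    highlight='certain'  : certain data are opaque, uncertain data fade
                           out (the approach in Figure 34a).
    highlight='uncertain': uncertain data are opaque, certain data fade
                           out (the approach in Figure 34b).
    """
    return 1.0 - u if highlight == "certain" else u

u = 0.9  # a highly uncertain data point
print(alpha_for_uncertainty(u))               # nearly transparent: plot fades out
print(alpha_for_uncertainty(u, "uncertain"))  # nearly opaque: plot stands out
```

In practice a non-linear mapping could be substituted for the linear one, since perceived transparency does not scale linearly with alpha.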
Transparency can be interpreted as completeness. Griethe and
Schumann (2005) use transparency as one of the graphical variables to
represent uncertainty. A closely related cue, blurriness, is also suggested for use
as a variable for visualizing information objects with uncertainty.
Figure 34: Correa, Chan and Ma (2009) use transparency and size to encode two measures of uncertainty. In (a), plots with higher uncertainty are shown with higher transparency. This approach hides the uncertain plot. In (b), the same data set is used, but plots with certainty are shown with higher transparency in order to highlight the uncertain plots.
4: STUDY IN JUST ATTENDABLE DIFFERENCE
In this chapter, a study is conducted to examine a common use of
transparency in visualization: a transparent grid layer and an underlying data
layer. Grid, as a reference structure, should not interfere with the data layer, but
visible enough to be useful. By applying some level of transparency to the grids,
the reference layer becomes attendable on-demand. This is a case that requires
an explicitly designed difference of visual saliency for different layers. This study
is based on and extended from previous studies on Just Attendable Difference
(JAD) (Stone & Bartram, 2009; Bartram & Stone, 2010).
4.1 Background
In the prior JAD studies, grid lines and a scatter plot were used in the experimental design. In this grid-and-scatter-plot setting, the grid layer serves as
the reference structure. The grid is there in an assisting capacity and should have
a lower visual priority. Going beyond just a suggestion of having a strong contrast
between the reference structure and the content layer, JAD experiments seek a
range of alpha values that bound an effective grid. A reference structure-like a
grid can be considered effective if it provides utility while not negatively affecting
the visibility of the data layer and the overall level of cluttering.
The experiments included two key tasks. The first part, known as Faint
Grid, is similar to Just Noticeable Difference (JND) studies on human perception.
Subjects were asked to adjust the transparency, or alpha value, of the grid to as
faint as possible, yet still useful. The other task, known as the Strong Grid, asked
the subject to adjust the alpha value of the grid to its strongest, but before it
became intrusive. The study does not simply ask for the participant to adjust the
optimal transparency for grid. First, it is more difficult for one to define, articulate
and decide what is optimal. Without clear instruction, the response consists of a
higher degree of subjective preference. From the pilot run in the prior studies, the
Best task was carried out and the responses are less consistent across and
within individual participant. The two boundary tasks, on the other hand, are
easier to understand and perform. The results of the Strong and Faint tasks are
more consistent.
A number of factors were tested in the prior JAD studies: polarity of grid
and background colour, plot density, and grid spacing. All the visual elements
were achromatic.
Figure 35: Screen capture showing the training session of prior JAD study (Bartram & Stone, 2010).
For dark grids, from which a black grid emerges when the subject adjusts
the strength upward, different backgrounds have no significant effect on the
subject’s choice of grid strength. Density does affect the choice of grid strength in
both the faint and strong task. Moreover, the range of alpha between the faint
and strong task increases when the density of the content increases. In the light
grid study, in which a white grid emerges when the subject increases the strength
of the grid, both the background and the density have a significant effect on the
choice of grid’s alpha value. Although subjects adjust alpha according to the plot
density, the range of alpha between faint and strong tasks is mostly set at around
0.2. It is reasonable to deduce that an alpha of 0.2 is a good JAD for reference structures like grids, as it remains useful yet unobtrusive across a wide range of data layers.
4.2 Study Design and New Independent Variables
To extend the previous JAD study, three variables are examined.
They are chromatic grid colour, image type, and density. The instructions are the
same: subjects are asked to set a faint grid and a strong grid in two separate
tasks.
4.2.1 Choice of Grid Colour
Two new colours, red and blue, are added to the study. Both colours have
the same luminance value when alpha is at its maximum. Black is also used to
test the consistency with prior JAD studies. The RGB values for the colours used,
when alpha is set at 1, are: Blue (36, 104, 217), Red (210, 9, 4) and Black (0, 0,
0).
4.2.2 Choice of Image Type and Density
For image type, four types of content layers are selected: abstract image;
contour (Line) map; street (Fill) map; and scatter plot. For each image type, two
images of different density are used.
We are interested in overlaying grids on maps made up of contour lines. These lines are the data structure, and each line represents a value. The Line map poses a challenge as it uses the same visual elements as the grid. The second type is a map filled mainly with shapes; for example, a street map. A street map is characterized by shapes at varying greyscale levels. We named this type of image 'Fill.' Again, for the sake of having the results
65
comparable to the previous JAD study, scatter plot is selected as one image
type.
The Fill images were taken from Openmap.org. Text labels were removed
to eliminate visual distraction. We chose two locations in the UK,
making it less likely that subjects would recognize and be distracted by the
locations, thereby equalizing the prior knowledge required across subjects. The
Line maps were taken from the BC Minerals and Forestry Mineral Titles Online GIS
(https://www.mtonline.gov.bc.ca). A glacier zone, having little variation in
altitude, is chosen for the sparse version of the Line map. A nearby mountain area is
chosen for the dense version. Special marks and labels are removed,
as they introduce unnecessary visual and cognitive load for the subject. The
weights of the lines are limited to two levels. The images for the scatter plot are
created with Tableau Software™.
In the case of the abstract type, a flat colour and a Gaussian noise image are
used as the sparse and dense images respectively. These test the imaginary
best- and worst-case scenarios for the data structure. The best-case scenario, where
there is no data at all in the content layer, is represented by a flat colour. We also
included a series of flat colours at different luminances used in the previous JAD
study. The worst-case scenario is a screen filled with noise. This imaginary data
layer is perceptually the most demanding case for grid strength, as the
single-pixel grid lines visually fuse with the random black noise. The abstract
image type allows us to identify and isolate the effects of minimally-useful and
almost-intrusive regardless of the data being visualized.
For the other three image types (Line, Fill, Plot), two images of each type with
different content densities are selected. Although there is no quantitative
measure separating the Dense from the Sparse versions, the difference is obvious.
A 2 (density) x 4 (image) x 3 (colour) factorial design yielded 24
experimental conditions. Each condition of the independent variables (IVs) is
replicated three times, so for each task the subject adjusts the grid strength 72
times. The order of the conditions is randomized for each subject and task:
an ordered list is put into Excel, a random number is generated and associated
with each of the 72 rows, and the two columns are then sorted by the
random number column.
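The Excel procedure, attaching a random number to each of the 72 rows and sorting on that column, amounts to a shuffle. A minimal Python sketch of the same procedure (the study itself used Excel; the level names are taken from the design above):

```python
import random
from collections import Counter

# Factor levels from the 2 x 4 x 3 design (24 conditions, replicated 3 times = 72 trials).
densities = ["Dense", "Sparse"]
image_types = ["Abstract", "Fill", "Line", "Plot"]
colours = ["Black", "Blue", "Red"]

trials = [(d, t, c) for d in densities for t in image_types for c in colours] * 3

# Equivalent of pairing each row with a random number and sorting on it:
# sorted() computes the random key once per trial, giving a uniform shuffle.
order = sorted(trials, key=lambda _: random.random())

counts = Counter(order)  # each of the 24 conditions should appear exactly 3 times
```

Each participant and task would receive a fresh ordering by re-running the shuffle.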
The order of the two tasks is counterbalanced. Half of the subjects start
the study with the Faint task first; the other half start with the Strong
task first.
Figure 36: Four image types for data layer and their two variations in density.
4.3 Instructions to Participants
Before the start of the study, participants read a standardized set of
instructions on paper (see Appendix A). We explain the two tasks: Faint and
Strong. For the Faint case, we ask the participant to adjust the grid to be as faint
as possible, up to the point where it is minimally useful. For the Strong case, we ask
the participant to adjust the grid to be as strong as possible, but to stop before it
becomes intrusive. We instruct the participant to adjust the grid until it
becomes too strong and negatively affects the reading of the data layer, and then
to lower the strength to find the point at which one more step upward
would cause the grid to become intrusive and clutter the data layer. To
further elaborate on the state of intrusiveness, we mention that if the grid
becomes a ‘fence’ on top of the data structure, it is considered intrusive. The
fence simile was suggested by a participant in a prior study, and the figure of
speech matches nicely with what we consider intrusive.
We emphasized that there are no right or wrong answers, and there is no
time limit. The participants could stop at any time to take a break, or to terminate
their participation at will.
Participants were primarily students from design, computing, and business
backgrounds. Participants were recruited through an electronic bulletin board and by
asking students on campus to sign up for the study. During recruitment,
participants were notified that this is a study on visual perception and grid design,
and that the study can take as long as an hour. We paid the participants $10 in
appreciation of their time.
Figure 37: One of the training screens with a red grid, showing the experimental setup.
4.4 Pilot Run, Adjustment and Final Design
A pilot experiment was run in December 2010, in which 18 students
participated. In the pilot, a beta version of the testing software was used. The
beta software was a rewrite of the one used in the previous JAD studies. It is written
in Processing (processing.org), an open-source programming environment for
graphics and interaction. Several new features are built into the software. For
example, the Next button does not appear until a couple of inputs (clicks) are
received. This prevents accidentally moving on to the next set of conditions if the
participant clicks more than once on the Next button. Grid colour is now a
variable. As grid colour is an independent variable in this JAD study and it is
randomized, the beta version of the testing software starts the grid at 0.5 alpha in
order to show the colour at the beginning of each condition.
At the end of the pilot, we noticed that the result from the Faint Grid task
matches the findings from the prior JAD study. However, there is a great
discrepancy for the Strong Grid task: most of the participants adjusted the grid
strength upward. The resulting means for the Strong Grid task, regardless of
image type and density, are close to 0.7. In the prior JAD studies, the mean for
the Strong Grid task settled at around 0.3.
We suspected that the combination of a different starting alpha and the same
instructions contributed to the vastly different results for the Strong Grid task. The
original instruction, asking the subjects to adjust the grid to be as strong as possible
before it became intrusive, may suggest an upward-only adjustment. As
we noted during the study, most participants did not even try adjusting the grid
strength lower. Only one individual adjusted the grid strength downward, and
that result roughly matches what we observed in prior studies.
In response to the possible design flaw in the study, and to provide direct
comparison with the previous studies, we changed the starting alpha value back
to zero in the final design of the study. The data collected in the pilot are
excluded from the analysis in this chapter.
The final study employs the same factorial design. Each participant does
two tasks: one for the Faint grid, and one for the Strong grid. In each task, a set
of 24 conditions (4 Image Types * 2 Density levels * 3 Grid Colours) is replicated
three times. A total of 72 responses are received for each task per participant. A
new group of 15 participants, approximately balanced for gender distribution, are
recruited for the final study. At the end of the study, a total of 2160 valid
responses are collected and analysed.
4.5 Hypotheses
Hypothesis 1: Grid colour would affect the alpha setting. We expect the
coloured grids (red and blue) to result in a lower alpha
setting than the black grid, as hue contributes an additional
cue to the segregation between the greyscale data layer
and the coloured reference structure.
Hypothesis 2: Alpha would be affected by Density. A data structure with
higher density would result in a higher alpha setting. The
extreme case is an area of black-and-white noise, in which
the grid is not visible unless the alpha is at a very high
value. Extrapolating from the extreme case, we postulate that
higher density will require a higher alpha value.
Hypothesis 3: Alpha settings would vary with Image Type. We are
especially interested in the Line map, because it uses the
same visual elements (thin lines) as the grid, varying only
in curved vs. straight. We would expect its setting
to be significantly different from those of the plots or the area
map.
Hypothesis 4: There would be more variability in the Strong settings than
in the Faint ones. The Faint boundary is akin to the
psychophysical property of minimum visibility; the Strong
setting is more subjective.
Hypothesis 5: The range of alpha would be affected by Image Type
and Density. The range of alpha is the difference between
the Faint and Strong grid settings. More specifically, we expect a
larger range for higher density, to address the visual
interference in the dense image.
4.6 Overall Results
Table 1 shows the mean alpha for all 24 conditions. Four cells are
highlighted to show the extremes. The highest mean or strongest grid comes
from the Strong Task––Abstract––Dense––Blue grid condition (0.7738). The
lowest mean comes from the Faint Task––Abstract––Sparse––Black grid
(0.0611). If we set aside the Abstract image type and look at only the realistic
conditions, the highest mean comes from the Strong Task––Fill––Dense––Blue
grid condition (0.4247), while the lowest mean comes from the Faint Task––
Line––Sparse––Black grid condition (0.0790). The overall mean alpha for the Faint
task is 0.114 (sd = 0.06). Without the Abstract image type, the overall mean alpha
for the Strong task is 0.330 (sd = 0.16). The higher standard deviation for the Strong
task indicates more variability and matches our prediction in H4. The error bars in
Figure 38 also provide a visual presentation of the difference in variability
between the Faint and Strong tasks. The error bars show 95% confidence intervals (CI).
(a) Sparse conditions
(b) Dense conditions
Figure 38: Mean and error plot of alpha for (a) Sparse, and (b) Dense condition.
Two obvious patterns emerge from the error plots in Figures 38a and 38b. For
image type, other than the obvious difference for the Abstract Dense condition,
the results are very similar. For grid colour, Blue consistently receives a higher
alpha value.
Table 1 Mean alpha for all 24 conditions

                      Faint                    Strong
                      Black   Blue    Red      Black   Blue    Red
  Abstract  Dense     0.3498  0.4651  0.3512   0.6403  0.7738  0.6480
  Abstract  Sparse    0.0611  0.0954  0.0773   0.2803  0.3515  0.2972
  Fill      Dense     0.1170  0.1863  0.1339   0.3169  0.4247  0.3155
  Fill      Sparse    0.0794  0.1157  0.0938   0.2588  0.3709  0.2727
  Line      Dense     0.1160  0.1586  0.1203   0.3099  0.3865  0.3156
  Line      Sparse    0.0790  0.1126  0.0888   0.2708  0.3414  0.2821
  Plot      Dense     0.1012  0.1463  0.1141   0.3358  0.4214  0.3359
  Plot      Sparse    0.0827  0.1163  0.0960   0.2915  0.3916  0.2989

To further examine the results, we ran one-way ANOVA tests to find out
whether individual factors contribute to the differences in mean alpha, and
two-way ANOVA tests to discover whether there was any interaction between
independent variables (IV). We also ran a three-way ANOVA test to see
whether there was any interaction effect produced by all three IVs together. The
dataset is split into two by Task. The test results in Tables 2, 3, and 4 cover all
three IVs, namely Image Type, Density, and Grid Colour, and their interactions.
Significant test results are shown in bold.

The first three rows in Table 2 show that all three factors are
significant, meaning the mean alpha differs significantly when the data are
grouped and compared by these IVs. For interactions between two factors, Image
Type * Grid Colour is not significant in either task. Image Type * Density is
significant. Density * Grid Colour is significant in the Faint task but not
in the Strong task. A three-way ANOVA test shows the interaction effect is
significant for the Faint task but not for the Strong task.
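A one-way ANOVA of the kind reported in these tables compares between-group to within-group variance. A minimal from-scratch sketch of the F statistic follows; the alpha values are invented for illustration and are not data from the study:

```python
def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over a list of groups."""
    all_values = [x for g in groups for x in g]
    n, k = len(all_values), len(groups)
    grand_mean = sum(all_values) / n
    # Between-group sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (df = n - k)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented alpha settings for three grid colours (illustration only):
black = [0.10, 0.12, 0.11, 0.09]
blue = [0.15, 0.16, 0.14, 0.17]
red = [0.11, 0.12, 0.10, 0.13]
f_stat = one_way_anova_f([black, blue, red])  # ≈ 16.8 for this made-up data
```

A large F relative to the F(k−1, n−k) distribution yields the small p-values reported in the tables; in practice a statistics package (such as SPSS or scipy.stats.f_oneway) would also return the p-value.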
Table 2 ANOVA results with all 24 conditions
All Conditions (24) Task Faint Strong
One-way: Image Type F(3,1076) = 87.678, p < .001 F(3,1076) = 51.576, p < .001
One-way: Density F(1,1078) = 276.691, p < .001 F(1,1078) = 109.944, p < .001
One-way: Grid Colour F(2,1077) = 19.903, p < .001 F(2,1077) = 24.034, p < .001
Two-way: Image Type * Grid Colour F(6,1068) = .890, p < .501 F(6,1068) = .214, p < .972
Two-way: Image Type * Density F(3,1072) = 274.082, p < .001 F(3,1072) = 66.211, p < .001
Two-way: Grid Colour * Density F(2,1074) = 3.789, p < .023 F(2,1074) = .284, p < .753
Three-way: Image Type * Grid Colour * Density F(6,1073) = 2.509, p < .020 F(6,1073) = .009, p < .922
As we have seen from the plots in Figure 38, the mean alpha
produced by the Abstract Dense condition is obviously different from the rest.
We suspect that the Abstract Dense condition may be skewing the ANOVA
results for Image Type, so we removed that condition and ran the ANOVA again.
Table 3 shows the ANOVA results for the dataset without the Abstract-Dense
condition. When using a one-way ANOVA to test the factor Image Type, the Faint
task remains significant whereas the Strong task does not. For the other
two IVs, the results are significant in both tasks. For interactions between two
factors, the only significant pair is Image Type * Density in the Faint
task.
Table 3 ANOVA results with all conditions except the Abstract-Dense Condition
All conditions except Abstract-Dense (21) Task Faint Strong
One-way: Image Type F(3,941) = 14.810, p < .001 F(3,941) = 2.053, p < .105
One-way: Density F(1,943) = 107.356, p < .001 F(1,943) = 16.225, p < .001
One-way: Grid Colour F(2,942) = 39.328, p < .001 F(2,942) = 31.433, p < .001
Two-way: Image Type * Grid Colour F(6,933) = .492, p < .815 F(6,933) = .429, p < .860
Two-way: Image Type * Density F(2,938) = 3.527, p < .030 F(2,938) = .161, p < .851
Two-way: Grid Colour * Density F(2,939) = 2.802, p < .061 F(2,939) = .147, p < .864
Three-way: Image Type * Grid Colour * Density F(4,941) = .333, p < .856 F(4,941) = .038, p < .997
To further isolate the conditions that may contribute to the significantly
different means, we also remove the Abstract-Sparse conditions from the dataset. In
other words, all the data in this round of ANOVA tests represent some form of
realistic condition. Image Type is no longer a significant factor in either task.
Density and Grid Colour are still significant when tested individually. For
interaction effects, Image Type * Density is the only significant interaction.
This means the effect of Image Type is not the same at the two levels of Density.
This interaction effect is further examined in Section 4.6.2.
Table 4 ANOVA results without the Abstract Image Type (Dense and Sparse)
All conditions except the Abstract-Dense and Abstract-Sparse (18) Task Faint Strong
One-way: Image Type F(2,807) = 2.327, p < .098 F(2,807) = 2.299, p < .101
One-way: Density F(1,808) = 70.461, p < .001 F(1,808) = 15.370, p < .001
One-way: Grid Colour F(2,807) = 35.309, p < .001 F(2,807) = 31.668, p < .001
Two-way: Image Type * Grid Colour F(4,801) = .454, p < .769 F(4,801) = .475, p < .754
Two-way: Image Type * Density F(2,804) = 3.313, p < .037 F(2,804) = .173, p < .841
Two-way: Grid Colour * Density F(2,804) = 2.188, p < .113 F(2,804) = .067, p < .935
Three-way: Image Type * Grid Colour * Density F(4,805) = .313, p < .869 F(4,805) = .041, p < .997
The rationale for testing the Abstract conditions is to isolate the effects of
minimally-useful and almost-intrusive regardless of the data being visualized.
However, the series of ANOVA tests shows that the Abstract conditions are skewing
the real-world results. In the rest of the analysis, we exclude the two Abstract
conditions, including them only in the range analysis in
Section 4.9.
4.6.1 Image Type
The results from Table 4 show that the mean alphas among the three real-world
Image Types, namely Fill, Line and Plot, are not significantly
different. This result refutes H3. In other words, different image types do not
affect the mean alpha at this stage of the analysis.
4.6.2 Image Type and Density
Image Type is not a significant factor by itself. However, it is significant
when interacting with Density. Without the Abstract image type, there is an
interaction between Image Type and Density for the Faint task (F(2,804) = 3.313, p <
.037). Obviously, density is contributing to the difference in mean alpha. To
investigate further, we split the dataset by Task and Density, and run a
one-way ANOVA to test the three real-world Image Types. The result in Table 5
shows that for the Faint task under the Dense condition, Image Type produces
significantly different mean alphas (F(2,402) = 4.405, p < .013). There is no
significant difference in mean alpha among Image Types for the Faint task with
Sparse, the Strong task with Dense, or the Strong task with Sparse. Figure 39
shows the error plot that demonstrates the differences in means among the
Image Type and Density combinations. The upper two graphs show the Strong
task and the lower two graphs show the Faint task. The vertical scale is
standardized at a range of 0.12 alpha. The lower-left quadrant is the only
combination of task and density for which Image Type is significant. Figure 40
further shows how Image Type and Density interact for the Faint task. Although
we use only two discrete levels of density for each image type, we connect the
sparse and dense points with lines as a redundant cue (in addition to colour) to
group the two plots visually. Refining H3, the data show that Image Type is
significant only for the Faint task under the Dense condition; in all other cases,
image type is not significant.
Table 5 One-way ANOVA tests on Image Type, with 4 separate cases of Density and Task

  Task    Density   F      Sig.
  Faint   Dense     4.405  0.013
  Faint   Sparse    0.296  0.744
  Strong  Dense     0.940  0.392
  Strong  Sparse    1.639  0.195
Figure 39: Error bar plot for image type on the X-axis, separated into four quadrants by Task and Density. When running ANOVA on individual quadrants, the mean alpha is significantly different only in the lower-left quadrant (Faint task with Dense condition).
Figure 40: Interaction effect between Density and Image Type for Faint task
4.7 Grid Colour
The RGB values for the colours used, when alpha is set at 1, are: Blue
(36, 104, 217), Red (210, 9, 4) and Black (0, 0, 0). The equivalent HSV values for
the colours used are: Blue (217, 83, 85), Red (1, 98, 82) and Black (0, 0, 0).
The HSL values for the three colours are: Blue (217, 0.72, 0.50), Red (1, 0.96,
0.42) and Black (0, 0, 0).
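The HSV and HSL equivalents above can be reproduced from the RGB values with Python's standard colorsys module (which scales channels to the 0..1 range and, for HSL, returns values in hue-lightness-saturation order); shown here for the Blue grid colour:

```python
import colorsys

# Blue grid colour from the study, RGB (36, 104, 217), scaled to 0..1
r, g, b = 36 / 255, 104 / 255, 217 / 255

h, s, v = colorsys.rgb_to_hsv(r, g, b)
hue_deg = round(h * 360)   # hue in degrees, ~217
sat_pct = round(s * 100)   # HSV saturation, ~83
val_pct = round(v * 100)   # HSV value, ~85

# Note the HLS (hue, lightness, saturation) ordering of this function.
h2, lightness, s2 = colorsys.rgb_to_hls(r, g, b)  # lightness ~0.50, saturation ~0.72
```

The same conversion applied to Red (210, 9, 4) yields the values listed in the text.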
Grid colour does affect the participants’ choice of alpha strength. Across
the two Tasks, ANOVA tests show that the differences for Grid Colour are all
significant with p < .001. Tukey’s post-hoc tests show that Blue consistently
receives a higher alpha. In the Faint task, where the responses are within the lower
range of alpha and have an overall mean of 0.11, Blue is significantly higher than
Red by .029 (p < .001) and higher than Black by .042 (p < .001).
Figure 41: Box plot showing how the distribution of alpha is affected by Grid Colour, Density, and Task
To further isolate the effects within the Grid Colour group, we use a one-way
ANOVA on the dataset divided by Density and Task. Alpha for Blue is
significantly higher than for Red and Black, by a wide margin. For the Strong grid,
alpha for Blue is 0.076 higher than Black (p < .002) in the Dense case,
and 0.086 higher than Red (p < .001) in the Sparse case. For the Faint grid,
alpha for Blue is 0.568 higher than Black (p < .001) in the Dense case, and
0.325 higher than Red (p < .001) in the Sparse case.
Between Red and Black, the differences are unclear. When cases are split
by Density and Task, Black has a higher mean alpha than
Red in the Strong grid task, whereas the reverse is observed for the Faint grid task.
The differences are small and not statistically significant. Across
the different Image Type-Density conditions, there is no significant difference
between Red and Black. A simple comparison of means shows that Red is higher
than Black in the majority of cases, but again this comparison is not
statistically significant.
Hue as an extra visual cue does not help in reducing alpha. Using Red
does not reduce alpha compared to Black. Blue, on the other hand,
requires a higher alpha than the Red or Black grid, consistently and by
a large margin. One possible explanation is that the Blue we
used is not as saturated as the Red. As a result, we refute our H1, which
expected a lower alpha for coloured grids.
4.8 Density
As expected, across all cases, the alphas for the two levels of density are
significantly different. In Tables 2, 3, and 4, we see that Dense conditions receive a
higher alpha setting when the dataset is split between the two tasks. All of the
ANOVA results indicate that Density is significant with p < .001. As a result,
H2 is confirmed.
We further break down the dataset by Image Type and Task. Running a
one-way ANOVA on the six cases shows that Density is a significant factor in
five of them. The only case that is not significant is the Strong task on Plot
(F(1,268) = 3.134, p < .078). Table 6 summarizes the results for all six cases. In
general, the results suggest that the factor Density is more significant for the Faint
task than the Strong task.
Table 6 One-way ANOVA tests on Density, with six separate cases of Image Type and Task

  Task    Image Type   F       Sig.
  Faint   Fill         38.363  0.000
  Faint   Line         31.731  0.000
  Faint   Plot          8.230  0.004
  Strong  Fill          7.942  0.005
  Strong  Line          5.317  0.022
  Strong  Plot          3.134  0.078
In Section 4.6.2, we identified that only the Faint task with the Dense
condition contributes to significantly different mean alphas across the three
Image Types; Image Type was not significant in the three other cases. Here, the
one-way ANOVA on Density shows that it is a significant factor in almost all six
combinations of Task and Image Type. Comparing the two IVs, we can
deduce that Density is a more influential and predictable factor than Image Type in
explaining the differences in mean alpha.
An extended examination of this IV, looking at the range of alpha
between Dense and Sparse, is discussed later in this chapter.
Figure 42: The slopes of the black lines show the change of mean alpha from Dense to Sparse. Other than the big difference between Abstract Dense and Abstract Sparse, the differences for the other Image Types between the two levels of Density are small and consistent.
4.9 Range between Faint Task and Strong Task
There is no existing rule of thumb for the alpha value of a grid that is just
attendable; however, we believe that value lies somewhere between the Strong
and Faint grid settings. By examining the range, we may gain some insight for
designing a better grid: one that is not turned on and off, but attended to and
ignored at will by the subject. We prepare the data by taking the mean of the
three ‘attempts’ for each of the 24 conditions in each task. We then subtract the
mean of the Faint task from the mean of the Strong task, per condition, per
participant. This yields a sample size of 360, including the Abstract Dense and
Sparse conditions. Figure 43 is the boxplot of the data for all 24 conditions,
clustered by Image Type, Density and Grid Colour.
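The range preparation described above (mean over the three attempts, then Strong minus Faint per condition per participant) can be sketched as follows; the response values are invented for illustration:

```python
from statistics import mean

def alpha_range(faint_attempts, strong_attempts):
    """Mean the attempts per task, then subtract the Faint mean from the Strong mean."""
    return mean(strong_attempts) - mean(faint_attempts)

# Invented responses for one participant on one condition (three attempts each):
faint = [0.10, 0.12, 0.11]
strong = [0.30, 0.33, 0.30]
r = alpha_range(faint, strong)  # ≈ 0.2 (0.31 - 0.11)
```

Repeating this for 15 participants across 24 conditions gives the sample of 360 range values analysed below.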
To examine whether these IVs have any effect, we ran ANOVA
tests. We first took the Abstract data out to focus on the realistic cases. Image
Type is not significant for the differences in mean alpha range
(F(2,268) = 1.884, p < .154). Density is not significant either (F(1,269) = .159, p <
.691). Grid Colour is significant for the different mean ranges (F(2,268) = 5.254, p
< .006). As evidenced by the boxplot in Figure 43, the range of alpha between
tasks is higher for Blue across all image types and densities. We then took the
Blue grid data out and re-ran the ANOVA; Grid Colour is no longer significant
(F(1,179) = .105, p < .746). There is no interaction among the IVs in any of the
datasets. Based on these significance levels, we refute H5, which postulated that
Image Type and Density would affect this range of alpha.
Figure 43: Range of Alpha between Strong and Faint task
This range measurement between tasks provides direct insight into
JAD. The mean of this dataset describes how much alpha it takes to go
from minimally useful to obtrusive. Excluding the data from the
Blue grid and the Abstract image type, the mean for this range is 0.198 (sd 0.116,
N=180). Adding the Blue grid data shifts the mean to 0.215 (sd 0.123, N=270).
Including all the conditions we tested, the mean is 0.228 (sd 0.140, N=360). These
means tell us that it takes only an extra 0.2 alpha above the JND equivalent for the
grid (Faint task) before the participant finds the structure intrusive (Strong task).
4.10 Range between Sparse and Dense Analysis
Although there was no quantitative foundation for choosing the sparse
and dense conditions, the results shed some light on JAD and on the impact of
image density on the use of alpha. The first observation is that, after removing the
Abstract image type, the mean range between the Sparse and Dense conditions is
about 0.04. The means for the Faint and Strong tasks are surprisingly similar. For the
Faint task, the mean range between Dense and Sparse conditions is 0.037 (sd
0.03, N=135). For the Strong task, the mean range between Dense and Sparse
conditions is 0.0423 (sd 0.06, N=135). This means that for real-life cases, a
denser data layer requires only a small increase in the alpha strength of the grid
layer. For the extreme case, the average range of alpha between Abstract-Sparse and
Abstract-Dense is 0.344 (sd 0.118, N=90).
Figure 44 shows the ranges of mean alpha between the Dense and Sparse
conditions. It should be noted that there are some outliers with negative ranges.
As the mean for this range is only around 0.04, the negative outliers could be
caused by the small size of the adjustment participants make for
a denser condition.
Figure 44: Range of mean alpha between Dense and Sparse conditions.
To investigate further, we use a two-way ANOVA to test the range of
alpha produced by the two levels of density. Without the Abstract conditions, Image
Type is the only factor significant for the difference in alpha
range (F(2,267) = 4.280, p < .015). Grid Colour is not significant. To our surprise,
Task is not significant either. This means that for a pair of sparse and dense
conditions of the same image type, the difference between the Faint and
Strong tasks is not significant. In other words, whether we ask the subject
to adjust the grid as light as possible in the Faint task, or as strong as possible
in the Strong task, the response to the difference in density is fairly constant
at about 0.04 more for the dense condition.
Figure 45 is a series of plots illustrating the effect of Image Type on the
range of mean alpha across Density. Figure 45a shows that the range is highest
for the Fill image type, slightly lower for Line, and lowest for Plot.
Figure 45b shows the same data with Task on the horizontal axis.
The almost flat lines for the Fill and Line image types mean the ranges
barely change between the two tasks. Figure 45c shows a further
breakdown of Figure 45a, with separate lines for the different grid colours.
Unfortunately, we cannot standardize or even quantify the difference in
density within each Image Type; we can only postulate that the Fill image type may
require a larger change in alpha when density increases. This range of mean
alpha between the two levels of density is the only measurement for
which Image Type is a significant factor. It would be an interesting topic for a
future study with a modified design.
Nonetheless, the upward adjustment of as little as 0.04 for the denser
images, regardless of Image Type or Task, is a surprising finding. The 0.04 range
in alpha is statistically significant, but not readily perceivable visually.
Combined with the findings from prior and current JAD studies,
showing that density is one of the few factors significant to
grid strength, the range analysis in this section shows that the adjustment
is significant but small in value. A generic alpha value within the JAD can serve
both dense and sparse data layers.
Figure 45: Examining image type, task, and grid colour for the range of alpha between the sparse and dense plots.
4.11 Discussion
4.11.1 Summary
The findings from this JAD study can be summarized as follows. Looking
only at the three realistic image types (Fill, Line and Plot), the mean alpha for the
Faint task is 0.12 and the mean alpha for the Strong task is 0.33. A middle value
within this range, for example 0.2, should ensure that the grid is useful and not
intrusive. At the level of individual IVs, Image Type is not a
significant factor. Grid Colour is significant, mainly due to the difference from the
conditions with the Blue grid; there is no significant difference between the alphas
for the Red and Black grids. Density is highly significant. Based on these
analyses, we can safely say that using 0.2 alpha for the grid will work for the three
image types and two levels of density we tested. Further study is needed to
explain why the Blue grid receives a higher alpha in almost all cases.
There is an interaction effect between Image Type and Density. A two-way
ANOVA shows that the interaction is significant only for the Faint task under the
Dense condition.
We further calculate and examine the ranges of mean alpha. The first
range is calculated by subtracting the mean alpha of the Faint task from that of the
Strong task. The resulting data measure how differently the participants adjust the
grid between the two tasks. The mean for this range measurement is about
0.2, meaning the participants on average add 0.2 alpha to the grid in going from
minimally useful to intrusive. We ran a two-way ANOVA to test whether the three IVs
changed this mean. Image Type and Density are not significant. Grid Colour is a
significant factor; however, removing the Blue grid data again renders this IV
non-significant.
Figure 46: Error bar plot, and the suggested range of alpha for grid design.
Another range we examined is the difference in mean alpha between the
Dense and Sparse conditions. The mean for this measurement is about 0.04,
meaning that on average the participants add an extra 0.04 alpha for the denser
image. We further ran a two-way ANOVA to examine the significance of the IVs,
including Task. Image Type is a significant factor; Grid Colour and Task are
not. It should be noted that only in this analysis of the range of mean
alpha between density levels is Image Type significant.
Figure 46 shows the error plot with colour and density on the horizontal
axis. The yellow band covers the range of alpha of 0.2 ± 0.02, the
recommended alpha value for a grid based on the results of this study. A grid
using this alpha value should be useful and not intrusive for the different Image
Types, Densities, and Grid Colours we tested in this study.
4.11.2 Cautionary Result in Starting Alpha at 0.5
As we noticed in the pilot, starting alpha at 0.5 produces a much higher
alpha for the Strong task. This result may be caused by a flaw in the study
design. In the instructions, we asked the participants to adjust the grid to be as strong
as possible, but to stop before it became intrusive. The wording may have been
suggestive enough to make the participants think they should adjust the grid
upward to see the result.
On the other hand, the much higher result for this case may be evidence
that intrusiveness is a rather fuzzy state. Subjects may be able to sense that the grid
is intrusive, but they are not able to fix it by reducing the alpha. Given the
instruction to make the grid stronger but not intrusive, when presented with a
grid that is already intrusive, subjects kept turning the grid stronger
until it was fully opaque (alpha approaching 1). When subjects were
asked to find the onset of an intrusive grid starting from 0, they usually stopped at
alpha values under 0.4.
Another possible explanation is that intrusiveness can be identified more
easily during manipulation, adjustment, and comparison. Our guess is that even
seasoned designers may need to adjust the grid strength up and down before
finding a satisfactory level. With the alpha value starting at 0.5, participants could
not tell at first sight that the grid was already intrusive. The task, which is about
tuning the grid, may have led participants to focus first on the grid, and the
suggestive instructions then only made them want to increase the grid's
strength.
We think the Starting Alpha factor should be investigated in more detail in
a future study; the current two starting-alpha settings do not support a
meaningful comparison. In a nutshell, the result with Starting Alpha at zero
closely follows the results from prior JAD studies and should be easily
replicable. The result with Starting Alpha at 0.5, especially for the Strong task,
may be skewed by factors outside the model and the study design.
4.11.3 Density and Cluttering
It is perhaps a common expectation that individuals will make the grid
stronger when a condition appears denser. The results may therefore be
skewed, including an effect of sense-making rather than a response based on
perception alone. This is especially likely because we used only two levels of
density, and they were quite easily distinguished and detected as such.
While the dense version of each image type is an obvious step up in the
visual complexity of the data structure, it may not reach the level of being
cluttered. Research on visual clutter, which suggests that an over-cluttered plot
harms both aesthetics and efficiency, and the algorithms developed to reduce
such clutter, may still apply.
5: SUMMARY
The research goal of this thesis is to learn more about transparency
and its application in generic visualization. By reviewing the research on
perceptual transparency, we have identified and organized theories regarding
the luminance ratio, figural conditions, relations between X-junctions and layer
order, the use of borders, and achromatic transparency. From the designer's
perspective, we identified examples that apply elements of perceptual
transparency in visualizations. These examples are organized into a taxonomy
to illustrate how the elements from perceptual transparency research can serve
the functions required in visualization design. These generic functions include
the need to fix occlusion, the need to show order, the need to reduce clutter,
and the need to transform one visualization technique into another.
Beyond the design stage in visualization, we conducted a study on JAD. The
study was designed to find a level of transparency for the reference layer that is
useful but not intrusive. It examined three factors that may affect the
interpretation of a finished visualization. Image Type and Density are factors
related to the type of data and the size of the dataset; Grid Colour is a factor
related to the design. We reconfirmed findings from the prior JAD study: a low
alpha value of around 0.2 sits within the boundaries of too light and too strong,
across the different Image Types and Densities we used in the study. New
findings from the study showed that the strengths of grids in red and black
are not significantly different. We have also learned about the sensitivity of the
alpha value to different Densities and Image Types.
5.1 Contributions
This thesis first attempted to bridge the disconnect between perceptual
research on transparency and its application in visualization. The
background chapter outlined the fundamental research in perceptual
transparency, and can give designers a quick overview of these theories.
We reviewed Metelli's original algebraic model, which is still the prevalent
model of colour mixing for overlay regions. The figural condition is the other key
element in perceptual transparency. Researchers like Singh and Hoffman (1998)
have outlined this less quantifiable property. Other researchers, including Kasrai
and Kingdom (2002), studied the figural condition by manipulating X-junctions.
This research shows that the figural condition and the luminance ratio work
hand in hand to produce perceptual transparency. Without the right figural
condition, we see patches of colour overlaying each other, but no transparency.
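Metelli's algebraic model can be stated compactly in code. The sketch below is a minimal illustration of the standard episcotister equations as commonly presented (the function name and the example reflectances are our own): given the reflectances a and b of two background regions seen in plain view, and their values p and q seen through the filter, it recovers the filter's transmittance alpha and its reflectance t.

```python
def metelli_params(a, b, p, q):
    """Recover the transmittance (alpha) and filter reflectance (t) of a
    perceived filter from Metelli's episcotister equations:
        p = alpha*a + (1 - alpha)*t
        q = alpha*b + (1 - alpha)*t
    a, b: reflectances of the two background regions in plain view;
    p, q: reflectances of the same regions seen through the filter."""
    alpha = (p - q) / (a - b)          # from subtracting the two equations
    t = (p - alpha * a) / (1 - alpha)  # back-substitute to get t
    return alpha, t
```

For example, a filter with alpha = 0.5 and t = 0.3 over backgrounds a = 0.8 and b = 0.2 yields p = 0.55 and q = 0.25, and the function recovers the original parameters from those four values.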
In addition to the luminance ratio and figural condition, researchers have
identified other conditions that give perceptual transparency. For example,
Anderson (1997) reported that with a grill-like pattern overlaying a block,
transparency is perceived even if no luminance change is applied; Grieco and
Roncato (2005) demonstrated that a contour connecting patches of colour
outside a flat colour, without any luminance change, gives perceptual
transparency. These researchers show that without a luminance change, but
with patches of colour or contours in a suggestive layout, an illusory
transparency emerges. In other words, the figural condition alone can group
colour patches into two layers.
Perceptual transparency is ultimately about our tendency to group visual
objects into one piece when enough conditions suggest doing so. To apply this
knowledge to visualization, we need to step back and rethink the requirements
of visualization: why do we need to apply transparency at all? From Chapter 2
to Chapter 3, our focus shifted from using the luminance ratio and figural
condition to show transparency, to enhancing plot integrity, and thus the figural
condition, by showing transparency.
In Chapter 3, the function-based taxonomy outlines the different uses of
transparency in visualization. The most direct benefit of using transparency is
reducing the occlusion problem. Occlusion is a potential issue whenever location
is used to encode value, and transparency provides an immediate fix: the
requirement in this case is to show the presence of a partially or fully occluded
plot. Furthermore, transparency can show the composition of heavily occluded
plots, which adds utility for transparency as a cue in visualization. A low alpha
value allows more overlays to be seen in heavily overlapped areas, but makes
plots in un-occluded areas less visible. However, if the simple algebraic model
is used for mixing colours, the change in contrast for each additional overlay
shrinks, and this diminishing contrast worsens if a higher alpha value is used.
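The diminishing contrast under the simple algebraic model can be demonstrated numerically. The sketch below (illustrative values, not code from the study) composites the same semi-transparent colour repeatedly over a background using the standard "over" blend and records how much each additional overlay changes the result; the increments shrink geometrically by a factor of (1 − alpha) per overlay.

```python
def composite_over(bg, fg, alpha):
    """Standard 'over' blend of one semi-transparent layer on a background."""
    return alpha * fg + (1 - alpha) * bg

def overlay_increments(bg, fg, alpha, n):
    """Luminance change contributed by each successive identical overlay."""
    increments, value = [], bg
    for _ in range(n):
        new_value = composite_over(value, fg, alpha)
        increments.append(abs(new_value - value))
        value = new_value
    return increments

# Each extra overlay changes the result less than the one before it:
steps = overlay_increments(bg=1.0, fg=0.0, alpha=0.2, n=5)
assert all(a > b for a, b in zip(steps, steps[1:]))
```

With alpha = 0.2, the first overlay shifts the luminance by 0.2, the second by 0.16, the third by 0.128, and so on: after a handful of overlays, an additional plot is barely distinguishable.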
Another type of usage is overlaying layers. Here, overlapping is not an issue but
the feature that makes the technique work. Often there is a map layer at the back
and one or more data layers on top; by applying transparency we can see both.
One criterion for this type of usage is allowing the details in the background to
be seen, while at the same time it is the data layer that requires attention. If
there is only one data layer, a medium alpha value should give the best balance.
But when there are more data layers on top of the background, it is harder to
generalize a rule. How much alpha should be used depends on the priority of
seeing the details in the background, and on whether prior knowledge of the
background is expected.
When layers or plots overlap, a by-product is the order of overlay.
The order can be useful or unwanted depending on the nature of the data. For
generic plots that should be viewed with the same visual priority, the order
resulting from overlay may be unwanted; for time-based events, the plot order
may be used to show the sequence of events. We saw a hypothetical case in
which plots can hint at the ordering: the opacity and the outline can affect
whether a plot appears to be on top of all the others.
To further examine the use of both transparency and contour, we saw the
practice of manipulating the order of bubble plots to reduce clutter. By
plotting smaller bubbles on top and larger bubbles at the bottom, like the
configuration of the Tower of Hanoi, the smaller bubbles remain visible and
the overall clutter is subtly reduced.
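The Tower-of-Hanoi draw order amounts to sorting the bubbles by size before painting. A minimal sketch with made-up data (the tuples and values are hypothetical, not from any figure in the thesis):

```python
# Bubbles as (x, y, radius) tuples; illustrative data only.
bubbles = [(1, 2, 30), (2, 1, 5), (0, 0, 12)]

# Sort descending by radius so a painter's-algorithm renderer draws the
# large bubbles first (bottom) and the small bubbles last (on top),
# Tower-of-Hanoi style.
draw_order = sorted(bubbles, key=lambda b: b[2], reverse=True)

radii = [r for _, _, r in draw_order]
assert radii == sorted(radii, reverse=True)
```

Combined with a low alpha and an outline on each bubble, this keeps every mark at least partially legible.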
At a level higher than merely fixing occlusions, applying transparency
enables certain types of visualization techniques; in other words, without
transparency, these techniques could not exist. To illustrate the point, we
examined some examples of transforming one technique into another with the
help of transparency. One example is transforming small multiples into an
overlay plot. We identified issues of scalability: only a few layers can be stacked
along the Z-index. There is also an issue of practicality: in most overlays of
comparable graphs, the majority of regions overlap and only the fringes are
un-occluded. Nonetheless, stacking data layers allows direct comparison, which
is not possible with small multiples or with graphs stacked along the Y-axis.
Another transformation is clustering lines into planes. We identified a
number of parallel-coordinates-based techniques that cluster plot lines into
planes. Transparency is necessary for, and enables, this type of transformation.
The benefit is a reduction in clutter, while the cost is the loss of detail once
plots are clustered into planes.
The function-based taxonomy also recognizes that transparency may
carry semiotics that can be applied to data encoding. This implies a varying
alpha value when encoding non-binary data. When the data density is high,
overlapping at a single level of alpha visually becomes a continuous change of
alpha value; this case illustrates that certain properties can convert into one
another when density is high. The chapter also provides examples of, and
rationales for, varying the alpha value to encode data. Transparency carries
special meaning when visualizing properties like uncertainty, completeness,
and solidness.
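As an illustration of encoding with a varying alpha, one common approach (a hypothetical helper, not code from the thesis) maps an uncertainty score to opacity so that certain data is drawn solid and uncertain data fades toward transparency, with a floor that keeps even the most uncertain mark visible:

```python
def uncertainty_to_alpha(uncertainty, alpha_min=0.2, alpha_max=1.0):
    """Map an uncertainty score in [0, 1] to an alpha value: certain data
    (uncertainty 0) is drawn fully opaque, uncertain data fades out.
    The alpha_min floor keeps even the most uncertain mark visible."""
    u = max(0.0, min(1.0, uncertainty))  # clamp out-of-range scores
    return alpha_max - u * (alpha_max - alpha_min)
```

The direction of the mapping matters: fading with uncertainty exploits the intuitive association between transparency and tentativeness that this section describes.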
In Chapter 4, we conducted a study on a selected setting of transparency
use. It follows and extends a prior JAD study, which examined the alpha setting
for a grid over a data layer and how it is affected by factors like background
colour, grid spacing, and plot density. In this new study, the Image Type of the
data layer, Density, and Grid Colour were examined. The results show that when
two layers of very different visual priority are overlaid, an alpha value of 0.2 for
the reference layer should be used to avoid intrusiveness. Of the five
hypotheses we made, we refuted a couple. The significance of Density is the
only expected result. For Image Type, there is no effect on the alpha; only in the
analysis of the range of mean alpha between Density conditions do we find
Image Type significant. Unfortunately, this could simply be due to variation in
Density among the three Image Types, rather than the nature of the Image
Types themselves.
A more interesting discovery is that, after finding evidence that density
contributes to a different alpha, we found the range of adjustment for a denser
data structure to be only 0.02, the same for both the Faint and the Strong tasks.
Given that this amount is barely detectable, numerically or visually, the finding
may support using a generic low alpha for grids regardless of data density. On
the other hand, it may also suggest that we can somehow distinguish and
respond to such a subtle difference in alpha. A future study on these two
mutually exclusive postulates could be interesting. A more myth-busting result
concerns the use of colour in grids: the test results show that the alpha for a
Red grid is not significantly different from that for a Black grid, whereas a Blue
grid requires a higher alpha.
5.2 Future study
There is still much to be studied with respect to applying transparency in
visualization. The following list outlines a number of ideas for future study,
ordered according to the expected complexity of the study.
Plot order: The first potential area of study is whether or not the
transparency of plots can be used to carry time or to order data. We saw the
visual effects of ordering in Figure 24; it is unfortunate that the order shown does
not match the sequence of the events. We believe a small fix to the program, or
even a sorting of the data, would match the visual ordering effects to the event
sequence and provide an additional visual cue for time-based data.
Blending algorithm: Another study idea is to evaluate the potential of a
different algorithm for colour blending. The current algorithm for blending
overlapping regions in most graphics engines follows the additive model
suggested by Metelli; the result promotes perceptual transparency rather than
the integrity of plots in visualization. We have also seen, from Fuchs's
transparency in Chapter 2 and from multiple examples in Chapter 3, that adding
contours to plots enhances both perceptual transparency and plot integrity. One
proposed blending algorithm is to use a linear increment for each additional
overlay. This may satisfy both the need to show transparency and the need for
plot integrity, while pushing the limit on the number of overlays.
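One way to read the proposed linear-increment rule (our interpretation, not a finished specification) is that each additional overlay contributes a fixed step of coverage until full opacity is reached, so every overlay up to saturation remains equally distinguishable, unlike the geometric fall-off of the standard "over" operator:

```python
def linear_increment_value(n_overlays, step=0.1):
    """Coverage after n identical overlays under a linear-increment rule:
    each overlay adds a fixed 'step' of coverage, capped at 1.0, so the
    contrast change per overlay stays constant until the cap is reached."""
    return min(1.0, n_overlays * step)

# Constant per-overlay change (until saturation), unlike 'over' blending:
deltas = [linear_increment_value(k + 1) - linear_increment_value(k)
          for k in range(5)]
assert all(abs(d - 0.1) < 1e-9 for d in deltas)
```

The step size trades off headroom against contrast: a step of 0.1 accommodates ten distinguishable overlays before saturating, whereas the algebraic model never saturates but quickly becomes indiscriminable.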
JAD on bands: The effects of false shapes and colours are worth further
study by themselves. In the current JAD study, these factors were minimized
because the grid is composed of lines only. However, there are ample examples
of reference structures that use bands to highlight regions. It would be
interesting to know how well the findings of the current JAD study apply to an
area-based reference structure.
Interaction with parallax motion: It is known that interactive input from
users, like parallax motion and brushing, can enhance the perception of objects
in an overlay visualization, but due to the scope of this thesis these tools were
purposely omitted from the analysis. In general, interaction enhances the
visibility of a complete shape. We have seen how brushing can be used in
parallel coordinates when the mouse moves over a line. A more 'complete' and
natural solution could use motion detection. As gyroscopic sensors on mobile
devices (e.g. iPhones™) and motion-detecting cameras (e.g. Xbox Kinect™)
become more widely available, issues like full occlusion could be easily fixed by
adding a small distance along the Z-axis and shifting all layers according to
head (viewpoint) or device movement.
Detecting the limits and suggesting a different design: This is a bigger
research direction that applies not only to applying transparency, but to
visualization in general. Using transparency can quickly fix issues like occlusion,
but it can also quickly fall apart when more data are added. A future research
direction could extend this analysis of the end result of a visualization and
detect whether certain dimensions have gone beyond their limits, in combination
with the values of other dimensions. The results of such research on detecting
limits would be a stepping-stone toward an automated system that suggests
alternative designs based on the nature and amount of the data.
APPENDICES
Appendix A Printed Instructions Given to Participants
Information about the Study for Participants
Subtle visualization: visual attention and design practice in information representation.
Experiment 1: Grid/image design
Certain visual elements in many visualizations are used for reference rather than data: examples are grids, labels, and contour lines. These elements need to be accessible without being too obtrusive. Visual designers understand and carefully manipulate this balance between these elements and data in the image. However, this balance is often difficult to maintain in dynamic computer-based visualizations where the amount of information in the image is constantly changing. The general goal of our research is to understand and quantify these subtle aspects of visual representation required in dense information displays such that they can be algorithmically manipulated to match human requirements in interactive and dynamic conditions.

The objective of this particular experiment is to determine whether there is a common judgment among participants of grid appearance against certain types of images and background. You are asked to manipulate the transparency setting of a grid in two separate tasks. In one, you will set the grid transparency to meet your best judgment of how obvious it can be before becoming too intrusive; in the other you will adjust the setting until the grid is just perceptible, without being unnoticeable or unusable. You will adjust the transparency using the mouse. There are no time restrictions and no "correct" answer, so please take your time and play with the settings until you are satisfied with the result.

We are explicitly looking at two outcomes: first, the final transparency settings for each of these tasks with respect to the particular types of image and backgrounds; and second, the steps you took to get to that setting. All data will be kept confidential.

If you are interested in the results of this study, please contact:
Lyn Bartram
[email protected]
778.782.7439 / 604.908.9954

If you have any concerns about this study, please contact:
Dr. Hal Weinberg, Director
Office of Research Ethics
[email protected]
REFERENCE LIST
Adelson, E.H. & Anandan, P. (1990, July 20). Ordinal characteristics of transparency, AAAI-90 Workshop on Qualitative Vision, Boston, MA.
Anderson, B.L. (1997). A Theory of illusory lightness and transparency in monocular and binocular images: The role of contour junctions. Perception, 26(4), 419–453.
Bartram, L. & Stone, M. (2010). Whisper, Don't Scream: Grids and Transparency. IEEE Transactions on Visualization and Computer Graphics, 17(10), 1444–1458.
Beck, J., Prazdny, K. & Ivry, R. (1984). The perception of transparency with achromatic colors. Perception & Psychophysics, 35, 407–422.
Bier, E.A., Stone, M.C., Pier, K., Buxton, W. & DeRose, T.D. (1993). Toolglass and magic lenses: the see-through interface. Proceedings of the 20th annual conference on Computer graphics and interactive techniques.
Chen, C. (2005). Top 10 unsolved information visualization problems. IEEE Computer Graphics and Applications, 25(4), 12–16.
Chen, V.J. & D'Zmura, M. (1998). Test of a convergence model for color transparency perception. Perception 27(5), 595–608.
Colantoni, P., D'Zmura, M., Knoblauch, K. & Laget, B. (1997). Detection of color transparency. SPIE 3016: 360–366.
Collins, C., Penn, G. & Carpendale, S. (2009). Bubble sets: Revealing set relations with isocontours over existing visualizations. IEEE Transactions on Visualization and Computer Graphics, 15(6), 1009–1016.
Coninx, A., Bonneau, G.P., Droulez, J. & Thibault, G. (2011). Visualization of uncertain scalar data fields using color scales and perceptually adapted noise. Proceedings of the 8th Annual Conference on Applied Perception in Graphics and Visualization.
Correa, C.D., Chan, Y. & Ma, K. (2009). A framework for uncertainty-aware visual analytics. Proceedings of IEEE Visual Analytics Science and Technology, October 11–13, Atlantic City, NJ.
Da Pos, O. (1999). The perception of transparency with chromatic colours. Research in Perception. Eds M Zanforlin, L Tommasi (Padua: Logos) 47–68.
D'Zmura, M., Rinner, O. & Gegenfurtner, K.R. (2000). The colors seen behind transparent filters. Perception 29: 911 – 926.
ESRI. (n.d.) GIS Dictionary, ESRI Press. Retrieved from http://support.esri.com/en/knowledgebase/Gisdictionary
Ellis, G. & Dix, A. (2007). A taxonomy of clutter reduction for information visualisation. IEEE Transactions on Visualization and Computer Graphics, 13(6), 1216–1223.
Faul, F. & Ekroll, V. (2002). Psychophysical model of chromatic perceptual transparency based on subtractive color mixture. Journal of the Optical Society of America A, 19(6), 1084–1095.
Few, S. (2008, February). Practical Rules for Using Color in Charts. Visual Business Intelligence Newsletter, Perceptual Edge. Retrieved from http://www.perceptualedge.com/articles/visual_business_intelligence/rules_for_using_color.pdf
Fisher, D. (2007). Hotmap: Looking at geographic attention. IEEE Transactions on Visualization and Computer Graphics 13(6) 1184-119.
Fukuda, M. & Masin, S.C. (1994). Test of balanced transparency. Perception 23(1) 37–43.
Fulvio, J.M., Singh, M. & Maloney, L.T. (2006). Combining achromatic and chromatic cues to transparency. Journal of Vision 6(8) 760–776.
Grieco, A. & Roncato, S. (2005). Lines that induce phenomenal transparency. Perception 34(4): 391–407.
Griethe, H. & Schumann, H. (2005). Visualizing uncertainty for improved decision making. Proceedings in the 4th International Conference on Business Informatics Research, Skövde, Sweden.
Harrison, B.J., Kurtenbach, G. & Vicente, K.J. (1995). An experimental evaluation of transparent user interface tools and information content. Proceedings of User Interface Software and Technology ‘95, Pittsburgh, PA.
Harrison, B.J. & Vicente, K.J. (1996). An experimental evaluation of transparent menu usage. Proceedings of CHI ‘96. Retrieved http://www.sigchi.org/chi96/proceedings/papers/Harrison/blh_txt.htm
Kasrai, R. & Kingdom, F.A.A. (2002). Achromatic transparency and the role of local contours. Perception 31(7): 775–790.
Kitaoka, A. (2005). Perceptual transparency: A new explanation of perceptual transparency connecting the X-junction contrast-polarity model with the luminance-based arithmetic model. Japanese Psychological Research, 47(3): 175–187.
Kosara, R., Bendix, F. & Hauser, H. (2006). Parallel sets: Interactive exploration and visual Analysis of categorical data. IEEE Transactions on Visualization and Computer Graphics 12(4): 558–568.
Levkowitz, H. & Herman, T. (1992). Color scales for image data. IEEE Computer Graphics & Applications, 12(1): 72-80.
Logvinenko, A.D., Adelson, E.H., Ross, D.A. & Somers, D. (2005). Straightness as a cue for luminance edge interpretation. Perception & Psychophysics, 67(1) 120–128.
Masin, S.C. (1984). An experimental comparison of three- versus four-surface phenomenal transparency. Perception & Psychophysics 35(4), 325–332.
Masin, S.C. (1998). The luminance conditions of Fuchs's transparency in two-dimensional patterns. Perception 27(7) 851–859.
Masin, S.C. (2006). Test of models of achromatic transparency. Perception 35(12) 1611 – 1624.
McDonnell, K.T. & Mueller, K. (2008, May). Illustrative parallel coordinates. Computer Graphics Forum 27(3) 1031–1038.
Metelli, F. (1974). The perception of transparency. Scientific American, 230(4), 91–98.
Metelli, F., Da Pos, O. & Cavedon, A. (1985). Balanced and unbalanced, complete and partial transparency. Perception and Psychophysics, 38(4):354–36.
Pang, A. (2008). Visualizing Uncertainty in Natural Hazards. In: Risk Assessment, Modeling and Decision Support. Springer, 261–294.
Reijner, H. (2008). The development of the horizon graph. Proceedings of Vis08 Workshop From Theory to Practice: Design, Vision and Visualization.
Rogowitz, B. & Treinish, L. (1998). Data visualization: The end of the rainbow. IEEE Spectrum, 35(12): 52–59.
Rosenholtz, R., Li, Y., Mansfield, J. & Jin, Z. (2005). Feature congestion: A measure of display clutter. Proceedings of SIGCHI (pp. 761–770), Portland, Oregon.
Singh, M. & Hoffman, D. (1998). Part boundaries alter the perception of transparency. Psychological Science 9(5) 370–378.
Silva, S., Santos, B.S. & Madeira, J. (2011). Using color in visualization: A Survey. Computers & Graphics, 35(2): 320-333.
Stone, M.C. (2003) A Field Guide to Digital Color, A K Peters, Natick MA.
Stone, M.C. & Bartram, L. (2009). Alpha, contrast and the perception of visual metadata. In Color Imaging Conf..
Tommasi, M. (1999). A ratio model of perceptual transparency. Perceptual and Motor Skills 89(3), 891 – 897.
Tufte, E.R. (1990). Envisioning Information. Graphics Press, Cheshire CT.
Wang, L., Giesen, J., McDonnell, K.T., Zolliker, P. & Mueller, K. (2008). Color design for illustrative visualization. IEEE Transactions on Visualization and Computer Graphics, 14(6), 1739–1754.
Ware, C (1988) Color sequences for univariate maps: Theory, experiments and principles, IEEE Computer Graphics and Applications, 8(5): 41-49.
Wegman, E.J. & Luo, Q. (1996). High dimensional clustering using parallel coordinates and the grand tour. Computing Science and Statistics, 28, 352–360.