RT Journal Article
JF IEEE Transactions on Visualization & Computer Graphics
YR 2009
VO 16
IS
SP 419
TI Texture Mapping via Optimal Mass Transport
A1 Allen Tannenbaum,
A1 Ayelet Dominitz,
K1 Texture mapping
K1 optimal mass transport
K1 parametrization
K1 spherical wavelets.
AB In this paper, we present a novel method for texture mapping of closed surfaces. Our method is based on the technique of optimal mass transport (also known as the “earth-mover's metric”). This is a classical problem that concerns determining the optimal way, in the sense of minimal transportation cost, of moving a pile of soil from one site to another. In our context, the resulting mapping is area preserving and minimizes angle distortion in the optimal mass sense. Indeed, we begin with an angle-preserving mapping (which may greatly distort area) and then correct it using the mass transport procedure derived via a certain gradient flow. In order to obtain fast convergence to the optimal mapping, we incorporate a multiresolution scheme into our flow. We also use ideas from discrete exterior calculus in our computations.
PB IEEE Computer Society, [URL:http://www.computer.org]
SN 1077-2626
LA English
DO 10.1109/TVCG.2009.64
LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.64

RT Journal Article
JF IEEE Transactions on Visualization & Computer Graphics
YR 2009
VO 16
IS
SP 434
TI Yet Faster Ray-Triangle Intersection (Using SSE4)
A1 Adam Herout,
A1 Jiří Havel,
K1 Ray tracing
K1 geometric algorithms.
AB Ray-triangle intersection is an important algorithm, not only in the field of realistic rendering (based on ray tracing) but also in physics simulation, collision detection, modeling, etc. Obviously, the speed of this well-defined algorithm's implementations is important because calls to such a routine are numerous in rendering and simulation applications. Contemporary fast intersection algorithms, which use SIMD instructions, focus on the intersection of ray packets against triangles. For intersection between single rays and triangles, operations such as horizontal addition or dot product are required. The SSE4 instruction set adds the dot product instruction which can be used for this purpose. This paper presents a new modification of the fast ray-triangle intersection algorithms commonly used, which—when implemented on SSE4—outperforms the current state-of-the-art algorithms. It also allows both a single ray and ray packet intersection calculation with the same precomputed data. The speed gain measurements are described and discussed in the paper.
PB IEEE Computer Society, [URL:http://www.computer.org]
SN 1077-2626
LA English
DO 10.1109/TVCG.2009.73
LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.73

RT Journal Article
JF IEEE Transactions on Visualization & Computer Graphics
YR 2009
VO 16
IS
SP 484
TI Enhanced Voxelization and Representation of Objects with Sharp Details in Truncated Distance Fields
A1 Miloš Šrámek,
A1 Pavol Novotný,
A1 Leonid I. Dimitrov,
K1 Voxelization
K1 truncated distance fields
K1 run-length compression
K1 CSG operations
K1 sharp details
K1 implicit solids
K1 artifacts
K1 representability.
AB This paper presents a new method for voxelization of solid objects containing sharp details. Voxelization is a sampling process that transforms a continuously defined object into a discrete one represented as a voxel field. The voxel field can be used for rendering or other purposes, which often involve a reconstruction of a continuous approximation of the original object. Objects to be voxelized need to fulfill certain representability conditions; otherwise, disturbing artifacts appear during reconstruction. The method proposed here extends traditional distance-based voxelization by a priori detection of sharp object details and their subsequent modification in such a way that the resulting object to be voxelized fulfills the representability conditions. The resulting discrete objects are represented by means of truncated (i.e., narrow-band) distance fields, which reduce memory requirements and enable further processing by level-set techniques. This approach is exemplified by two classes of solid objects that normally contain such sharp details: implicit solids and solids resulting from CSG operations. In both cases, the sharp details are rounded to a specific curvature dictated by the sampling distance.
PB IEEE Computer Society, [URL:http://www.computer.org]
SN 1077-2626
LA English
DO 10.1109/TVCG.2009.74
LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.74

RT Journal Article
JF IEEE Transactions on Visualization & Computer Graphics
YR 2009
VO 16
IS
SP 439
TI Hierarchical Aggregation for Information Visualization: Overview, Techniques, and Design Guidelines
A1 Jean-Daniel Fekete,
A1 Niklas Elmqvist,
K1 Aggregation
K1 clustering
K1 clutter reduction
K1 massive data sets
K1 visual exploration
K1 visual analytics.
AB We present a model for building, visualizing, and interacting with multiscale representations of information visualization techniques using hierarchical aggregation. The motivation for this work is to make visual representations more visually scalable and less cluttered. The model allows for augmenting existing techniques with multiscale functionality, as well as for designing new visualization and interaction techniques that conform to this new class of visual representations. We give some examples of how to use the model for standard information visualization techniques such as scatterplots, parallel coordinates, and node-link diagrams, and discuss existing techniques that are based on hierarchical aggregation. This yields a set of design guidelines for aggregated visualizations. We also present a basic vocabulary of interaction techniques suitable for navigating these multiscale visualizations.
PB IEEE Computer Society, [URL:http://www.computer.org]
SN 1077-2626
LA English
DO 10.1109/TVCG.2009.84
LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.84

RT Journal Article
JF IEEE Transactions on Visualization & Computer Graphics
YR 2009
VO 16
IS
SP 468
TI Mélange: Space Folding for Visual Exploration
A1 Yann Riche,
A1 Jean-Daniel Fekete,
A1 Nathalie Henry-Riche,
A1 Niklas Elmqvist,
K1 Interaction
K1 visualization
K1 navigation
K1 exploration
K1 folding
K1 split screen
K1 space distortion
K1 focus+context.
AB Navigating in large geometric spaces—such as maps, social networks, or long documents—typically requires a sequence of pan and zoom actions. However, this strategy is often ineffective and cumbersome, especially when trying to study and compare several distant objects. We propose a new distortion technique that folds the intervening space to guarantee visibility of multiple focus regions. The folds themselves show contextual information and support unfolding and paging interactions. We conducted a study comparing the space-folding technique to existing approaches and found that participants performed significantly better with the new technique. We also describe how to implement this distortion technique and give an in-depth case study on how to apply it to the visualization of large-scale 1D time-series data.
PB IEEE Computer Society, [URL:http://www.computer.org]
SN 1077-2626
LA English
DO 10.1109/TVCG.2009.86
LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.86

RT Journal Article
JF IEEE Transactions on Visualization & Computer Graphics
YR 2009
VO 16
IS
SP 499
TI An Evaluation of Prefiltered B-Spline Reconstruction for Quasi-Interpolation on the Body-Centered Cubic Lattice
A1 Balázs Csébfalvi,
K1 Filtering
K1 sampling
K1 volume visualization.
AB In this paper, we demonstrate that quasi-interpolation of orders two and four can be efficiently implemented on the Body-Centered Cubic (BCC) lattice by using tensor-product B-splines combined with appropriate discrete prefilters. Unlike the nonseparable box-spline reconstruction previously proposed for the BCC lattice, the prefiltered B-spline reconstruction can utilize the fast trilinear texture-fetching capability of recent graphics cards. Therefore, it can be applied for rendering BCC-sampled volumetric data interactively. Furthermore, we show that a separable B-spline filter can suppress the postaliasing effect much more isotropically than a nonseparable box-spline filter of the same approximation power. Although prefilters that make the B-splines interpolating on the BCC lattice do not exist, we demonstrate that quasi-interpolating prefiltered linear and cubic B-spline reconstructions can still provide image quality similar to or higher than that of the interpolating linear box-spline and prefiltered quintic box-spline reconstructions, respectively.
PB IEEE Computer Society, [URL:http://www.computer.org]
SN 1077-2626
LA English
DO 10.1109/TVCG.2009.87
LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.87

RT Journal Article
JF IEEE Transactions on Visualization & Computer Graphics
YR 2009
VO 16
IS
SP 407
TI A Point-Cloud-Based Multiview Stereo Algorithm for Free-Viewpoint Video
A1 Wenli Xu,
A1 Yebin Liu,
A1 Qionghai Dai,
K1 Multiview stereo
K1 MVS
K1 free-viewpoint video
K1 point cloud.
AB This paper presents a robust multiview stereo (MVS) algorithm for free-viewpoint video. Our MVS scheme is entirely point-cloud-based and consists of three stages: point cloud extraction, merging, and meshing. To guarantee reconstruction accuracy, point clouds are first extracted according to a stereo matching metric which is robust to noise, occlusion, and lack of texture. Visual hull information, frontier points, and implicit points are then detected and fused with point fidelity information in the merging and meshing steps. All aspects of our method are designed to counteract potential challenges in MVS data sets for accurate and complete model reconstruction. Experimental results demonstrate that our technique produces the most competitive performance among current algorithms under sparse viewpoint setups, on both static and motion MVS data sets.
PB IEEE Computer Society, [URL:http://www.computer.org]
SN 1077-2626
LA English
DO 10.1109/TVCG.2009.88
LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.88

RT Journal Article
JF IEEE Transactions on Visualization & Computer Graphics
YR 2009
VO 16
IS
SP 394
TI Real-Time Rendering Method and Performance Evaluation of Composable 3D Lenses for Interactive VR
A1 Jan-Phillip Tiesel,
A1 Christopher M. Best,
A1 Christoph W. Borst,
K1 Interaction styles
K1 virtual reality
K1 volumetric lens
K1 windowing systems.
AB We present and evaluate a new approach for real-time rendering of composable 3D lenses for polygonal scenes. Such lenses, usually called “volumetric lenses,” are an extension of 2D Magic Lenses to 3D volumes in which effects are applied to scene elements. Although the composition of 2D lenses is well known, 3D composition was long considered infeasible due to both geometric and semantic complexity. Nonetheless, for a scene with multiple interactive 3D lenses, the problem of intersecting lenses must be considered. Intersecting 3D lenses in meaningful ways supports new interfaces such as hierarchical 3D windows, 3D lenses for managing and composing visualization options, or interactive shader development by direct manipulation of lenses providing component effects. Our 3D volumetric lens approach differs from other approaches and is one of the first to address efficient composition of multiple lenses. It is well-suited to head-tracked VR environments because it requires no view-dependent generation of major data structures, allowing caching and reuse of full or partial results. A Composite Shader Factory module composes shader programs for rendering composite visual styles and geometry of intersection regions. Geometry is handled by Boolean combinations of region tests in fragment shaders, which allows both convex and nonconvex CSG volumes for lens shape. Efficiency is further addressed by a Region Analyzer module and by broad-phase culling. Finally, we consider the handling of order effects for composed 3D lenses.
PB IEEE Computer Society, [URL:http://www.computer.org]
SN 1077-2626
LA English
DO 10.1109/TVCG.2009.89
LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.89

RT Journal Article
JF IEEE Transactions on Visualization & Computer Graphics
YR 2009
VO 16
IS
SP 513
TI Scalable L-Infinite Coding of Meshes
A1 Alin Alecu,
A1 Dan C. Cernea,
A1 Adrian Munteanu,
A1 Peter Schelkens,
A1 Jan Cornelis,
K1 L-infinite coding
K1 L-2 coding
K1 scalable mesh coding
K1 MESHGRID
K1 3D graphics
K1 MPEG4-AFX
K1 1-CPRS
AB The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in the L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, a scalable 3D object encoding system that is part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. We conclude that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, enables a fast real-time implementation of rate allocation, and preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
PB IEEE Computer Society, [URL:http://www.computer.org]
SN 1077-2626
LA English
DO 10.1109/TVCG.2009.90
LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.90

RT Journal Article
JF IEEE Transactions on Visualization & Computer Graphics
YR 2009
VO 16
IS
SP 381
TI A Novel Prototype for an Optical See-Through Head-Mounted Display with Addressable Focus Cues
A1 Dewen Cheng,
A1 Sheng Liu,
A1 Hong Hua,
K1 Three-dimensional displays
K1 mixed and augmented reality
K1 focus cues
K1 accommodation
K1 retinal blur
K1 convergence
K1 user studies.
AB We present the design and implementation of an optical see-through head-mounted display (HMD) with addressable focus cues utilizing a liquid lens. We implemented a monocular bench prototype capable of addressing the focal distance of the display from infinity to as close as 8 diopters. Two operation modes of the system were demonstrated: a vari-focal plane mode in which the accommodation cue is addressable, and a time-multiplexed multi-focal plane mode in which both the accommodation and retinal blur cues can be rendered. We further performed experiments to assess the depth perception and eye accommodative response of the system operated in a vari-focal plane mode. Both subjective and objective measurements suggest that the perceived depths and accommodative responses of the user match with the rendered depths of the virtual display with addressable accommodation cues, approximating the real-world 3-D viewing condition.
PB IEEE Computer Society, [URL:http://www.computer.org]
SN 1077-2626
LA English
DO 10.1109/TVCG.2009.95
LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.95

RT Journal Article
JF IEEE Transactions on Visualization & Computer Graphics
YR 2009
VO 16
IS
SP 455
TI Representation-Independent In-Place Magnification with Sigma Lenses
A1 Olivier Bau,
A1 Caroline Appert,
A1 Emmanuel Pietriga,
K1 Graphical user interfaces
K1 visualization techniques and methodologies
K1 interaction techniques
K1 evaluation/methodology.
AB Focus+context interaction techniques based on the metaphor of lenses are used to navigate and interact with objects in large information spaces. They provide in-place magnification of a region of the display without requiring users to zoom into the representation and consequently lose context. In order to avoid occlusion of its immediate surroundings, the magnified region is often integrated in the context using smooth transitions based on spatial distortion. Such lenses have been developed for various types of representations using techniques often tightly coupled with the underlying graphics framework. We describe a representation-independent solution that can be implemented with minimal effort in different graphics frameworks, ranging from 3D graphics to rich multiscale 2D graphics combining text, bitmaps, and vector graphics. Our solution is not limited to spatial distortion and provides a unified model that makes it possible to define new focus+context interaction techniques based on lenses whose transition is defined by a combination of dynamic displacement and compositing functions. We present the results of a series of user evaluations that show that one such new lens, the speed-coupled blending lens, significantly outperforms all others.
PB IEEE Computer Society, [URL:http://www.computer.org]
SN 1077-2626
LA English
DO 10.1109/TVCG.2009.98
LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.98

RT Journal Article
JF IEEE Transactions on Visualization & Computer Graphics
YR 2009
VO 16
IS
SP 355
TI Real-Time Detection and Tracking for Augmented Reality on Mobile Phones
A1 Dieter Schmalstieg,
A1 Tom Drummond,
A1 Alessandro Mulloni,
A1 Gerhard Reitmayr,
A1 Daniel Wagner,
K1 Information interfaces and presentation
K1 multimedia information systems
K1 artificial
K1 augmented
K1 and virtual realities
K1 image processing and computer vision
K1 scene analysis
K1 tracking.
AB In this paper, we present three techniques for 6DOF natural feature tracking in real time on mobile phones. We achieve interactive frame rates of up to 30 Hz for natural feature tracking from textured planar targets on current generation phones. We use an approach based on heavily modified state-of-the-art feature descriptors, namely SIFT and Ferns, combined with a template-matching-based tracker. While SIFT is known to be a strong but computationally expensive feature descriptor, Ferns classification is fast but requires large amounts of memory. This renders both original designs unsuitable for mobile phones. We describe in detail how we modified both approaches to run on mobile phones. The template-based tracker further increases the performance and robustness of the SIFT- and Ferns-based approaches. We present evaluations on robustness and performance and discuss their appropriateness for Augmented Reality applications.
PB IEEE Computer Society, [URL:http://www.computer.org]
SN 1077-2626
LA English
DO 10.1109/TVCG.2009.99
LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.99

RT Journal Article
JF IEEE Transactions on Visualization & Computer Graphics
YR 2009
VO 16
IS
SP 369
TI Simulating Low-Cost Cameras for Augmented Reality Compositing
A1 David W. Murray,
A1 Georg Klein,
K1 Artificial
K1 augmented
K1 and virtual realities
K1 visualization
K1 compositing.
AB Video see-through Augmented Reality adds computer graphics to the real world in real time by overlaying graphics onto a live video feed. To achieve a realistic integration of the virtual and real imagery, the rendered images should have a similar appearance and quality to those produced by the video camera. This paper describes a compositing method which models the artifacts produced by a small low-cost camera, and adds these effects to an ideal pinhole image produced by conventional rendering methods. We attempt to model and simulate each step of the imaging process, including distortions, chromatic aberrations, blur, Bayer masking, noise, sharpening, and color-space compression, all while requiring only an RGBA image and an estimate of camera velocity as inputs.
PB IEEE Computer Society, [URL:http://www.computer.org]
SN 1077-2626
LA English
DO 10.1109/TVCG.2009.210
LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.210

RT Journal Article
JF IEEE Transactions on Visualization & Computer Graphics
YR 2010
VO 16
IS
SP 353
TI Guest Editors' Introduction: Special Section on The International Symposium on Mixed and Augmented Reality (ISMAR)
A1 Ronald T. Azuma,
A1 Hideo Saito,
A1 Mark A. Livingston,
A1 Oliver Bimber,
K1
PB IEEE Computer Society, [URL:http://www.computer.org]
SN 1077-2626
LA English
DO 10.1109/TVCG.2010.51
LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2010.51