JF IEEE Transactions on Visualization & Computer Graphics

YR 2010

VO 16

IS 4

SP 531

TI Guest Editors' Introduction: Special Section on Volume Graphics and Point-Based Graphics

K1 Graphics

K1 Data mining

K1 Rendering (computer graphics)

K1 Solid modeling

K1 Conferences

K1 Isosurfaces

K1 Data visualization

K1 Transfer functions

K1 Shape

K1 Feature extraction

AB The six papers in this special issue are extended versions of papers presented at the IEEE/EG International Symposium on Volume Graphics, held in September 2007 in Prague, and the joint event of the IEEE/EG International Symposia on Volume Graphics (VG '08) and Point-Based Graphics (PBG '08), held in August 2008 in Los Angeles.

PB IEEE Computer Society, [URL:http://www.computer.org]

SN 1077-2626

LA English

DO 10.1109/TVCG.2010.72

LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2010.72

RT Journal Article

JF IEEE Transactions on Visualization & Computer Graphics

YR 2010

VO 16

IS 4

SP 599

TI Fast construction of k-nearest neighbor graphs for point clouds

A1 Michael Connor,

A1 Piyush Kumar,

K1 Concurrent computing

K1 Parallel algorithms

K1 Multicore processing

K1 Three-dimensional displays

K1 Computer graphics

K1 Visualization

K1 Surface reconstruction

K1 Algorithm design and analysis

K1 parallel algorithms.

K1 Nearest neighbor searching

K1 point-based graphics

K1 k-nearest neighbor graphs

K1 Morton ordering

AB We present a parallel algorithm for k-nearest neighbor graph construction that uses Morton ordering. Experiments show that our approach has the following advantages over existing methods: 1) faster construction of k-nearest neighbor graphs in practice on multicore machines, 2) less space usage, 3) better cache efficiency, 4) ability to handle large data sets, and 5) ease of parallelization and implementation. If the point set has a bounded expansion constant, our algorithm requires one comparison-based parallel sort of points according to Morton order, plus near-linear additional steps to output the k-nearest neighbor graph.

PB IEEE Computer Society, [URL:http://www.computer.org]

SN 1077-2626

LA English

DO 10.1109/TVCG.2010.9

LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2010.9

RT Journal Article

JF IEEE Transactions on Visualization & Computer Graphics

YR 2009

VO 16

IS

SP 533

TI Subdivision Analysis of the Trilinear Interpolant

A1 Hamish Carr,

A1 Nelson Max,

K1 Isosurfaces

K1 marching cubes

K1 trilinear.

AB Isosurfaces are fundamental volumetric visualization tools and are generated by approximating contours of trilinearly interpolated scalar fields. While a complete set of cases has recently been published by Nielson, the formal proof that these cases are the only ones possible and that they are topologically correct is difficult to follow. We present a more straightforward proof of the correctness and completeness of these cases based on a variation of the Dividing Cubes algorithm. Since this proof is based on topological arguments and a divide-and-conquer approach, this also sets the stage for developing tessellation cases for higher order interpolants and the quadrilinear interpolant in four dimensions. We also demonstrate that apart from degenerate cases, Nielson's cases are, in fact, subsets of two basic configurations of the trilinear interpolant.

PB IEEE Computer Society, [URL:http://www.computer.org]

SN 1077-2626

LA English

DO 10.1109/TVCG.2009.10

LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.10

RT Journal Article

JF IEEE Transactions on Visualization & Computer Graphics

YR 2009

VO 16

IS

SP 548

TI Local Ambient Occlusion in Direct Volume Rendering

A1 Frida Hernell,

A1 Patric Ljung,

A1 Anders Ynnerman,

K1 Local illumination

K1 volumetric ambient occlusion

K1 volume rendering

K1 medical visualization

K1 emissive tissues

K1 shading

K1 shadowing.

AB This paper presents a novel technique to efficiently compute illumination for Direct Volume Rendering using a local approximation of ambient occlusion to integrate the intensity of incident light for each voxel. An advantage of this local approach is that fully shadowed regions are avoided, a desirable feature in many applications of volume rendering such as medical visualization. Additional transfer function interactions are presented, for instance, to highlight specific structures with luminous tissue effects and to create an improved context for semitransparent tissues with a separate absorption control for the illumination settings. Multiresolution volume management and GPU-based computation are used to accelerate the calculations and support large data sets. The scheme yields interactive frame rates with an adaptive sampling approach for incrementally refined illumination under arbitrary transfer function changes. The illumination effects can give a better understanding of the shape and density of tissues and thus have the potential to increase the diagnostic value of medical volume rendering. Since the proposed method is gradient-free, it is especially beneficial at the borders of clip planes, where gradients are undefined, and for noisy data sets.

PB IEEE Computer Society, [URL:http://www.computer.org]

SN 1077-2626

LA English

DO 10.1109/TVCG.2009.45

LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.45

RT Journal Article

JF IEEE Transactions on Visualization & Computer Graphics

YR 2009

VO 16

IS

SP 663

TI Globally Optimized Linear Windowed Tone Mapping

A1 Michael S. Brown,

A1 Qi Shan,

A1 Jiaya Jia,

K1 High dynamic range image

K1 tone mapping

K1 display algorithms

K1 image enhancement

K1 filtering.

AB This paper introduces a new tone mapping operator that performs local linear adjustments on small overlapping windows over the entire input image. While each window applies a local linear adjustment that preserves the monotonicity of the radiance values, the problem is implicitly cast as one of global optimization that satisfies the local constraints defined on each of the overlapping windows. Local constraints take the form of a guidance map that can be used to effectively suppress local high contrast while preserving details. Using this method, image structures can be preserved even in challenging high dynamic range (HDR) images that contain either abrupt radiance change, or relatively smooth but salient transitions. Another benefit of our formulation is that it can be used to synthesize HDR images from low dynamic range (LDR) images.

PB IEEE Computer Society, [URL:http://www.computer.org]

SN 1077-2626

LA English

DO 10.1109/TVCG.2009.92

LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.92

RT Journal Article

JF IEEE Transactions on Visualization & Computer Graphics

YR 2009

VO 16

IS

SP 690

TI Evaluation of the Cognitive Effects of Travel Technique in Complex Real and Virtual Environments

A1 Larry F. Hodges,

A1 Sabarish V. Babu,

A1 Samantha L. Finkelstein,

A1 Amy C. Ulinski,

A1 Myra Reid,

A1 Evan A. Suma,

K1 Virtual reality

K1 travel techniques

K1 navigation

K1 real walking

K1 user study.

AB We report a series of experiments conducted to investigate the effects of travel technique on information gathering and cognition in complex virtual environments. In the first experiment, participants completed a non-branching multilevel 3D maze at their own pace using either real walking or one of two virtual travel techniques. In the second experiment, we constructed a real-world maze with branching pathways and modeled an identical virtual environment. Participants explored either the real or virtual maze for a predetermined amount of time using real walking or a virtual travel technique. Our results across experiments suggest that for complex environments requiring a large number of turns, virtual travel is an acceptable substitute for real walking if the goal of the application involves learning or reasoning based on information presented in the virtual world. However, for applications that require fast, efficient navigation or travel that closely resembles real-world behavior, real walking has advantages over common joystick-based virtual travel techniques.

PB IEEE Computer Society, [URL:http://www.computer.org]

SN 1077-2626

LA English

DO 10.1109/TVCG.2009.93

LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.93

RT Journal Article

JF IEEE Transactions on Visualization & Computer Graphics

YR 2010

VO 16

IS

SP 529

TI Editor's Note

A1 Thomas Ertl,


PB IEEE Computer Society, [URL:http://www.computer.org]

SN 1077-2626

LA English

DO 10.1109/TVCG.2010.71

LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2010.71

RT Journal Article

JF IEEE Transactions on Visualization & Computer Graphics

YR 2009

VO 16

IS

SP 621

TI Almost Isometric Mesh Parameterization through Abstract Domains

A1 Marco Tarini,

A1 Paolo Cignoni,

A1 Nico Pietroni,

K1 Modeling

K1 surface parameterization.

AB In this paper, we propose a robust, automatic technique to build a global high-quality parameterization of a two-manifold triangular mesh. An adaptively chosen 2D domain of the parameterization is built as part of the process. The produced parameterization exhibits very low isometric distortion, because it is globally optimized to preserve both areas and angles. The domain is a collection of equilateral triangular 2D regions enriched with explicit adjacency relationships (it is abstract in the sense that no 3D embedding is necessary). It is tailored to minimize isometric distortion, resulting in excellent parameterization quality, even when meshes with complex shape and topology are mapped into domains composed of a small number of large continuous regions. Moreover, this domain is, in turn, remapped into a collection of 2D square regions, unlocking many advantages found in quad-based domains (e.g., ease of packing). The technique is tested on a variety of cases, including challenging ones, and compares very favorably with known approaches. An open-source implementation is made available.

PB IEEE Computer Society, [URL:http://www.computer.org]

SN 1077-2626

LA English

DO 10.1109/TVCG.2009.96

LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.96

RT Journal Article

JF IEEE Transactions on Visualization & Computer Graphics

YR 2009

VO 16

IS

SP 647

TI A Level Set Formulation of Geodesic Curvature Flow on Simplicial Surfaces

A1 Xuecheng Tai,

A1 Chunlin Wu,

K1 Geodesic curvature flow

K1 level set

K1 triangular mesh surfaces

K1 curve evolution

K1 scale-space

K1 edge detection.

AB Curvature flow (planar geometric heat flow) has been extensively applied to image processing, computer vision, and materials science. Extending the numerical schemes and algorithms of this flow to surfaces is significant for the corresponding motions of curves and images defined on surfaces. In this work, we are interested in the geodesic curvature flow over triangulated surfaces using a level set formulation. First, we present the geodesic curvature flow equation on general smooth manifolds based on an energy minimization of curves. The equation is then discretized by a semi-implicit finite volume method (FVM). For convenience, we refer to the discretized geodesic curvature flow as dGCF. The existence and uniqueness of dGCF are discussed. The regularization behavior of dGCF is also studied. Finally, we apply our dGCF to three problems: the closed-curve evolution on manifolds, the discrete scale-space construction, and the edge detection of images painted on triangulated surfaces. Our method works for compact triangular meshes of arbitrary geometry and topology, as long as there are no degenerate triangles. The implementation of the method is also simple.

PB IEEE Computer Society, [URL:http://www.computer.org]

SN 1077-2626

LA English

DO 10.1109/TVCG.2009.103

LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.103

RT Journal Article

JF IEEE Transactions on Visualization & Computer Graphics

YR 2009

VO 16

IS

SP 609

TI Visual Integration of Quantitative Proteomic Data, Pathways, and Protein Interactions

A1 David H. Laidlaw,

A1 Lulu Cao,

A1 Kebing Yu,

A1 Radu Jianu,

A1 Vinh Nguyen,

A1 Arthur R. Salomon,

K1 Biological (genome or protein) databases

K1 data and knowledge visualization

K1 graphs and networks

K1 interactive data exploration and discovery

K1 visualization techniques and methodologies.

AB We introduce several novel visualization and interaction paradigms for visual analysis of published protein-protein interaction networks, canonical signaling pathway models, and quantitative proteomic data. We evaluate them anecdotally with domain scientists to demonstrate their ability to accelerate the proteomic analysis process. Our results suggest that structuring protein interaction networks around canonical signaling pathway models, exploring pathways globally and locally at the same time, and driving the analysis primarily by the experimental data, all accelerate the understanding of protein pathways. Concrete proteomic discoveries within T-cells, mast cells, and the insulin signaling pathway validate the findings. The aim of the paper is to introduce novel protein network visualization paradigms and anecdotally assess the opportunity of incorporating them into established proteomic applications. We also make available a prototype implementation of our methods, to be used and evaluated by the proteomic community.

PB IEEE Computer Society, [URL:http://www.computer.org]

SN 1077-2626

LA English

DO 10.1109/TVCG.2009.106

LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.106

RT Journal Article

JF IEEE Transactions on Visualization & Computer Graphics

YR 2009

VO 16

IS

SP 676

TI Modeling Repetitive Motions Using Structured Light

A1 Yi Xu,

A1 Daniel G. Aliaga,

K1 Three-dimensional graphics and realism

K1 digitization and image capture

K1 geometric modeling.

AB Obtaining models of dynamic 3D objects is an important part of content generation for computer graphics. Numerous methods have been extended from static scenarios to model dynamic scenes. If the states or poses of the dynamic object repeat often during a sequence (but not necessarily periodically), we call the motion repetitive. There are many objects, such as toys, machines, and humans, that undergo repetitive motions. Our key observation is that when a motion state repeats, we can sample the scene under the same motion state again but using a different set of parameters, thus providing more information about each motion state. This enables robustly acquiring dense 3D information for objects with repetitive motions using only simple hardware. After the motion sequence, we group temporally disjoint observations of the same motion state together and produce a smooth space-time reconstruction of the scene. Effectively, the dynamic scene modeling problem is converted to a series of static scene reconstructions, which are easier to tackle. The varying sampling parameters can be, for example, structured-light patterns, illumination directions, and viewpoints, resulting in different modeling techniques. Based on this observation, we present an image-based motion-state framework and demonstrate our paradigm using either a synchronized or an unsynchronized structured-light acquisition method.

PB IEEE Computer Society, [URL:http://www.computer.org]

SN 1077-2626

LA English

DO 10.1109/TVCG.2009.207

LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.207

RT Journal Article

JF IEEE Transactions on Visualization & Computer Graphics

YR 2009

VO 16

IS

SP 636

TI Markov Random Field Surface Reconstruction

A1 Rasmus Larsen,

A1 Rasmus R. Paulsen,

A1 Jakob Andreas Bærentzen,

K1 Bayesian approach

K1 implicit surface

K1 Markov random field

K1 mesh generation

K1 surface reconstruction.

AB A method for implicit surface reconstruction is proposed. The novelty in this paper is the adaptation of Markov Random Field regularization of a distance field. The Markov Random Field formulation allows us to integrate both knowledge about the type of surface we wish to reconstruct (the prior) and knowledge about data (the observation model) in an orthogonal fashion. Local models that account for both scene-specific knowledge and physical properties of the scanning device are described. Furthermore, it is demonstrated how the optimal distance field can be computed using conjugate gradients, sparse Cholesky factorization, and a multiscale iterative optimization scheme. The method is demonstrated on a set of scanned human heads and, both in terms of accuracy and the ability to close holes, the proposed method is shown to have similar or superior performance when compared to current state-of-the-art algorithms.

PB IEEE Computer Society, [URL:http://www.computer.org]

SN 1077-2626

LA English

DO 10.1109/TVCG.2009.208

LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2009.208

RT Journal Article

JF IEEE Transactions on Visualization & Computer Graphics

YR 2010

VO 16

IS

SP 583

TI Isodiamond Hierarchies: An Efficient Multiresolution Representation for Isosurfaces and Interval Volumes

A1 Leila De Floriani,

A1 Kenneth Weiss,

K1 Isosurfaces

K1 interval volumes

K1 multiresolution models

K1 longest edge bisection

K1 diamond hierarchies.

AB Efficient multiresolution representations for isosurfaces and interval volumes are becoming increasingly important as the gap between volume data sizes and processing speed continues to widen. Our multiresolution scalar field model is a hierarchy of tetrahedral clusters generated by longest edge bisection that we call a hierarchy of diamonds. We propose two multiresolution models for representing isosurfaces, or interval volumes, extracted from a hierarchy of diamonds which exploit its regular structure. These models are defined by subsets of diamonds in the hierarchy that we call isodiamonds, which are enhanced with geometric and topological information for encoding the relation between the isosurface, or interval volume, and the diamond itself. The first multiresolution model, called a relevant isodiamond hierarchy, encodes the isodiamonds intersected by the isosurface, or interval volume, as well as their nonintersected ancestors, while the second model, called a minimal isodiamond hierarchy, encodes only the intersected isodiamonds. Since both models operate directly on the extracted isosurface or interval volume, they require significantly less memory and support faster selective refinement queries than the original multiresolution scalar field, but do not support dynamic isovalue modifications. Moreover, since a minimal isodiamond hierarchy only encodes intersected isodiamonds, its extracted meshes require significantly less memory than those extracted from a relevant isodiamond hierarchy. We demonstrate the compactness of isodiamond hierarchies by comparing them to an indexed representation of the mesh at full resolution.

PB IEEE Computer Society, [URL:http://www.computer.org]

SN 1077-2626

LA English

DO 10.1109/TVCG.2010.29

LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2010.29

RT Journal Article

JF IEEE Transactions on Visualization & Computer Graphics

YR 2010

VO 16

IS

SP 560

TI Per-Pixel Opacity Modulation for Feature Enhancement in Volume Rendering

A1 Catherine Mongenet,

A1 Jean-Michel Dischler,

A1 Stéphane Marchesin,

K1 Volume rendering

K1 adaptive rendering

K1 nonphotorealistic rendering.

AB Classical direct volume rendering techniques accumulate color and opacity contributions using the standard volume rendering equation approximated by alpha blending. However, such standard rendering techniques, often also aiming at visual realism, are not always adequate for efficient data exploration, especially when large opaque areas are present in a data set, since such areas can occlude important features and make them invisible. On the other hand, the use of highly transparent transfer functions allows viewing all the features at once, but often makes these features barely visible. In order to enhance feature visibility, we present in this paper a straightforward rendering technique that modifies the traditional volume rendering equation. Our approach does not require an opacity transfer function; instead, it is based on a relevance function that quantifies the relative importance of each voxel in the final rendering. This function is subsequently used to dynamically adjust the opacity of the contributions per pixel. We conduct experiments with a number of possible relevance functions in order to show the influence of this parameter. As shown by our comparative study, our rendering method is much more suitable than standard volume rendering for interactive data exploration at a low extra cost. Our method thereby avoids feature visibility restrictions without relying on a transfer function, yet maintains visual similarity with standard volume rendering.

PB IEEE Computer Society, [URL:http://www.computer.org]

SN 1077-2626

LA English

DO 10.1109/TVCG.2010.30

LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2010.30

RT Journal Article

JF IEEE Transactions on Visualization & Computer Graphics

YR 2010

VO 16

IS

SP 571

TI Illustrative Volume Visualization Using GPU-Based Particle Systems

A1 Anna Vilanova,

A1 Huub van de Wetering,

A1 Roy van Pelt,

K1 Volume visualization

K1 illustrative rendering

K1 particle systems

K1 consumer graphics hardware

K1 parallel processing.

AB Illustrative techniques are generally applied to produce stylized renderings. Various illustrative styles have been applied to volumetric data sets, producing clearer images and effectively conveying visual information. We adopt particle systems to produce user-configurable stylized renderings from volume data, imitating traditional pen-and-ink drawings. In the following, we present an interactive GPU-based illustrative volume rendering framework, called VolFliesGPU. In this framework, isosurfaces are sampled by evenly distributed particle sets, delineating surface shape with illustrative styles. The appearance of these styles is based on locally measured surface properties. For instance, hatches convey surface shape by orientation, and shape characteristics are enhanced by color, mapped using a curvature-based transfer function. Hidden surfaces are generally removed to avoid visual clutter; after that, a combination of styles is applied per isosurface. Multiple surfaces and styles can be explored interactively, exploiting parallelism in both graphics hardware and particle systems. We achieve real-time interaction and prompt parametrization of the illustrative styles, using an intuitive GPGPU paradigm that delivers the computational power to drive our particle system and visualization algorithms.

PB IEEE Computer Society, [URL:http://www.computer.org]

SN 1077-2626

LA English

DO 10.1109/TVCG.2010.32

LK http://doi.ieeecomputersociety.org/10.1109/TVCG.2010.32