EGSR 2016
=========
Keynote
========
The Technology to Create the Magic
--------------------------------------------
Markus Gross
They focus on research that connects with their entertainment business.
* Rendering
* Uncanny valley
* Robotics
* Animatronics
* Augmented reality
* The snow in Frozen was captured in Switzerland ;)
* The eyes of the big-eyed alien woman in Star Wars VII were created with their eye-capture system.
Session: Capturing Nature
===========================
Single-shot layered reflectance separation using a polarized light field camera
--------------------------------------------------------------------------------
Capture specular & diffuse from a single shot.
* It might be useful for capturing skin, as well as garments.
* If only it weren't limited by spatial resolution...
* Similar to light-stage. The only difference is that they require only one shot.
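The paper does the separation in a single shot with a polarized light-field camera; the classical two-capture polarization-difference idea behind such separations can be sketched as follows (a toy per-pixel model of the general technique, not their pipeline):

```python
def separate_reflectance(parallel, cross):
    """Split a pair of polarized captures into diffuse and specular.

    Polarization-difference imaging: diffuse reflection is depolarized
    and contributes equally to both captures, while specular reflection
    preserves the source polarization and only survives in the
    parallel-polarized capture.
    """
    diffuse = 2.0 * cross                  # depolarized light, split 50/50
    specular = max(parallel - cross, 0.0)  # polarization-preserving remainder
    return diffuse, specular

# toy pixel: 0.4 of depolarized diffuse plus 0.5 of specular
d, s = separate_reflectance(parallel=0.7, cross=0.2)
```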
Perceptually Motivated BRDF comparison using single image
---------------------------------------------------------
They propose using a surface and a viewpoint with high coverage of the BRDF.
The image works both for human-based evaluations (it contains a sphere in the center) and for image-based comparisons.
A phenomenological model for throughfall rendering in real-time
----------------------------------------------------------------
Rain rendering
Throughfall = Drops dripping from foliage
* The hydrological model gives the frequency at which a drop drips given a regeneration point.
* Their rain rendering method was published last year in Computers & Graphics
* Renders in UE4
* www.weyo.fr
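The notes only record that the hydrological model outputs a drip frequency per regeneration point. One plausible way to turn such a frequency into drop events (my assumption, not the paper's model) is a Poisson process:

```python
import random

def drip_times(rate_hz, duration_s, seed=0):
    """Sample drop-release times for one drip point.

    Toy stand-in: given the drip frequency predicted by a hydrological
    model, treat releases as a Poisson process, i.e. exponentially
    distributed gaps between consecutive drops.
    """
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_hz)  # gap until the next drop
        if t >= duration_s:
            return times
        times.append(t)

# roughly 2 drops per second over a 10 s shot
times = drip_times(rate_hz=2.0, duration_s=10.0)
```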
Session: Into the pipeline
===========================
4D-rasterization for fast soft-shadow rendering
--------------------------------------------------
I couldn't follow this one at all.
The pronunciation was all over the place: "simple" sounded like "sample", "pier" like "pair".
Local shape editing at the compositing stage
---------------------------------------------
The compositing stage is where you compose your G-buffer and auxiliary buffers together.
* All the examples modify the normal buffer.
* I don't see the advantage over directly modifying the normal texture before the normal buffer is rendered. I guess it's for special effects, but I'd need an example...
Session: Sampling
===================
Solid angle sampling of disk and cylinder lights
------------------------------------------------
framestore.com
Monte Carlo integration for direct lighting
* Convert integral from solid angle to area
* Production shot from The Martian: lots of cylindrical lights inside the spaceship
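The solid-angle-to-area conversion mentioned above can be sketched for an unshadowed diffuse receiver and a small quad light (the geometry here is my own toy setup):

```python
import math
import random

def irradiance_from_quad(p, n, light_c, light_n, half, radiance,
                         samples=4096, seed=1):
    """Monte Carlo direct lighting with the integral converted from
    solid angle to light-surface area (visibility assumed to be 1).

    The change of variables introduces the Jacobian cos(theta_y)/d^2,
    so each sample contributes L * cos_x * cos_y / d^2 * area / N.
    """
    rng = random.Random(seed)
    area = (2.0 * half) ** 2
    total = 0.0
    for _ in range(samples):
        # uniform point on an axis-aligned square light
        y = (light_c[0] + rng.uniform(-half, half),
             light_c[1] + rng.uniform(-half, half),
             light_c[2])
        d = tuple(y[i] - p[i] for i in range(3))
        dist2 = sum(c * c for c in d)
        dist = math.sqrt(dist2)
        w = tuple(c / dist for c in d)  # direction from p towards y
        cos_x = max(0.0, sum(w[i] * n[i] for i in range(3)))
        cos_y = max(0.0, -sum(w[i] * light_n[i] for i in range(3)))
        total += radiance * cos_x * cos_y / dist2
    return total * area / samples

# a small light far away should match the point approximation L*A/d^2
e = irradiance_from_quad(p=(0, 0, 0), n=(0, 0, 1),
                         light_c=(0, 0, 5), light_n=(0, 0, -1),
                         half=0.05, radiance=1.0)
```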
Improving the Dwivedi sampling scheme
--------------------------------------
Weta Digital
Path tracing in participating media, e.g. light through the ear
* Zero-variance random walks! = Monte Carlo without noise
Line sampling for direct illumination
--------------------------------------
How to solve the direct light integral efficiently.
Projective blue-noise sampling
-------------------------------
* Blue-noise: similar to distribution of photoreceptors
* But when you project them, you lose the nice stratification of the samplings used in Monte Carlo
* Projective dart throwing
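Projective dart throwing can be sketched in the unit square; the radii below are illustrative, not the paper's:

```python
import random

def projective_dart_throwing(n, r2d, r1d, max_tries=200_000, seed=0):
    """Toy projective blue-noise sampler in the unit square.

    Plain dart throwing enforces a minimum 2-D distance between
    samples; the projective variant additionally rejects candidates
    whose x- or y-projection lands within r1d of an accepted sample,
    so the 1-D projections stay well distributed too.
    """
    rng = random.Random(seed)
    pts = []
    while len(pts) < n and max_tries > 0:
        max_tries -= 1
        c = (rng.random(), rng.random())
        if all((c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2 >= r2d * r2d
               and abs(c[0] - p[0]) >= r1d
               and abs(c[1] - p[1]) >= r1d
               for p in pts):
            pts.append(c)
    return pts

samples = projective_dart_throwing(n=12, r2d=0.08, r1d=0.025)
```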
Session: Light Transport
========================
Parallel multiple-bounce irradiance caching
-------------------------------------------
MIT
Accelerate rendering of HDR scenes
* Interesting formula: Daylight Glare Probability (DGP)
Product importance sampling for light transport path guiding
-------------------------------------------------------------
Univ. Tübingen, Univ. Prague, Weta Digital
* Slightly improves quality for the same rendering time (in 1 h you get something reasonable, although still noisy)
* It uses Gaussian Mixture Models (GMM); that takes me back
* Works better with glossy interactions
Forward light cuts: a scalable approach for real-time GI
---------------------------------------------------------
Paris
* Object-space algorithm
* Background: instant radiosity, reflective shadow maps
* Instant radiosity: VPLs (virtual point lights)
* Lightcuts: the required number of VPL samples decreases with distance
* Closest approach: Deep Screen Space
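The instant-radiosity background above can be sketched as a minimal VPL gather with the usual clamped geometry term (my sketch of the background technique, not the paper's forward light cuts):

```python
import math

def gather_vpls(p, n, vpls, clamp=0.1):
    """One-bounce instant-radiosity gather at a diffuse receiver.

    Each virtual point light (position, normal, flux) contributes
    flux * G with the geometry term G = cos_r * cos_v / d^2, clamped
    to tame the spike as d -> 0 (the classic VPL singularity).
    Visibility is assumed to be 1.
    """
    total = 0.0
    for pos, nrm, flux in vpls:
        d = tuple(pos[i] - p[i] for i in range(3))
        dist2 = sum(c * c for c in d)
        dist = math.sqrt(dist2)
        w = tuple(c / dist for c in d)
        cos_r = max(0.0, sum(w[i] * n[i] for i in range(3)))
        cos_v = max(0.0, -sum(w[i] * nrm[i] for i in range(3)))
        total += flux * min(cos_r * cos_v / dist2, 1.0 / clamp)
    return total

# one VPL two units above the receiver, facing down: G = 1/4
e = gather_vpls(p=(0, 0, 0), n=(0, 0, 1),
                vpls=[((0, 0, 2), (0, 0, -1), 1.0)])
```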
Keynote
========
Rendering research, its applications, and the real world
-----------------------------------------------------------
Steve Marschner
* SIGGRAPH award
* Academy award (Oscars)
* Today Computer Graphics is broader: sound, fabrication, computational design, HCI, ...
* what it looks like -> what it is like
* showing the thing -> making the thing
* Clothes: "Building volumetric appearance models of fabric using micro CT imaging", Siggraph 2011
* CG -> maintain the aesthetic
* accurate & simple, then complex (engineering) vs complex & approximate, then accurate (CG)
* "A comprehensive framework for rendering layered materials", siggraph 2014
* "Position-normal distributions for efficient rendering of specular microstructure", siggraph 2016
* "Light scattering from human hair fibers", Siggraph 2003
Session: Looking through surfaces
==================================
Sparse high-degree polynomials for wide-angle lenses
-----------------------------------------------------
fisheye lenses
Motivation: panoramas, rendering for IMAX domes, rendering for VR, rendering with interesting bokeh, ...
Efficient ray tracing through aspheric lenses and imperfect bokeh synthesis
----------------------------------------------------------------------------
e.g. onion-ring bokeh
* Fabrication process of aspheric lenses
* Aspheric lenses are used because they reduce aberrations
* They created a virtual grinder to generate the imperfections in the lens, expressed as a normal map
Shape depiction for transparent objects with bucketed k-Buffer
---------------------------------------------------------------
Open Inventor: tool for scientific visualization (CAD, medical, geology)
Order-independent transparency
* Depth peeling vs per-pixel linked lists vs Weighted Blended OIT
* Weighted Blended OIT is not as faithful, but wins otherwise
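Weighted Blended OIT can be sketched with scalar colors; in the real technique the per-fragment weight is a function of depth and alpha, here it is left as an input:

```python
def weighted_blended_oit(fragments, background):
    """Order-independent compositing of transparent fragments
    (Weighted Blended OIT, with scalar colors for brevity).

    Each fragment (color, alpha, weight) goes into two commutative
    sums plus a transmittance product; the resolve step blends the
    weighted-average color over the background by total coverage.
    """
    accum_c = 0.0   # sum of color * alpha * weight
    accum_a = 0.0   # sum of alpha * weight
    transmit = 1.0  # product of (1 - alpha)
    for color, alpha, weight in fragments:
        accum_c += color * alpha * weight
        accum_a += alpha * weight
        transmit *= (1.0 - alpha)
    avg = accum_c / accum_a if accum_a > 0.0 else 0.0
    return avg * (1.0 - transmit) + background * transmit

# the result does not depend on fragment order
out = weighted_blended_oit([(0.8, 0.5, 1.0), (0.2, 0.5, 1.0)], background=0.0)
```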
Session: Faster rendering
==========================
First-order regression with nonlinear weights for denoising Monte Carlo renderings
-----------------------------------------------------------------------------------
Disney
Denoising
* Spatio-temporal filtering: cleaner results than spatial filtering
Fast shadow map rendering for many lights settings
---------------------------------------------------
* They cull primitives before visibility determination, so it is more applicable to shadow maps than to ray tracing.
* 32x32x32 volume used for culling
Fast filtering of reflection probes
------------------------------------
* Reconstruction and approximation on the cube, with a weighted quadratic B-spline
* Polar frame in the cube
* Singularities at the poles fixed by blending 3 frames
* 30x faster than importance sampling for the same amount of error
* Quality looks much better than importance sampling in general
Adaptive image-space sampling for gaze-contingent real-time rendering
----------------------------------------------------------------------
This paper is about avoiding puking when doing VR 😬👍
1. Find attended image part
2. Adapt shading quality to attended image part
Related:
* Foveated 3D Graphics, Microsoft, Siggraph Asia 2012
* Multi-rate shading, Nvidia Gameworks VR 2015
* Sampling distribution changes depending on the gazing point
* per-pixel shading (G-buffer)
* Their model includes exposure adaptation, motion blur adjustment, etc.
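The idea of adapting shading density to the gaze point can be sketched with an illustrative falloff; the radius, falloff width, and floor below are made-up constants, not the paper's perceptual calibration:

```python
import math

def shading_probability(pixel, gaze, fovea_px=50.0, floor=0.1):
    """Illustrative gaze-contingent sampling density.

    Full shading rate inside a foveal radius around the gaze point,
    then a Gaussian falloff with eccentricity down to a peripheral
    floor, so the periphery is still shaded sparsely.
    """
    ecc = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])
    if ecc <= fovea_px:
        return 1.0
    falloff = math.exp(-((ecc - fovea_px) / (4.0 * fovea_px)) ** 2)
    return max(floor, falloff)

p_center = shading_probability((960, 540), gaze=(960, 540))
p_edge = shading_probability((0, 540), gaze=(960, 540))
```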
Session: Materials at all scales
=================================
Predicting visual perception of material structure in virtual environments
---------------------------------------------------------------------------
* Critical distance: beyond it, the material structure is no longer perceived
* BTF: Bidirectional Texture Function (structure) vs BRDF (reflectance)
A robust and flexible real-time sparkle effect
------------------------------------------------
Studio Gobo
Beibei Wang and Huw Bowles: I worked with them on Infinity 3
* Beibei couldn't get her visa to present! 😱
* It's the sparkles they created for the snow in Infinity 3
* I think they presented this in Advances in RTR course in Siggraph 2015.
? Budget for the sparkles? I think it was 1 ms for the effect. Lighting was taking most of the frame (up to 16 ms on PS3)
Additional progress towards the unification of microfacet and microflake theories
----------------------------------------------------------------------------------
* Experimental paper: they share the insights from their investigation. They have a final paper at Siggraph (almost a double submission??)
* For the siggraph paper they use a different microflake model
* surface to volume? Preserve light transport
* Microflake volume: density + microfacet NDF (specular distribution)
A general micro-flake model for predicting the appearance of car paint
-----------------------------------------------------------------------
* Anisotropic RTE (Radiative Transfer Equation)
* Simulate the translucency of the material. The perceived color changes.
Session: Acceleration techniques
==================================
Constrained Convex Space Partition for ray tracing in architectural environments
---------------------------------------------------------------------------------
* Space partition vs object partition
* Linear vs hierarchical data structure
* The best would combine a linear structure with both space and object partitioning -> Constrained Convex Space Partition (CCSP)
Stackless and deep partitioned shadow volumes
----------------------------------------------
Real time shadows
* PSV: accurate, isotropic light
* Stack PSV, new method from EG 2015
* Hybrid solution: start with small stack, and switch to stackless when the stack is full
* The hybrid is the fastest and the most stable
Node culling multi-hit BVH traversal
-------------------------------------
Ray tracing
* which primitives are intersected? one or more, ordered by t-value along the ray
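The multi-hit query itself (all intersections, ordered by t) can be sketched without the BVH; this toy version simply intersects a sphere list and sorts, whereas a real traversal would cull nodes beyond the N-th closest hit found so far:

```python
import math

def multi_hit_spheres(origin, direction, spheres, first_n=None):
    """All-hits ray query against a sphere list, ordered by t.

    The ray direction is assumed to be unit length, so the sphere
    quadratic reduces to t^2 + 2bt + c = 0 with the b and c below.
    """
    hits = []
    for idx, (center, radius) in enumerate(spheres):
        oc = tuple(origin[i] - center[i] for i in range(3))
        b = sum(direction[i] * oc[i] for i in range(3))
        c = sum(x * x for x in oc) - radius * radius
        disc = b * b - c
        if disc < 0.0:
            continue
        s = math.sqrt(disc)
        for t in (-b - s, -b + s):  # entry and exit points
            if t > 1e-6:
                hits.append((t, idx))
    hits.sort()
    return hits if first_n is None else hits[:first_n]

# a ray along +z pierces both spheres: four hits ordered by t
hits = multi_hit_spheres((0, 0, 0), (0, 0, 1),
                         [((0, 0, 5), 1.0), ((0, 0, 2), 0.5)])
```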
Session: Light transport - part 2
==================================
Subdivision next-event estimation for path-traced subsurface scattering
-------------------------------------------------------------------------
Disney
Path tracing.
Bidirectional polarised light transport
----------------------------------------
* define radiance, BSDF and importance
Point-based light transport for participating media with refractive boundaries
------------------------------------------------------------------------------
Beibei Wang again
Liquids with their containing media (you need a glass to hold the liquid...)
Related: volumetric photon mapping -> beam radiance estimate -> photon beams -> point-based
Point-based GI, used by both Pixar (Up) and Dreamworks (Kung-fu Panda)
* Caustics look bad because it's an approximation... For the same rendering time, I think PM with BRE or UPBP would look better