SIGGRAPH 2010
==========================================================

Impressions

Game-related papers and talks have been increasing since last SIGGRAPH, and the same trend continued this year.

Game companies that actually presented: Activision Studio Central, Avalanche Software, Bizarre Creations, Black Rock Studio, Bungie, Crytek, DICE, Disney Interactive Research, EDEN GAMES, Fantasy Lab, Gearbox, LucasArts, Naughty Dog, Quel Solaar, tri-Ace, SCE Santa Monica Studio, Square Enix R&D, Uber Entertainment, Ubisoft Montreal, United Front Games, Valve, Volition. (realtimerendering.com)

In addition, sessions combining film-side and game-side presentations have started.

I don't think there was a truly groundbreaking paper this year, but the game-related talks were full of detailed explanations, clever tweaks, and small tricks, so they make important references. The ones I personally found most useful:

 1. Sample distribution shadow maps - the automatic selection of cascaded shadow map parameters is useful.
 2. Avalanche's ambient occlusion technique for the Toy Story 3 game - faster than Crytek's method, with cleaner results.
 3. The detailed explanation of Crytek's GI (indirect lighting) technique (LPV), e.g. how normals are encoded.

[Sunday 7/25]

Physically Based Shading Models in Film and Game Production
============================================================
Course. 14:00 - 17:15
Course notes: http://bit.ly/s10shaders

* Hoffman / Activision
 - The introduction by Hoffman on light physics was nice and easy to understand.
* Gotanda / tri-Ace
 - It's easy for artists to set wrong parameters. Don't let artists set the Fresnel parameter directly -> use templates and give in-house courses.
 - Deferred shading needs more passes for a proper BRDF :(
 - Lack of a proper BRDF for ambient light :( -> improve ambient lighting with SSAO, SH, etc. (check slides) -> ambient BRDF using a volume texture
 - Performance table measured on PS3; research.tri-ace.com
* VFX / ILM
 - similar issues as in games
 - trying to reproduce FILMED reality, not reality.
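As a side note on Gotanda's point about the Fresnel parameter: the usual argument for not letting artists hand-tune it is that F0 can be derived from the index of refraction. A minimal sketch of my own (Schlick's approximation; not code from the course):

```python
# Sketch (my illustration, not from the course slides): Schlick's Fresnel
# approximation, with F0 derived from the index of refraction instead of
# being hand-tuned by an artist.

def f0_from_ior(n: float) -> float:
    """Reflectance at normal incidence for a dielectric with IOR n."""
    return ((n - 1.0) / (n + 1.0)) ** 2

def schlick_fresnel(f0: float, cos_theta: float) -> float:
    """Schlick's approximation to Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Water (n ~ 1.33): F0 is only ~0.02, far from the values artists tend
# to pick by eye; at grazing angles reflectance still rises to 1.
f0 = f0_from_ior(1.33)
print(round(f0, 3))
print(round(schlick_fresnel(f0, 0.0), 3))
```

This is exactly the kind of relationship a material template can encode so artists pick "water" or "plastic" rather than a raw number.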
 - capture
* Alice in Wonderland / Sony Pictures
 - Arnold: a GI ray tracer co-developed with Solid Angle SL (Spain) -> Marcos Fajardo's talk

Speakers:
* Naty Hoffman - Activision, co-author of "Real-Time Rendering"
* Yoshiharu Gotanda - tri-Ace
* Ben Snow - ILM
* Adam Martinez - Sony Pictures Imageworks

Taking the physics of light into account improves both the look and the development efficiency of game lighting:
* Fewer material parameters, so they become easier to set.
* Because it is a physical model, the parameters have clear meanings. With an ad-hoc lighting model, values are arbitrary and tuning becomes difficult.
* You get realistic images without heavy tweaking effort.
* Using a proper model even for ambient light gives prettier pictures. Gotanda's ambient model is usable in real time.
* Even ad-hoc lighting models (Phong etc.) can be improved by considering the physics (see Hoffman's and Gotanda's slides). Blinn-Phong is more correct than Phong.
* To render correctly, a game engine first needs: 1. correct gamma; 2. HDR support; 3. good tone mapping - "filmic" is better (see Tuesday's course, "Color enhancement...").
* For image-based lighting (environment maps etc.), capture is very important (Snow's slides). Many samples are needed, so consider importance sampling (Martinez's slides).

Impressions:
* Hoffman's introduction is easy to understand, so the slides are worth checking. Contents: light absorption, scattering, subsurface scattering, BRDFs, microfacet BRDFs, Fresnel reflectance.
* Watching the other talks with those slides at hand makes the overall flow much easier to follow.

Papers Fast Forward
---------------------------------------
* A bit boring compared to previous years...
Impressions: a session where, in two hours, 130 presenters introduce their papers. With so many talks everyone tries to be memorable, so many add jokes - it's quite funny. This year felt a notch weaker, though.

The papers I found most interesting (for their content, not their jokes):
* Toward evaluating material design interface paradigms for novice users
* Physics-inspired topology changes for thin-fluid features
* Structure-based ASCII art (with genuinely new CG ideas getting rare, apparently anything goes ^^; )
* From image parsing to painterly rendering
* Line-space gathering for single scattering in large scenes
* Programmable motion effects
* 2.5D cartoon models
* Real-time lens-blur effects and focus control
* Reducing shading on GPUs using quad-fragment merging
* Robust physics-based locomotion using low-dimensional planning
* Optimizing walking controllers for uncertain inputs and environments
* Non-linear disparity mapping for stereoscopic 3D
* Photorealistic models for pupil-light reflex and iridal pattern deformation
* Feature-preserving triangular geometry images for level-of-detail representation of static and skinned meshes
* By-example synthesis of architectural textures
* Mesh colors
* Video tapestries with continuous temporal zoom
* Fool me twice: exploring and exploiting error tolerance in physics-based animation
* Using blur to affect perceived distance and size
* Ambient point clouds for view interpolation
* A deformation transformer for real-time cloth animation (260 fps!)
* A wave-based anisotropic quadrangulation method
* Effects of global illumination approximations on material appearance
* A synthetic-vision-based steering approach for crowd simulation
* Data-driven biped control

[Monday 7/26]

All about Avatar
=====================
A talk about the CG production of the film Avatar.
* Virtual production stage
 - The action is captured with a virtual camera, and the data is handed to Weta Digital.
* Lighting
 - There are so many (spot) lights that shadow map generation takes an enormous amount of time, so shadows are computed in a preprocess.
 - A single scene has 6,000 objects and 600M polygons - far too complex, so they developed the PantaRay system with NVIDIA.
 - The pipeline is built around data caching, since a week of work produces 50 TB of data. (2 TB/hour)
* Volume rendering
 - For stereo, motion blur also has to be computed in 3D (volumetric motion blur).
 - prman had no deep shadow map support of the kind they needed, so they wrote their own plugin - deep-compositing friendly.
* Compositing
 - Stereo tests started in 2006.
 - They developed the SXR (Stereo EXR) format; it has become a standard and is included in OpenEXR 1.7.
 - Deep-alpha compositing: alpha blending while taking depth into account uses holdout mattes (deep alpha mattes). Each sample holds both Z and alpha.
 - Stereo introduces timing and color sync issues that need adjustment.

People behind the pixels
==========================
* New section: SIGGRAPH Dailies
* Awards:
 - Alyosha - for bringing computer vision and computer graphics together
 - Jessica Hodgins - for work in motion
 - Yoichiro Kawaguchi - art award

Keynote: Don Marinelli
==========================
* He's a tornado!!!
* Drama
* Left brain + right brain
* Innovate at ETC (Entertainment Technology Center) -> SATE curriculum
* Zeitgeist - spirit of the times
* Storytelling is a craft
* Focus on technology and story
* VR -> everyone can become an actor, be someone different for a couple of hours
* Craftsmanship is when you use your WHOLE brain, left and right!
* Stop the fear of videogames and interactive media in education!
* Reward risk and daring!
* Too much bureaucracy! Do we need deans anymore?
* Avoid federal funding, to avoid the regulations attached
* Ivory tower mentality is also bad... A university is not a small universe. The work we do should have impact on others.
* The tenure and promotion system is bad. More fairness is needed!
* Immigration regulations work against students! :( Reward merit
* Education should unlock potential!
* "The Comet & the Tornado" - his book
Very inspiring!

Drama professor Donald Marinelli founded the Entertainment Technology Center (ETC) at Carnegie Mellon University in 1999 together with computer science professor Randy Pausch. Pausch passed away in 2008, but his final lecture ("The Last Lecture") is still popular online. Marinelli gave this keynote as a continuation of that lecture, of sorts. It covered the ETC's goals and philosophy:
* Have students use not just the left brain (science) but also the right brain (art).
* Games and interactive media are therefore ideal for education.
* Students who take risks should be encouraged.
and so on.

Notes:
* "World of Goo", the indie game that won awards at last year's GDC, was made by ETC students.
* An ETC campus has opened in Japan, in Osaka.
* His delivery was excellent - as you'd expect from a drama professor.
* I ended up buying his book, "The Comet & the Tornado" (The Comet = Randy; The Tornado = Marinelli).

(lunch)
* A crazy guy from Norway who says he saw God... and that videogames are bad and violent and that we will have to answer to Jesus... And he says he's a scientist... PhD. Norwegian religion, Odin, Thor, Buddhism and all, they are all copies of Christianity, he says... THEY are being delusional, not him... oh my...
Talk: Split Second Screen Space
===================================
Black Rock Studio

Screen Space Classification for Efficient Deferred Shading
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A technique for speeding up deferred shading:
 * tile the screen buffers
 * classify the tiles by their attributes
 * pick the appropriate shader per tile based on the classification

Details:
* Related work:
 - Swoboda 2009, SPU classification, DOF on PS3
 - Moore, Jefferies 2009, classification of shadows
* 30 fps (33 ms)
* G-buffer: normals, motion ID & MSAA edge (a single bit)
* Shadows: 3 cascades
* Global lighting (not GI) - sky, soft shadows, etc.
* Screen-space tiles
 - locality is key to reducing lighting complexity
 - 4x4 tiles, 56,000 tiles at 720p
* Screen-space shadow mask
 - 320 x 180 at 720p (1/16 size)
 - 1 pixel = 1 tile
 - classify screen tiles by attributes -> soft shadows, sunlight, etc.
* Index buffer generation - one for every shader ID
* On 360 (4 ms) - tile classification on GPU - 0.7 ms
* On PS3 (6 ms) - both SPU and GPU

Impressions: the technique looks good, but on PS3 it seems like it would eat up quite a bit of SPU time.

Real-time frame rate up-conversion for video games
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
LucasArts
Dmitry Andreev - demoscene guy
30 fps to 60 fps
* Use the velocity buffer to interpolate and render at 60 fps - instead of motion blur

A technique that renders at 30 fps but cheats the output up to 60 fps:
* The velocity buffer normally used for motion blur is instead used to interpolate positions between frames. (Details in the paper.)

Impressions: judging from the Star Wars experiments at LucasArts, it certainly looks smoother, but the spots where interpolation fails are a bit bothersome.

Split Second Motion Blur
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ordinary screen-space motion blur, with some tricks for efficient rendering:
 * each object gets an ID (vector motion ID)
 * the number of filter samples varies per tile (adaptive sampling)
  -> computing this is expensive, so the sample count is fixed by distance from the camera
   - the closer to the camera, the more samples
   - it's a racing game, so the screen's vertical axis is a good enough proxy
   - results are poor at high speeds
 * texture-space motion blur (~1.6 ms)
   - blur the texture itself

Impressions: lots of racing-game-specific tricks, so it may not carry over to other games.

* screen space
* velocity buffer
* blur in the direction of velocity
* but bandwidth-heavy -> to reduce bandwidth:
* vector motion IDs (1 ID per rigid object)
* adaptive sampling, per tile
 - static fixed camera tiling -> the closer to the camera, the more taps
 - bad results at high velocity ->
* texture-space motion blur
* 1.6 ms
* Future work: stochastic rasterization - to handle artifacts: disocclusion, transparencies...

Deferred shading pipeline for real-time indirect illumination
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
INRIA
* The G-buffer doesn't contain info about hidden objects, so the result is approximate.
* They say Crytek's method doesn't handle every kind of light, and theirs does (being image-based...).
* high memory cost
* temporal artifacts
* Shading results are added to the G-buffer and used as sources for indirect light.
* Pipeline: (figure in the paper)
* For more light bounces, use the indirect-lit result as input and repeat the same process.
Problems:
* memory cost
* temporal artifacts (differences stand out when things move)
Impressions:
* They propose this because Crytek's method supposedly can't support every light, but I thought Crytek's method was better.
* Close to SSDO?

Real time demos
~~~~~~~~~~~~~~~~
demoscene, God of War III
The interesting one was the demoscene demo, rendered entirely with particles. The motion was strange and fun.

Electronic Theater
~~~~~~~~~~~~~~~~~~~~~
Only three game clips this year: Assassin's Creed 2, DJ Hero, Rock Band: The Beatles.

[Tuesday]

Color enhancement
=================================
Course slides: http://bit.ly/s10color
A course on handling color. Main topics:
 * HDR and HDR encodings
 * Color management -> handling metadata
 * Color spaces -> they affect the encoding
 * Tone mapping (operators that map HDR to LDR)
 * Color grading: handling LUTs
The first half was film-side talks; the second half covered the game-side approach.
Impressions: the introduction and Gotanda's talk should be quite useful references.

Details:

Intro
~~~~~~~~~
Joshua Pines
* Paintings often present a huge dynamic range (Rembrandt). They convey the feeling of HDR even when the painting itself is low contrast.
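The central operation this course revolves around, mapping scene-referred linear HDR values down to output-referred sRGB, can be sketched in a few lines (my own minimal sketch, using the Reinhard operator as a stand-in; the filmic curves discussed later in the course behave differently in the toe and shoulder):

```python
# Minimal HDR-to-LDR sketch (my illustration, not from the course slides):
# tonemap scene-referred linear radiance, then apply the sRGB transfer
# function (the "correct gamma" step the course insists on).

def reinhard(x: float) -> float:
    """Reinhard tonemapping: compresses [0, inf) into [0, 1)."""
    return x / (1.0 + x)

def linear_to_srgb(x: float) -> float:
    """sRGB encoding of a linear value in [0, 1]."""
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1.0 / 2.4) - 0.055

hdr = [0.05, 1.0, 4.0, 16.0]                 # scene-referred linear radiance
ldr = [linear_to_srgb(reinhard(v)) for v in hdr]
print([round(v, 3) for v in ldr])
```

Note how a 320:1 range of inputs lands in [0, 1) while staying monotonic; the choice of operator decides how much contrast survives at each end.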
* Stevens effect
* Hunt effect
* display flare
* Bartleson effect
* Surround is also important:
 - normal (office)
 - dim (living room)
 - dark (theatrical) - more shadow detail comes out
That was fun.

Color Management
~~~~~~~~~~~~~~~~~~~
* 3 layers
 - color management: metadata to disambiguate color
 - color rendering
 - color enhancement
Detailed densitometry, for film.

Color Spaces and operations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sony Pictures Imageworks
* Not "color space" but color encoding (the word is overused): linear / log ...
* Very simple - essentially that results differ when you work in linear space (of course...)
* Log units are similar to stop units (camera)
* More dynamic range is not always preferable
* Artist training is necessary
* Noise / acquisition difficulties
* OpenColorIO (OCIO)
 - all configured with a single file
 - built-in support for many LUT formats
 - etc. Check the slides.

Pixar
~~~~~~~
* precision enables scale
* ad hoc vs. physically based shading/rendering (Sunday's talk)

The Craft of Colour Grading
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* The colorist is a translator of the creative authors' ideas into reality. E.g. the director wants it "darker" - what does "darker" mean? Creamier, toastier, etc...
 - a participant in the authoring process
* Software called Baselight. Lots of sliders.

Filmic tonemapping for real-time rendering (videogames)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* HDR & linear color math
* New terms:
 - scene-referred (linear) -> scene-referred linear
 - output-referred (sRGB, the device) -> output-referred sRGB
* Cineon Log - used in VFX since circa 1993
* Filmic tonemapping
* Authoring a LUT (slides)

Film simulation for videogames
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Gotanda
* More detail than the GDC presentation
* Reinhard vs K-reversal
* The first approach was not a simulation, so the results were not satisfactory
* Simulation overview (slides)
* Equivalent neutral density to get a neutral gray

Color enhancement in videogames
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Naty Hoffman / Activision
* how to author 3D LUTs
* Artists should have properly calibrated TVs and viewing environments!!!!

[Wednesday]

Course: Advances in real time rendering in 3D graphics and games
===================================================================
http://advances.realtimerendering.com
(I) 9:00-12:15

Introduction - Natalia / Bungie
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Impressions: her talks are always interesting and well delivered. The points I found especially interesting:
 * Let users take screenshots: you learn what they look at and what they care about, which may be somewhere unexpected. Use that to guide graphics improvements.
 * Most of the time goes not into writing the engine but into integrating it into the pipeline, so choose pipeline-friendly techniques. (As an example, after hearing the later DICE and Enlighten talk, my personal impression was that getting Enlighten into their pipeline was a real struggle.)

Details:
* More than pretty pixels
* The visual bar for computer games has risen sharply - it's assumed that a AAA title will have outstanding visuals
* Focus on the whole package - physics, AI, etc.
* Bungie's REACH: BRDF, Cook-Torrance, terrain...
* How will you stand out from the crowd?
 - focus on what matters to the user - does the user see the nice water?
 - let users take screenshots, to see what they pay attention to
 - in the player's living room! Maybe SD TVs...
 - graphics is a realization of artistic vision - setting the mood - NPR graphics
* Games are a unique medium
 - pixels in motion: SSAO troubles...
 - input lag
* It's all about the pipeline
 - some IPs live long! reuse old assets...
 - need pipeline-friendly techniques. Programming the engine may be 30%; 70% is plugging it into the pipeline.

Avalanche and Toy Story 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~
About the Toy Story 3 game:
* 30 fps
* rendered at 640p
* all assets were made by Avalanche's artists
* Screen-space ambient occlusion
 - a technique based on line integrals
 - samples more efficiently than Crytek's method, and occluders don't suddenly vanish when the camera rotates, etc.
 - faster than Crytek's method
* Ambient light
 - a deferred technique splits the scene into two regions
 - a different ambient can be applied in each region
* Directional-light shadows (cascaded shadow maps)
 - 3 cascades of 640x640
 - blurred heavily to get a soft-shadow look
 - a "dynamic depth bias" (automatically derived bias) solves the self-shadowing problem; it introduces other issues, but after blurring and compositing into the scene they're barely noticeable.

Impressions: the SSAO looks quite good. Worth implementing.

Details:

SSAO
-----
1. How/where to sample - line integrals
 * continuous responses rather than discrete -> occlusion changes continuously
 * 2D rather than 3D. In Crytek's method you have to project into 3D
 - Crytek's method: 5 samples gets you ~2 effective samples. Their method gets 5.
 * SSAO line integral algorithm (slides)
 * x,y coords precomputed offline
 - samples with varying radii, to capture more occluders
 - chosen heuristically so they cover the unit sphere well
2. How to fake more samples
 - random rotations in 2D
 - aliasing problems with limited samples
 - create rotations
3. How to deal with large depth differences
 - need distance attenuation
 - paired samples -> check the heuristic in the slides
 * the problem is that you get half the samples, but it's worth it
Based on the Volumetric Obscurance 2010 paper.

Ambient Lighting
------------------
* Irradiance light rigs with SH
 - live-tweakable
 - negative light
 - SH can be blended in real time
* A deferred technique renders a volume to split the scene into 2 areas, so you can have 2 ambients (to mix indoors/outdoors)

Shadows
----------
* dynamic shadows - no light maps
* drop shadow for the main character
 - always projected straight down from the character
 - reprojection artifacts -> add a thickness check in the pixel shader
* cascades are slow for their complex scenes
* VSM was not used, because they wanted softer shadows
* 3 640x640 cascades, blurred a lot to get soft shadows
 - it looks fine and is very inexpensive
* 5x5 cross bilateral filter
* deferred shadows
* Depth bias problem
 - a static depth bias requires a lot of artist work to get the right value
 - dynamic depth bias, based on surface derivatives [Isidoro 2006]

A real time radiosity architecture for video games
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DICE (EA's studio in Sweden) integrated Enlighten into the Frostbite engine.
* Enlighten's (real-time radiosity) pipeline is split into a precompute stage and a runtime stage
* Any kind of light can be supported (even area lights)
* Frostbite classifies geometry:
 - static: receives and bounces light
 - dynamic: only receives
* Radiosity is computed on the CPU
 - a single bounce isn't realistic; the previous frame's data can be reused for multiple bounces
 - for outdoor scenes you don't need to update every frame (the light barely changes)
* The GPU does direct lighting and compositing

Impressions: it sounded like integrating Enlighten into the pipeline took considerable effort.

Details:
* EA DICE - Frostbite engine
* Not about the algorithm but about the architecture. They need to be able to change the algorithm.

Enlighten
----------
* cost and quality must be scalable!
* separate lighting pipeline
* Enlighten pipeline
 - precompute
 - runtime
* Indirect light can be very low resolution. It works.
* Compute radiosity on the CPU. Use the previous frame's result, so you get multiple bounces.
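That previous-frame trick, one gather step per frame that converges toward the full multi-bounce solution, can be sketched with a toy two-patch scene (entirely my own illustration; Enlighten's actual runtime is of course far more involved):

```python
# Toy sketch of per-frame radiosity reusing the previous frame's result
# (my illustration, not Enlighten code). Two facing patches exchange
# light; one bounce per "frame" converges to multi-bounce radiosity.

EMITTED = [1.0, 0.0]          # patch 0 is a light source
ALBEDO  = [0.5, 0.5]
FF      = [[0.0, 0.5],        # form factors: half of each patch's output
           [0.5, 0.0]]        #   reaches the other patch

def bounce(prev):
    """One gather step: radiosity = emitted + albedo * incoming light."""
    return [EMITTED[i] + ALBEDO[i] * sum(FF[i][j] * prev[j] for j in range(2))
            for i in range(2)]

radiosity = EMITTED[:]        # frame 0: direct light only
for frame in range(20):       # each frame effectively adds one more bounce
    radiosity = bounce(radiosity)
print([round(b, 4) for b in radiosity])
```

After a handful of frames the values settle at the infinite-bounce fixed point (16/15 and 4/15 here), which is why "use the previous frame" is enough to make a one-bounce solver look like a multi-bounce one.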
* (check the lightmap output in the slides) -> they are cacheable
 - depending on the scene, you have several options
 - e.g. outdoors, no need to update every frame, since the light barely changes with time
* use simplified geometry when computing light -> relight from the target geometry
* any type of light (area lights too)

Real time radiosity in Frostbite
---------------------------------
* to avoid unnecessary iteration times
* to handle dynamic environments
* pipeline
 1. classify static & dynamic geometry
  - static: receives and bounces light
  - dynamic: only receives
* rendering
 - CPU: radiosity
 - GPU: direct light & compositing
 - deferred rendering (check the pipeline in the slides)
* Artists don't have to wait to see lighting changes! Pretty cool demo. (?)
* They manually create the target mesh to avoid inconsistencies
* Enlighten since 2006, Frostbite integration since 2008 (their guinea pigs...)

Realtime order independent transparency and indirect lighting in DX11
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(AMD)
Order-independent transparency (OIT)
* Implementing OIT becomes easy with DX11.
* DX11 allows linked lists on the GPU, so you can build lists that resolve the draw order of translucent geometry.
* It currently needs a fair amount of memory, but that looks solvable soon.

Indirect illumination using indirect shadows
1. Build a 3D grid for indirect lighting with a DX11 compute shader
2. Render RSMs (reflective shadow maps)
3. Accumulate the indirect light. A large kernel is needed, but a dithered pattern fakes it, and a bilateral filter afterwards cleans things up.
4. Indirect shadows (same process as 3, but with blocked light instead of transmitted light)

Impressions: with DX11 this seems genuinely easy to implement.

OIT
----
* create linked lists on the GPU
* related: depth-peeling methods for transparency
* linked-list construction
 - 2 buffers: head-pointer buffer & node buffer
 - (check the process in the slides - seems easy)
* AA: store coverage info in the linked list
* Mecha demo

Indirect illumination using indirect shadows
--------------------------------------------
* Indirect shadowing
 - helps perceive subtle dynamic changes occurring in a scene
 - adds helpful cues for depth perception
* 4 phases
 1. Create a 3D grid holding blocker geometry for indirect shadowing (DX11 compute shader)
 2. RSMs
 3. Indirect light accumulation
 4. Indirect shadowing
* For (3), the kernel would need to be too big :(
 - don't use a full kernel for each screen pixel
 - use a dithered pattern of pixels which only considers a few pixels at a time
 - add a bilateral filter with up-sampling to smooth things out
* (4) is similar to (3), but with BLOCKED light instead
* Fairly simple implementation and fully dynamic

CryEngine 3: Reaching the speed of light
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Kaplanyan / Crytek
This session covered the fine implementation details of CryEngine 3. Especially interesting:
* The G-buffer's normal buffer is encoded very efficiently. Offline, they find the quantization that best covers the 24 bits, and when writing normals a 2D texture lookup produces the encoded normal. With the usual scaling approach only ~2-3% of the 24-bit space gets used; with Crytek's method, ~98%.
* Artists can place light clip volumes, to prevent the color bleeding problem of deferred lighting.
* Anti-aliasing is split in two:
 - near objects: post-filter AA
 - distant objects: temporal AA - keep the previous frame's data and interpolate
* Crytek has already started building its next engine.

Details:
- he's already working on the next next-gen engine...
Improvements in CryEngine 3
* Occlusion culling - software Z-buffer (aka coverage buffer)
* SSAO
 - encode depth as a 2-channel 16-bit value
 - half screen res
* Color correction, LUT in Photoshop
* Deferred shading
 - partial deferred shading (light pre-pass)
 - don't generate albedo in the 1st pass - add albedo in the 2nd pass
 - no AA :(
 - limited material variations
 - transparency...
 - G-buffer: 64 bits/pixel
 - best fit of the quantized normals in 24 bits -> solve it and bake it into a cubemap.
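The best-fit idea, finding a per-direction scale so that 8-bit-per-channel rounding lands as close as possible to the true direction, can be sketched with a brute-force search (my own toy; Crytek precomputes this offline and bakes it into a lookup texture):

```python
# Toy sketch of best-fit normal quantization (my illustration, not
# Crytek's code): instead of always storing the unit-length normal,
# search for the scale whose 8-bit rounding minimizes angular error.
import math

def quantize(v, scale):
    """Scale a unit normal and round each channel to 8-bit signed steps."""
    return [round(c * scale * 127.0) / 127.0 for c in v]

def angular_error(v, q):
    """Angle (radians) between a unit normal and a dequantized vector."""
    length = math.sqrt(sum(c * c for c in q))
    if length == 0.0:
        return math.pi
    dot = sum(a * b / length for a, b in zip(v, q))
    return math.acos(max(-1.0, min(1.0, dot)))

def best_fit_scale(v, steps=512):
    """Brute-force the scale in (0, 1] with the smallest error."""
    return min((s / steps for s in range(1, steps + 1)),
               key=lambda s: angular_error(v, quantize(v, s)))

n = [0.267, 0.534, 0.802]     # an awkward direction for naive rounding
naive = angular_error(n, quantize(n, 1.0))
fitted = angular_error(n, quantize(n, best_fit_scale(n)))
assert fitted <= naive        # the best-fit scale never does worse
```

Since only the direction matters (length is renormalized on decode), every scale of the normal is a valid encoding, and some land much closer to an 8-bit lattice point than the unit-length one does.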
 -> store only the most meaningful part in a 2D texture
* Physically based BRDFs
 - Phong BRDF (deferred only allows that model)
 - check Monday's course notes
* HDR vs bandwidth vs precision
 - PS3: RGBK (RGBM) compression
 - Xbox 360: R11G11B10 texture for HDR
 - use dynamic range scaling to improve precision
  - from the average luminance of the previous frame
  - detect the lower bound of the HDR image intensity
 - exponential tonemapping vs film tonemapping
* Lighting tools: clip volumes
 - deferred light sources tend to bleed
 - you could add shadows, but they are expensive...
 - so artists add a clip volume to avoid the bleeding (we should add this)
* Anisotropic deferred materials
 - extract the major Phong lobe from the NDF (normal distribution function)
 - microfacet BRDF model
 - approximate the NDF with spherical Gaussians
* AA
 - temporal flickering can't be solved with any post-process AA (including MLAA)
 - hybrid AA
 - post-process AA for near objects
  * a very simple method that checks color differences. MLAA won't solve everything anyway
 - temporal AA for distant objects
  * store the previous frame and depth buffer
  * reproject the texel into the previous frame
  - separate it with stencil and a non-jittered camera

(II) 14:00-17:15

Sample distribution shadow maps
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Intel
This technique sets the cascaded-shadow-map parameters automatically:
* find the min and max Z
* split into 4 cascades on a log scale
* artists no longer need to tweak anything
* the shadow map area is used efficiently, so you get the cleanest results
* source: visual-computing.intel-research.net/art/publications/sdsm/

Impressions: it uses a bit of CPU time, but this is a must-have. Setting up cascades is painful, so automating it is delicious.

* Naive shadow mapping
 - wasted space
 - perspective aliasing
 - projective aliasing
* CSM (Z-partitioning) is the base of this work
* Analyze the shadow sample distribution
 - find a tight Z min/max
 - partition logarithmically
* source code available: visual-computing.intel-research.net/art/publications/sdsm/
Clever. Clean results, and artists don't have to tweak anything.

Adaptive Volumetric Shadow Maps
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Shadows for hair, smoke, fog, etc.
* Like the AMD talk, implements linked lists in DX11.
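To make the SDSM idea above concrete: given the tight [z_min, z_max] extracted by analyzing the actual depth buffer, the logarithmic partition is a one-liner (my reconstruction of the standard log split scheme; the real implementation derives the range from the depth buffer on the GPU):

```python
# Sketch of SDSM-style cascade placement (my reconstruction, not Intel's
# code): split the tight z-range found in the depth buffer into cascades
# spaced on a logarithmic scale.

def log_splits(z_min: float, z_max: float, cascades: int = 4):
    """Cascade boundary depths, spaced logarithmically in [z_min, z_max]."""
    ratio = z_max / z_min
    return [z_min * ratio ** (i / cascades) for i in range(cascades + 1)]

# Tight range from the depth buffer instead of the full camera range:
splits = log_splits(z_min=2.0, z_max=512.0)
print([round(s, 1) for s in splits])   # [2.0, 8.0, 32.0, 128.0, 512.0]
```

The win is entirely in the inputs: with a hand-set camera range of, say, [0.1, 5000] the first cascade is wasted on empty space, while the tight per-frame range concentrates all 4 cascades on depths that actually contain samples.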
* Each node stores depth and transmittance (AVSM insertion)
* source: visual-computing.intel-research.net/art/publications/avsm/

* Realistic lighting of volumetric media (hair, smoke, fog, etc.)
* Compute the visibility curve: transmittance
* AVSM insertion: store depth and transmittance in each node
* max 5 nodes. Remove internal nodes; never compress the first and last node, since those give the important shadow cues.
* Variable-memory implementation
 - uses a pixel linked list (DX11) - check the OIT presentation
* source code available: visual-computing.intel-research.net/art/publications/avsm/

Uncharted 2: Character lighting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A talk about skin, hair, and cloth.

Skin:
* The skin shader is based on NVIDIA's human head demo.
* Has SSS (sub-surface scattering).
* 10 passes! To lighten it a bit, they sample with a 12-tap approximation.
* All on-screen characters share the same UV buffer; the more characters, the lower the resolution each gets.

Hair:
* Uses the Kajiya-Kay model.
* No self-shadowing.
* The deferred lights use the Blinn-Phong model, but hair really needs anisotropy.

Cloth:
* Not just diffuse! Cloth has a surprising amount of specular.
* Fresnel is needed too.

Bonus (5 secrets):
1. Avoid the hacks as long as possible
 - e.g. the orange glow on silhouettes
 - hacks look great only in still images
2. Avoid wraparound lighting models
3. Avoid blurry maps on faces - crank the detail!
4. Don't bake too much light into the diffuse maps
 - keep AO separate from the diffuse map
5. Make sure your AO and diffuse match

* Skin, hair, cloth & secrets
* Skin
 - based on the NVIDIA human head demo, with SSS
 - the light bleeds around the normals, so it looks more natural
 - 10 passes :(
 - to make it cheaper, a 12-tap approximation (ShaderX7)
 - they have a UV buffer shared by all the characters on screen. The more characters, the less resolution they get.
* Hair
 - Kajiya-Kay
 - no self-shadowing
 - deferred lights use Blinn-Phong
 - needs anisotropy. Blinn-Phong won't work for everything.
* Cloth
 - not only diffuse! also specular
 - it also has a Fresnel term

Destruction masking in Frostbite 2 using Volume Distance Fields
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DICE
Masking to make destruction efficient.
* Signed volume distance field: spheres are assigned to (placed on) the geometry
* Scale up where more detail is needed
* Uses a volume texture; 32x32x4 on Xbox 360
* Many small textures waste memory, so use texture atlases
* Dynamic branching (if statements) is inefficient on the RSX, so the mask is drawn as a deferred decal ("Volume Decals")
* Move computation to the SPUs -> distance-field triangle culling

Water flow in Portal 2
~~~~~~~~~~~~~~~~~~~~~~~
(Valve)
How the water flow was made:
* artist-friendly tools for creating flow maps
* two layers are used to smooth the flow
* adding an offset prevents visible repetition
* debris is also flowed (dirty water)
* physics is not currently taken into account

How do artists create flow maps?
* Visual goals
 - solve repeating-texture artifacts
 - flow around obstacles
 - vary water speed and bump strength
* Technical goals
 - work with existing reflective surfaces
 - minimum hardware: ps2.0b & Xbox 360
* Gameplay
 - in test plays, players got lost in the swamps
 - use the water flow to highlight the correct path (subtle movement)
* Scroll the normal maps in the direction of the flow maps
* With Houdini, artists can brush and paint the flow vectors
* Flow visualization (Nelson Max paper, 1995)
* Double layer (to smooth) with an offset (to avoid repetition)
* In Portal 2, debris also flows in the water
* Flowing color works better with a different offset (from - to +)
* Future: modify the flow texture with the physics engine, e.g. if something falls in the water

[Thursday]

The making of God of War III
===============================
This talk was aimed at artists. Main points:
* They wanted to preserve the style of GoW I and II.
* Scale is extremely important.
 - In the PS2 era, with no AA, shrinking the scale made the edges of small objects flicker, so big things worked better.
 - Big monsters; big stages (the Titans move!).
* Solid shapes of scale
 - solid shapes: clean forms (boxes, etc.)
 - in GoW, solid shapes that read as big - e.g. turning a box's edge into a sculpted bevelled edge
* The design rules for Kratos are very important.
 - e.g. his silhouette never changes
 - the animation is also tightly specified
 - crucial for outsourcing: the PSP version is made entirely elsewhere, yet the rules are properly respected
* The blood that accumulates on Kratos's body blends between 3 textures.
* A skin collision system lets the Titans be treated as stages.

Details:

Vision
* Going bigger and more epic than the previous games
* Things that couldn't be done on PS2, do them on PS3
* Represent the story through visuals: e.g. kill Poseidon, and the world floods
* Crank up the violence, so when you kill a god, you remember it
* Team structure: Game Director > Art Director > Visual Dev. - Env. Art - Character Art - Anim. - Tech Art/Special FX

Creating Art
* Creating environments from the blue-sky stage
* E.g. creating the Hades front
 - who is Hades? where is Hades? is it Greek enough?
 - color -> red
 - shape of the castle
 - rough 3D model
 - deliver to the level designer, see if it fits the gameplay
 - balance between gameplay and concept art
 - the final color was blue: red is too stereotypical
* Creating the sky dome for a 3D environment
 - Hawaii pictures as reference + concept art

Env. Art
* GOW style? re-examine the past
* Solid shapes of scale
* Solid shapes
 - clearly definable, solid shapes
 - lighting helps define the shapes
 - subdued textures don't detract from the shapes
 - take contrast and detail out of the textures, so you can see the shape well
 - solid: firm, hard, compact, heavy in nature or appearance
 - bevelled edge: solid! sculpted bevelled edge: GOW solid!
 -> structures feel solid and heavy
* Scale
 - influences on scale:
  - the GOW story: big struggles, big monsters
  - PS2: lower res, no AA -> edges of small objects shimmer a lot when the camera moves
* GOW III environments
 - solid shapes: similar to previous titles
 - scale: redefined scale

Character concept art (Izzy)
* Why?
 - maintain the established aesthetic
 - pre-viz characters
* maintain the feel of GOW while giving it a new look
* connect fantasy with recognizable Greek elements
* established aesthetic: confrontation and extreme violence
* scale
* "shotgun into a funnel" approach to design
 - shoot out many ideas
 - narrow down until reaching the final concept
* pre-viz game characters
 - work out the kinks before 3D models

Lead character artist
* Hades
* character kickoff meeting - get all departments together
* cinematic assets - check the GOW1 and GOW2 cinematic assets (pre-rendered movies) of Hades
* texturing and inspiration - burnt cheese pizza, for Hades's skin
* the face of Hades
* Kratos - how to translate Kratos to PS3?
 - maintain the silhouette
 - same assets for cutscenes
 - blood: lerp between 3 blood skin textures

Lead in-game animator
* create exciting contact-sensitive moves (CS, or QTE: quick-time events)
 - animations when Kratos interacts with other characters (kills them)
 - why CS moves? to show cool animations while still letting the user interact
 - button placement was changed, to keep the animation in clear view
 - good QTE vs bad QTE: one major action per button press
* Kratos's skin is white because in GOW1 the artist hadn't finished painting it when the director saw the sketch... and he liked the white skin.
* Kratos - keep him consistent
* Kratos rules
 - always grim and angry
 - never feels small or weak
 - even if everything is BIG, he's always in the center
 - never falls on his back until he dies
 - must always have forward movement
* brutality drawn from movies and comic books
* The Kratos rules are important so all animators learn to animate Kratos
 - the rules also apply beyond the animation department, e.g. sound
 - keep the character consistent from scene to scene
 - good for outsourcing - Ready at Dawn (PSP), Soulcalibur, LBP
* scale through animation
* The Chimera was cut on PS2, too complex
 - it's 3 characters in one
 - 3 types of anims
* Titans
 - skin collision system
 - to interact with them as a level

Games & Real Time
===================

User generated terrain in ModNation Racers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Requirements
* editability
* compact storage
* support for more than height
* efficient reconstruction and editing
Outline
* map-based storage
* GPU-based brush application
* brush-based editing and serialization - store brush strokes instead of maps
Maps
* all terrain data is stored in 2D maps
* height, surface type, etc.
* multi-res maps
Brushes
* a brush = a model and a shader
* signed additive blend
Bridges and tunnels
* cut up the terrain
* MIN/MAX blend
* tunnels can be buried with more terrain
Pinned level brushes
* rendered in a separate pass
Color brushes
* preset by theme
Ground cover map
* adds geometric detail at little cost

Irradiance rigs
~~~~~~~~~~~~~~~~~~
Goals
* distant light sources aggregated and evaluated in a "rig" per character
* generalization of Valve's wrap model

Practical Morphological AA (MLAA) on the GPU
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MLAA used in GOW III and Saboteur (PS3) * Issues for GPU implementation (check slides) * Discontinuities in color is computed as distance in L*a*b* space (isn't it slow to compute the space transform for every pixel??) * Electric lines are not properly corrected (mesh info is lost) L*a*b*色空間の変換処理はもったいないと思う。 後処理なので、temporal aliasingを解決しないし。 Curvature-dependent reflectance function for rendering translucent materials ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * CDRF * can synthesize SSS * Using LUT, just slightly slower than Lambert + Phong @ PS3 * TECMO KOEI, Team Ninja evaluated it - enough quality for fast skin shader - single pass rendering (?) * self-shadow can cancel the effect :( * even if in shadow, is it lit? - only focus on local features. Not considered depth * similar to GPU Gems 1? - Half-life 2. This is more realistic * Precomputation? for animated characters? - Gauss curvature... but there's no geometry shader in PS3 :( - it seems it raised lots of interest.... Global Illumination Across Industries ======================================== * Light bouncing around in a scene * Industries: anim movies, sfx, games Ray tracing in film production rendering ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Fajardo / Solid Angle SL (Spain) * pathtracing * Arnold - physically-based Monte Carlo ray tracer * Eg. "Pepe" 1999, one single artist * "Monster House" 2006, first use of pathtracing for a full CG movie * Cloudy with a chance of metaballs * 2012 * Alice in Wonderland * Intro to Arnold * Pros of pathtracing - single-pass: simpler pipeline - shadows are always perfect (no shadowmaps etc) - interactive (progressive refinement) - only one quality knob (number of samples) * Cons - slow? 
  - noise
  - indoors are hard
  - geometry must reside in memory at all times
  -> 3 orthogonal axes to improve:
    - make rays faster
    - reduce memory use -> geometry instancing
    - variance reduction -> importance sampling
* interactivity
  - it's very important that lighters get fast feedback
  -> CPU time: $1/hour vs artist time: $40/hour
* 3D motion blur, almost for free

Point-based GI for movie production
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Pixar
* PBGI
  - generate a direct-illumination point cloud
  - render GI using the point cloud
* used in at least 35 movies
* extensions
  - area light sources
  - soft shadows
* final gather for photon mapping
  - use point-based to make it faster
* GI in volumes
* interpolation for optimization; similar to the "irradiance cache" in ray tracing
.. this guy has 2 voices... it sounds like 2 guys are talking

Ray tracing vs PBGI
~~~~~~~~~~~~~~~~~~~~~~

DreamWorks
* ray tracing and irradiance caching
* they switched to PBGI
* "How to Train Your Dragon"
* ray tracing system
  - micropolygon deep frame buffer renderer
* PBGI pros
  - faster render times (4x)
  - bias vs noise = image stability
  - unified framework for many fx
  - good at large ray-footprint fx
  - geometric detail doesn't increase cost
  - spatial clustering more effective than tessellation LOD
  - only deal with points
* PBGI cons
  - bias vs noise = intensity shift
  - clustering limitations
  - lazy building not straightforward
  - so many points!
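The path-tracing notes above list "variance reduction -> importance sampling" as one axis of improvement, and noise keeps coming up in the ray tracing vs PBGI comparison. A minimal sketch of what importance sampling buys (not code from any of the talks): estimating the hemispherical integral of cos(theta), whose exact value is pi, with uniform sampling vs cosine-weighted sampling. For this integrand the cosine-weighted estimator has zero variance, so every sample returns the exact answer.

```python
import math
import random

def estimate_irradiance(n_samples, importance, rng):
    """Monte Carlo estimate of the integral of cos(theta) over the
    hemisphere (exact value: pi) -- irradiance on a Lambertian surface
    under a unit-radiance white environment."""
    total = 0.0
    for _ in range(n_samples):
        u = rng.random()
        if importance:
            # cosine-weighted sampling: pdf(w) = cos(theta) / pi
            cos_t = math.sqrt(u)                  # inverse-CDF sampling
            total += cos_t / (cos_t / math.pi)    # integrand / pdf == pi
        else:
            # uniform hemisphere sampling: pdf(w) = 1 / (2*pi);
            # cos(theta) is uniform in [0, 1] for a uniform direction
            cos_t = u
            total += cos_t * 2.0 * math.pi        # integrand / pdf
    return total / n_samples
```

With 20k samples the uniform estimator still wanders around pi by a percent or so, while the cosine-weighted one is exact; matching the sampling distribution to the integrand is the whole game, and it is why "number of samples" can be the single quality knob.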
* choose ray tracing if you want accuracy

Adding real-time PBGI to video games
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Michael Bunnell / Fantasy Lab
* adding PBGI to 2 video game engines
* goals
  - fully dynamic environments
  - deformable geometry
  - destructible geometry
  - use the same lighting system for characters and environments
* features
  - indirect lighting (color bleeding)
  - area lights
  - SSS
  - normal map support
  - shiny / glossy materials without env maps
* PBGI based on disk-to-disk radiance transfer
* avoid env lighting / AO (too much computation)
  - use area lights instead
  - halves computation
  - avoids the AO "horizon artifact"
* negative emittance used to avoid explicit visibility computation
* GI solution represented as a set of simultaneous equations:
  light emitted fwd = light received * surface color (...)
* infinite bounce simpler to compute than a single bounce, unlike the film renderer approach
  - converges in 5 iterations at most
* budget of about 5 ms to solve for sample point irradiance
* 2 approaches (differences in game engine)
  - use subdivision surfaces (tessellator handles attribute interpolation)
  - use a proxy mesh, and upsample from low poly (this engine had no tessellator)
* per-pixel PBGI
  - surface shader reads attributes
  - < 1 ms
  - like SSAO but without artifacts
* multiple irradiance values, for normal mapping
* treat the PBGI solver like a physics simulator
  - run after anim and physics
  - write 4 colors (3 irradiance, 1 SSS) per vertex
* diffuse lighting
  - vertex shader passes 3 values to the pixel shader
  - convert in the pixel shader (check slides)
* specular lighting (check slides)
  - problem: it's anisotropic
* negative radiance -> light leakage
  - fix with group dependencies (scalar value)
  - e.g. between 2 rooms, how much light will leak (?)
* the 3 vectors don't need to be orthogonal. It's not a basis.

Pre-computing lighting for video games
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Illuminate Labs -> now Autodesk
* baked lighting, for static scenes
* e.g.
  Mirror's Edge, by EA DICE (but these guys were ditched by DICE for Enlighten... ?)
* predictable performance
* local geometry changes
  - fill the room with small light probes

Real-time Diffuse GI in CryEngine 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Kaplanyan / Crytek
* dynamic objects -> dynamic GI needed
* Cascaded Light Propagation Volumes
* works only for diffuse
* RSMs (Reflective Shadow Maps) to sample surfaces
* lit surfels represented as VPLs
* propagate across a 3D grid
* rendering: sample a 3D texture with HW trilinear interpolation
* limitations (check slides)
  - sparse spatial sampling, low frequency only
* cascades
* extensions
  - transparent objects
  - GI on particles (check the paper)
* why does it work?
  - human perception of indirect lighting
  - cascades: importance-based clustering
* tools
  - mark objects as non-casters or non-receivers
  - clip areas: provide control over indoors
  - transition areas
* multiplied with SSAO
* deferred env probes
* fill lights and deferred lights
* console optimizations (check details)
  - 8-bit => 2 bands of SH
  - update every 5 frames
  - extremely fast -> 1 ms on PS3
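The LPV notes above mention VPLs stored as 2 bands of SH (and the impressions at the top mention Crytek's normal encoding details). A minimal sketch of that representation, not Crytek's actual code: projecting the clamped-cosine lobe max(dot(n, w), 0) of a VPL with normal n into the 4 band-0/band-1 real SH coefficients, and evaluating it back. The basis constants are the standard real-SH ones; only sign consistency between projection and evaluation matters here.

```python
import math

def sh_basis(d):
    # Real SH basis for bands 0 and 1 (common graphics convention)
    x, y, z = d
    return (0.282095,        # Y_0,0  = 1 / (2*sqrt(pi))
            0.488603 * y,    # Y_1,-1 = sqrt(3/(4*pi)) * y
            0.488603 * z,    # Y_1,0  = sqrt(3/(4*pi)) * z
            0.488603 * x)    # Y_1,1  = sqrt(3/(4*pi)) * x

def project_clamped_cosine(n):
    """4 SH coefficients of max(dot(n, w), 0): the unit-intensity lobe
    a VPL with normal n injects into its grid cell."""
    y = sh_basis(n)
    # zonal coefficients sqrt(pi)/2 and sqrt(pi/3), rotated to axis n;
    # the rotated band-1 factor works out to 2*pi/3 times Y_1m(n)
    k1 = 2.0 * math.pi / 3.0
    return (math.sqrt(math.pi) / 2.0, k1 * y[1], k1 * y[2], k1 * y[3])

def sh_eval(coeffs, d):
    """Reconstruct the stored lobe in direction d."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(d)))
```

The 2-band truncation peaks at 0.75 instead of 1.0 along the lobe axis and dips slightly negative behind it, which is one concrete reason the technique is restricted to sparse, low-frequency diffuse GI.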