This page gathers together technical art tools, personal projects and development challenges that don’t really fit anywhere else. Older experiments in Unity and Unreal can be found on the ‘Realtime’ page, but I’ve decided to leave that page ‘as-is’, and separate the two by time.
Author: ajconcep
Physical Lighting
These were part of a verification test for physically plausible lighting (PPL) in lumen / UE5, as well as an exercise in rapid kitbashing. Each environment was created in around 4hrs from existing assets.
Spherical Normals 1
Houdini tool for generating spherical normals and pre-baked AO for foliage. Left is a tree with default normals – right is one with spherical normals. It was used on 'Avatar: The Last Airbender', primarily in the palace environments.
Edge flatten
I kept running into Megascans assets that didn't scatter well, since the exterior edge wasn't flat. This is a very simple HDA that automatically selects and warps a scanned mesh so the outside edge lies on a flat ground plane.
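The core of the idea can be sketched in a few lines: find how far each vertex sits from the mesh centre, and blend the height of anything near the rim down to the ground plane. This is a minimal illustration, not the actual HDA's logic or parameters.

```python
import math

def flatten_edge(points, falloff=0.5):
    """points: list of (x, y, z) tuples, y-up. Vertices in the outer
    `falloff` fraction of the mesh's radius are blended toward y = 0."""
    cx = sum(p[0] for p in points) / len(points)
    cz = sum(p[2] for p in points) / len(points)
    radii = [math.hypot(p[0] - cx, p[2] - cz) for p in points]
    r_max = max(radii)
    out = []
    for (x, y, z), r in zip(points, radii):
        # t is 0 at the inner edge of the falloff band, 1 at the rim
        t = max(0.0, (r - r_max * (1 - falloff)) / (r_max * falloff))
        out.append((x, y * (1 - t), z))
    return out
```

In the real tool the warp would act on the mesh boundary rather than a radial band, but the blend-to-zero falloff is the same idea.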
Machine Learning 1
This was a project to train a machine learning system in terrain erosion, to speed up the process of landscape creation. It used a dataset from Washington in the US, and a custom version of 'Pix2pix'. While the results looked promising, the resolution (256×256) wasn't high enough to be useful, and it turned into more of a learning exercise than a production-ready tool.
Machine Learning 2
This was another ML project, aimed at generating building footprints for my ‘Greenhouse’ tool. I eventually ended up writing a differentiable SVG renderer in an attempt to make the training fast enough to run on my home machine. Another interesting exercise, which turned into more of an academic challenge than something robust enough to actually do anything useful (an ongoing theme with ML tools).
Gaussian Splatting
Because Gaussian Splats are basically just a pointcloud with fancy point attributes, I wanted to see how easy they were to edit in Houdini. This example takes a splat from PolyCam, cleans it up in Houdini, and renders it in Unreal. Not quite performant enough to beat polygons just yet, but it’s a promising workflow for the future.
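Because each splat is just a point with extra attributes (position, scale, rotation, opacity, colour coefficients), cleanup is ordinary point-cloud processing. A minimal sketch of the kind of edit described above – culling near-transparent 'floater' splats and recentring the remainder – using plain dicts in place of real PLY records (the field names are assumptions, loosely following the common 3DGS layout):

```python
def clean_splats(splats, min_opacity=0.05):
    """splats: list of {'pos': (x, y, z), 'opacity': float} records.
    Drops near-invisible splats and recentres the rest on the centroid."""
    kept = [s for s in splats if s["opacity"] >= min_opacity]
    if not kept:
        return []
    n = len(kept)
    cx = sum(s["pos"][0] for s in kept) / n
    cy = sum(s["pos"][1] for s in kept) / n
    cz = sum(s["pos"][2] for s in kept) / n
    for s in kept:
        x, y, z = s["pos"]
        s["pos"] = (x - cx, y - cy, z - cz)  # recentre on the centroid
    return kept
```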
VAT Wind
A relatively complex tool to take a tree without a pre-built skeleton, and automatically rig it with curves for wind simulation. The result was encoded as a texture using SideFX’s VAT tools, and used in several projects in Unreal. The limit for this workflow is about 300,000 tris, but the result is very render efficient.
Alien Growth
‘Shortest Path’ alien growth simulation for a location-based entertainment pitch. The growth would shrink in light, and grow in darkness, and the guests would ‘fight’ the infestation using tracked torches and other light sources. The aim was to make a fast-paced co-op experience without resorting to the usual ‘give people guns’ approach.
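The grow-in-darkness / shrink-in-light rule can be shown with a toy version on a 1D strip of cells. The real effect used a 'shortest path' solve over a mesh; this only demonstrates the light-driven growth rule itself.

```python
def step(infested, light):
    """One simulation step. infested: set of occupied cell indices;
    light: set of lit cell indices. Growth spreads into dark neighbours
    and dies wherever a light is shining."""
    grown = set(infested)
    for c in infested:
        for n in (c - 1, c + 1):
            if n not in light:
                grown.add(n)                      # spread into darkness
    return {c for c in grown if c not in light}   # light kills growth

state = {0}   # single seed cell
lit = {3}     # a guest's torch pinning down cell 3
for _ in range(5):
    state = step(state, lit)
```

After five steps the growth has crawled outward in both directions, but the lit cell holds the line on one side.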
Gaussian Splats 2
Further tests in rotating, blending and combining multiple Gaussian splats together – this example blends 3 splats scanned in various locations around Bowen Island.
Cloud Sprites 1
Cloud renders from Terragen, created for a ‘four point’ sprite shader in Unreal
Cloudscape
Cloudscape using the ‘four point’ shader material, created as a proof of concept for the Appa flying shots in Avatar: The Last Airbender. The material also included a depth-based shadow system, so the flat cards would shadow each other.
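The 'four point' idea is that each cloud card carries renders pre-lit from four directions, and the shader blends them by how well each direction agrees with the scene light. A hedged sketch of one possible weighting scheme – a clamped-dot-product blend, which is an assumption, not Unreal's actual material graph:

```python
import math

def four_point_weights(light_dir,
                       axes=((1, 0, 0), (-1, 0, 0), (0, 0, 1), (0, 0, -1))):
    """Blend weights for four pre-lit sprite renders, one per axis.
    light_dir: scene light direction as an (x, y, z) tuple."""
    mag = math.sqrt(sum(c * c for c in light_dir))
    L = tuple(c / mag for c in light_dir)
    # clamp the agreement between the light and each baked direction
    raw = [max(0.0, sum(a * l for a, l in zip(axis, L))) for axis in axes]
    total = sum(raw) or 1.0
    return [w / total for w in raw]  # normalised so the weights sum to 1
```

A light halfway between two baked directions simply splits its weight between them, which is what keeps the blend smooth as the sun moves.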
Fire Country
Virtual forest fire created for a pre-rendered car process shot for Scarab Digital / 'Fire Country'. I created 4 looping VDB sequences in EmberGen and Houdini – which were then duplicated around 400 times across the 2-minute sequence.
Megascans Retexture 1
Megascans have become so ubiquitous in CG that some of the most used assets are now very easy to spot! This is a tool to automatically generate layered material masks in Substance, and allow scanned assets to be easily repurposed for different biomes.
Megascans Retexture 2
This example takes a detailed scan with a relatively featureless material, and repurposes it as an undersea rock formation covered in barnacles and seaweed.
Greenhouse Titles
‘Greenhouse’ is my continuation of the work started with the ‘Diorama’ tool at MPC – but in Houdini rather than Unity
Greenhouse volumes
Unlike ‘Diorama’, which was a point-based scatter, Greenhouse is a volume-based tool.
Building process
Volumes are subdivided based on modular rules – so a building volume is first split into floor volumes. Each floor volume is then further subdivided into window, wall or column volumes based on its position in the overall building.
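The two-level subdivision can be sketched as a nested split: a building box into floors, each floor into module slots tagged by position. The tag names and rules here (corners become columns, the ground floor stays solid) are illustrative assumptions, not Greenhouse's actual rule set.

```python
def subdivide(width, height, floor_h=3.0, module_w=2.0):
    """Split a building footprint into a grid of tagged module slots.
    Returns one list of tags per floor, ground floor first."""
    floors = int(height // floor_h)
    slots_per_floor = int(width // module_w)
    layout = []
    for f in range(floors):
        row = []
        for s in range(slots_per_floor):
            if s in (0, slots_per_floor - 1):
                row.append("column")   # building edges get structure
            elif f == 0:
                row.append("wall")     # ground floor stays solid
            else:
                row.append("window")
        layout.append(row)
    return layout
```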
Bounding box matching
The tool then uses a ‘best fit’ box matching algorithm to assign the most appropriate module from a library. The modules don’t have to follow a set grid, or a set scale, so it’s easy to mix and match from different modular packs with minimal manual setup.
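One way to phrase 'best fit' is: pick the library module whose bounding box needs the least non-uniform scaling to fill the target volume. The error metric below (sum of absolute log scale factors, so stretching and squashing are penalised symmetrically) is my assumption for illustration, not the tool's actual cost function.

```python
import math

def best_fit(target_dims, library):
    """target_dims: (w, h, d) of the volume to fill.
    library: {module_name: (w, h, d)} of available modules.
    Returns the best module name and the per-axis scale to apply."""
    def cost(dims):
        # |log| of each scale factor: 0 for a perfect fit, symmetric
        # for over- and under-sized modules
        return sum(abs(math.log(t / d)) for t, d in zip(target_dims, dims))
    name = min(library, key=lambda n: cost(library[n]))
    scale = tuple(t / d for t, d in zip(target_dims, library[name]))
    return name, scale
```

Because the match tolerates scaling, modules from packs built to different grids can still compete for the same slot – which is what lets the tool mix libraries with minimal setup.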
Grass packing 1
Because the tool simply matches tagged volumes with assets, exactly the same system can be used for vegetation. Here, a ‘grass’ volume is filled with cubes using a packing algorithm, and the closest fit is assigned from a vegetation library.
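A toy version of the fill step: greedily pack a strip of given length with the largest clump sizes that still fit, then each placed cube would be swapped for the closest-fit asset from the library. This is purely illustrative of a greedy fill, not the actual packing algorithm.

```python
def pack_strip(length, sizes):
    """Greedily fill a 1D strip with boxes, largest sizes first.
    Returns a list of (position, size) placements."""
    placed, cursor = [], 0.0
    for size in sorted(sizes, reverse=True):
        while cursor + size <= length:
            placed.append((cursor, size))
            cursor += size
    return placed
```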
Grass scatter 2
Early versions of the tool used a CSV file to transfer the data between Houdini and Unreal. This is the grass scatter example combined with vegetation sprites and an HDRI captured at the Squamish spit (before it was demolished).
Greenhouse city 1
For a full ‘city block’ demo, I chose the most convoluted, difficult location I could think of – Hotwells in Bristol, UK, near where I used to live. Many procedural city tools lack functionality for steep slopes, or even assume an American-style grid layout.
Greenhouse city 2
The same ‘bounding box matching’ workflow could be tweaked to produce roads, kerbs, pavements, paths and green spaces based on a single source image. This example lacks rooftops, which aren’t a very good fit for a modular system unless the surface is very uniform.
City render 1
Creating this city block took around 3mins, with plenty of scope for optimisation. The buildings in particular need more rules for practicality – such as not placing a wall right in front of a door…
City render 2
The building system has not yet been used for a final production layout, but has proved very useful for rapid prototyping and visual development. Further development is ongoing.
Underwater 1
The vegetation tools have been used on several projects, however. For speed and flexibility, I also moved away from CSV data transfers to native USD stages.
Underwater 2
VP showcase at Scarab Digital’s stage in Vancouver, demonstrating the underwater environment created with ‘Greenhouse’, and streamed in to nDisplay as USD.
Desert 1
Another demo environment, used for a Steadicam shoot at Scarab Digital. Desert layout in ‘Greenhouse’, streamed in as USD.
Rock faces
I’ve had some success with the ‘Greenhouse’ tool on cliff and rock faces, but the results have been a little too random and undirected to look convincing. Additional rules will be needed to make the module assignment more natural – likely based on physical weathering and erosion.
TATD Titles
Bunker Digital was lucky enough to be contracted by Dimension Studios to work on Roland Emmerich’s ‘Those About to Die’, which included a stint on set at Cinecittà studios in Rome. The production was one of the few big budget shows to continue filming through the 2023 SAG strike.
Circus Maximus 1
The bulk of my work on the show was the Virtual Production environment of the ‘Circus Maximus’, along with Levi Victoria, Ina Chen and another Bunker Digital artist – Kaitlin Perry.
Bunker logo
I’m credited as ‘Content Creation Supervisor’ on this show, subcontracting for Dimension under my company ‘Bunker Digital Inc.’
Circus Maximus 2
The VP environment was primarily used for the track-level fight scenes – and apparently had a very high percentage of ICVFX finals (‘In Camera VFX’ shots requiring little or no cleanup in compositing).
Circus Maximus Ref
The Unreal Environment had to match both the physical set build, and the VFX set extension work done by ReDefine, which really pushed the capabilities of UE5.
TATD software
‘Those About to Die’ was the first Unreal 5 virtual production show completed at Dimension
Circus Maximus 3
The VFX asset build was, as usual, extremely heavy, and required significant optimisation for realtime. We ran a semi-automated Houdini workflow for merging, cleaning, retopologising and re-UVing the stadium into 13 manageable sections.
Circus Maximus material masks
To simplify the massive UDIM tile count into something that could run in 48GB of VRAM, we used a fairly involved layered material / height blend system. An automated tool baked each section of the stands into a set of material masks – Red for dirt, Green for wear, Blue for rain staining, and Alpha as a material variance mask. We managed a very close match to the VFX final with less than 5% of the texture data.
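The channel-packed layout above can be shown with a tiny packing routine: four greyscale masks stored as one RGBA texture, one byte per channel. Plain lists stand in for pixel rows here; a production bake would of course go through an image library.

```python
def pack_masks(dirt, wear, rain, variance):
    """Pack four greyscale masks (floats in [0, 1]) into RGBA byte tuples:
    R = dirt, G = wear, B = rain staining, A = material variance."""
    def to_byte(v):
        return max(0, min(255, round(v * 255)))
    return [
        (to_byte(r), to_byte(g), to_byte(b), to_byte(a))
        for r, g, b, a in zip(dirt, wear, rain, variance)
    ]
```

One texture fetch then drives four blend layers in the material, which is where most of the texture-memory saving comes from.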
Rome optimisation
The overall ‘Rome’ asset was inherited from a museum, and had clearly been developed over a long period of time. Some assets were very simple (and pre-PBR), while the main temples were overly complex. The same workflow as the ‘Circus Maximus’ was used to simplify the heaviest buildings, and stop the nanite streaming pool from overflowing.
Colosseum
I was also responsible for final polish and stage delivery of the Colosseum / Ludus Magnus Entrance Environment, as well as the Judean workers camp. This included set dressing, Blueprint control utilities for stage operations, and the usual cleanup/retopo/retexture of the Colosseum itself.
The Bet 1
There isn’t a lot of ‘behind the scenes’ material showing the crowd system we developed – but fortunately Dimension have also released a short film called ‘The Bet’ which demonstrates a similar workflow.
Colosseum crowds 1
More than 100 extras were recorded at Dimension's volumetric capture studio, from which 24 looping sequences were created. These were baked to a set of flipbooks, including world normal and depth information. The result was a lightweight crowd setup that would still react to lighting changes, as well as shadow correctly.
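The shader side of a flipbook is just index arithmetic: given a loop time and an atlas of rows × cols frames, find the current frame and the UV window it occupies. The row-major layout here is an assumption, but the lookup is the same for any convention.

```python
def flipbook_uv(t, fps, cols, rows):
    """Return (frame index, UV offset, UV scale) for time t seconds
    into a looping flipbook laid out row-major in the atlas."""
    frame = int(t * fps) % (cols * rows)
    col, row = frame % cols, frame // cols
    return frame, (col / cols, row / rows), (1 / cols, 1 / rows)
```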
Colosseum crowd 2
The ‘Circus Maximus’ ended up with around 32,000 virtual crowd actors, while the Colosseum created for ‘The Bet’ had around 80,000. The current limit with this workflow appears to be around 130,000 if the surrounding environment isn’t too heavy.
Time bandits titles
I’m credited as ‘Virtual Art Department Supervisor’ on this show, and spent several months on set with Dimension Studios and DNEG360 in Wellington, NZ
Ice Age 1
This environment (Bingley in the Ice Age) was simply referred to as 'Snowy Plain'. The principal challenge was the snowstorm, which I created using a volume shader for the fog, and multiple layers of animated textures for the cloud and spindrift. The distant landscape is a detailed matte painting from DNEG's DMP team, but you can barely see it in the final frame.
Bunker logo
This was the first project working with Dimension Studios through Bunker Digital Inc. Around this time Dimension formed a partnership with DNEG to produce virtual production content, branded as ‘DNEG360’.
Ice Age 2
Snow scenes represent a particular challenge in Virtual Production, and require a lot of care to sell the blend between the practical and digital worlds. They also look VERY different on set than they do ‘in camera’, which adds to the challenge.
Stonehenge
Stonehenge featured several times during the show – at sunset, dawn, and night. I created and lit a large portion of this environment, with assistance from the matte painting department at DNEG. The surrounding landscape was a real location in New Zealand that was split into a multi-layered camera projection in Unreal, with CG grass and stone monoliths for blending with the practical set.
Time bandits software
This was the last show I touched that used Unreal 4, before the switch to UE5, lumen and nanite. I can’t say I miss lightmass!
Neanderthal Settlement
There aren’t too many ‘behind the scenes’ images of another significant chunk of my work – the Neanderthal Settlement in Ep. 7. This re-used all the snow techniques developed for ‘Snowy Plain’, as well as Houdini simulated snow buildup created by myself and Lukas Schwarzer.
Ice Age 4
The settlement is seen in three different lighting scenarios, including one at night with a realtime Aurora Borealis I created that seems to have made it to the final shot almost untouched!
Mansa Musa 1
The last significant chunk of work I completed on the show (and before I left Wellington) was the desert encampment for Mansa Musa’s caravan in Ep. 4
Mansa Musa 2
My work was primarily the BG landscape and lighting, as well as the integration of the scattered tents (courtesy of an extensive Blueprint scatter system built by Marko Seper). Four lighting scenarios were created, but I think only one of them made the final cut.
Sonic Title
‘Sonic’ was the last show completed by MPC Vancouver, and was led by Scott Russell. The ENV team was slowly reducing in size as contracts came up for renewal, and there was no hiring or onboarding to do, so I had more hands-on involvement in the show than normal.
Jungle 1
A large part of my work on the show was using the Unity-based ‘Diorama’ toolset I developed with John Vanderbeck for the creation of several complex fully CG environments. See the ‘Diorama‘ page for more information on the development of this tool.
Jungle 2
The dense jungle in the opening sequence was the most detailed, primarily done by myself and Emily Cho. Layout and temp renders were done in Unity, with the result exported to a 'shotPKG' for rendering in Katana/RenderMan
Sonic logos
Houdini was used to process the underlying terrain in all of these environments – though the detail is usually hidden by the resulting dressing
Jungle 3
Most of the vegetation was sourced from the library of jungle plants used on ‘The Lion King’ the year before – but was missing mid-scale elements. I therefore created a set of tree fern assets in PlantFactory, and textured them in Substance, with the assistance of Jeffrey Scott.
Jungle 4
The ‘real world’ pine forest environment was created in the same way, using assets from Megascans, as well as pine trees from the landslide sequence in ‘Pokemon’.
Chase 1
The production shut down Highway 17 on Vancouver Island for several weeks for the 'drone chase' sequence, but shot all the driving plates from exactly the same height. To get the dynamic camera moves the sequence needed, I set up a system in Nuke that reprojected the plates onto a smoothed version of the lidar scan, and re-rendered the (more interesting) shot cameras.
Chase 2
For the most extreme movements, we had to re-create the road surface entirely. To save time, this was done entirely in Nuke – including the wet road and puddle reflections. The projected environment was simply mirrored, blurred, and revealed with a mask.
Chase 3
A particularly challenging shot was a pan of the second drone vehicle, as we only had footage travelling forwards over the bridge. Jeffrey Burt in the layout department came up with an ingenious solution, reprojecting the same plate backwards for the first half of the shot, and aligning it with the same plate played forwards for the end. I reconstructed the missing middle section as a projected DMP, along with Hana Hirosaka and Nozomi Nakano.
Mushroom planet 1
The largest environment I worked on was the ‘Mushroom Planet’, again done in Unity using the Diorama toolset. The layout was published to Katana, and the resulting renders enhanced in DMP.
Mushroom planet 2
Diorama was retired by MPC after I left, but this sequence really showed the power of realtime instanced workflows. The first version of the layout was heavy, but still navigable in Unity – but it was too large for RenderMan, and had to be significantly scaled back in order to render in a reasonable time.
Tails 1
I believe this is the last shot I worked on at MPC. The far background was from a drone shoot done in Squamish by Scott Russell and augmented by Hana Hirosaka. I added the FG vegetation as a set of projected cards. I was also, bizarrely, responsible for the animated UI of Tails' handheld tracker, which was created in Nuke. I would not recommend doing this…
Avatar titles
In 2021, I co-founded the company 'Bunker Digital' along with Krystian Guevara and Ariel Lorenzo-Luaces. One of the first large projects we landed was client-side supervision on Netflix's live action remake of 'Avatar: The Last Airbender'
Wolf cove 1
I was brought on to the project only a few weeks before principal photography, with many of the virtual environments not yet optimised for shooting. My role was to work with the external vendors, my team at Bunker Digital, and the stage operations team from PXO to make sure each environment was as polished as it could be for the volume, given the available time and technology.
Wolf cove 2
Some ‘Wolf cove’ shots were more successful than others, but in the end the VP environment was just used for lighting, and to give the actors a sense of place – the entire background was replaced in post by ILP
Bunker Logo
Thanks to Netflix and Bunker Digital for permission to show these images – all captured in camera on the LED volume in Vancouver before any VFX augmentation or replacement.
Appa 1
The on-set backgrounds for the Appa flying sequences were created by Mold3D and Bunker. True volumetrics were too expensive to render in realtime (this was UE 4.27), so the system was sprite based. There’s some more information on the 4-point lighting setup in the ‘techart‘ section. All backgrounds replaced in post.
Avatar software
The bulk of the virtual production environments were used as reference, and replaced in VFX. My work finished as soon as principal photography did, so I'm having to make some guesses as to what remains in the final image. Happy to correct any mistakes – definitely don't want to take credit for any work done (or redone) by another vendor!
Omashu 1
Background DMP for the sky and mountains surrounding Omashu, based on renders from DNEG
Omashu 2
Optimisation and shader work (particularly on the vegetation) for the Omashu Palace environment. See the 'techart' section for more information on the technique. Main build done by Dimension Studios.
Omashu 3
The Omashu Arena environment was a smaller, more contained environment – and it looks like a handful of shots made the final edit without wholesale replacement (though with a lot of FX and destruction work on top from OutpostVFX)
Omashu 4
The Omashu throne room was the most successful ICVFX environment – very well suited to the scale and layout of the volume (the biggest in the world at the time). Main build by Dimension, with supervision from myself and Andrew Budyk. Optimisation and additional shader work by Bunker.
Roku’s mountain
I created an animated flowmap shader and the sky for the Roku’s Mountain sequence. Seemed like a small thing at the time, but the same shader has been used on virtually every show since. Augmentation by BigHugFX, but it looks like some of these backgrounds made the final frame.
Burnt forest
The ‘burnt forest’ was a fairly successful ICVFX environment (and one that made an early trailer), but was partly replaced in post for story reasons by Nexodus.
Agna Qel'a
As the environments got bigger and more complex (especially in the later episodes), we started to hit the limit of what Unreal 4 was capable of. Hardware, software and virtual production techniques have all improved significantly since the shoot, so if produced today, more shots would likely have made it to final. This environment was taken over in post by Image Engine.
Agna Qel'a concepts
Concept paintings to help guide the DNEG build team on the creation of Agna Qel’a
Agna Qel'a throne room
Texture, shading and lighting on the Agna Qel'a throne room environment. Supervision by myself and Andrew Budyk, build by Dimension Studios.
Spirit Oasis
The 'Spirit Oasis' environment was an ambitious build for both the practical set and the virtual background, especially baking lighting for complex vegetation and multiple lighting scenarios. ICVFX build by Dimension Studios, taken over by ScanlineVFX for post.
Avatar Kuruk
FX development for the Avatar Kuruk sequence – heavily treated in post, but looks to have made it through to final.
Minstrel Cave
Minstrel cave environment – created as an emergency backup when a practical location fell through. Main build by Dimension Studios and Bunker, taken over in post by Untold Studios.
Diorama Title
One of the pieces of work I’m most proud of from my time at MPC was ‘Diorama’. I had the opportunity to co-present this work at SparkFX in Vancouver, and at Siggraph 2019 in LA.
Diorama explanation 1
Diorama is an environment layout and set dressing toolset built on top of Unity, and integrated into MPC’s Genesis platform. I designed the tool specifications, and acted as product owner. All software development was done by John Vanderbeck.
Diorama explanation 2
Diorama uses a combination of static artist-authored templates and procedural rules to create a fast, art directable layout tool. In many ways it can be thought of as a 3D version of Photoshop’s bitmap-based brushes.
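The 'bitmap brush' analogy can be sketched directly: a template of instance offsets gets stamped along a stroke at a fixed spacing, the way Photoshop scatters a tip image. Deterministic here (no jitter) so the behaviour is easy to follow; the real tool layers randomisation and procedural rules on top of this.

```python
import math

def stamp_stroke(stroke, template, spacing=1.0):
    """stroke: list of (x, y) points; template: list of (dx, dy) offsets
    making up one 'brush tip'. Returns instance positions stamped every
    `spacing` units of arc length along the stroke."""
    placed, travelled, next_stamp = [], 0.0, 0.0
    for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        while next_stamp <= travelled + seg:
            t = (next_stamp - travelled) / seg
            px, py = x0 + (x1 - x0) * t, y0 + (y1 - y0) * t
            placed += [(px + dx, py + dy) for dx, dy in template]
            next_stamp += spacing
        travelled += seg
    return placed
```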
Diorama logos
We had a lot of support on this project from Unity Technologies, particularly Mike Wuetherick and the M+E division.
Diorama Demo 1
I created this showcase environment to demonstrate the use of the tool. This was originally intended only for internal use, but ended up being the first part of our talk at Siggraph.
Diorama demo 2
Each station demonstrated one of the features of the tool, including clustering, physical scatter and a quite advanced layer-based culling system.
Diorama demo 3
For the second part of our talk I created a more realistic environment layout demo, inspired by an area of the Sunshine Coast. The assets are a combination of my own library and packs from ‘NatureManufacture’, who were kind enough to help us prepare for Siggraph.
Diorama WIP
Midway through the layout demo (approx. 2 mins of work)
Diorama Final
Final layout example from one of the demo run-throughs. Approx 4 mins of work (see the embedded video to see it done live).
Diorama Siggraph Final
Two other layout examples using the same Diorama Templates
Diorama Pokemon
Diorama was used in some way on most shows that came through MPC Vancouver – including 10,000 billboards for the Ryme City environment on 'Pokémon Detective Pikachu'.
Diorama Raytracing
I really wanted to see how a Diorama environment would perform with realtime raytracing – these are my first tests with an alpha of 2019.3.
Diorama Video
John and I were invited by Unity Technologies to present ‘Diorama’ at Siggraph 2019. Unity were kind enough to record and release the talk online.
Realtime Titles
Despite starting off as a 2D concept artist, I’ve found a lot of my personal projects have drifted into technical experiments in realtime and VR.
Houdini water 1
Houdini ocean test. I found it's possible to bake 10.6 secs of a Houdini ocean sim to an 8k flipbook as vector displacement data. The texture is huge, but since it's only sampled once, it's extremely fast.
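The budget behind that figure is easy to check: an 8192px atlas divided into square tiles gives a fixed frame count, and the frame count divided by the playback rate gives the loop length. The 512px tile size is my assumption for illustration, but it is consistent with the ~10.6 second loop quoted above at 24 fps.

```python
def flipbook_budget(atlas_px=8192, tile_px=512, fps=24):
    """How many frames fit in a square atlas at a given tile size,
    and how long that flipbook loops at a given frame rate."""
    per_side = atlas_px // tile_px          # tiles along one atlas edge
    frames = per_side * per_side            # total frames in the atlas
    return frames, frames / fps             # count, loop length in seconds
```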
Houdini water 2
Houdini Vector-displaced water in Unity HDRP
Houdini Engine
Two Houdini procedural assets combined in Houdini Engine for Unreal
Realtime Houdini
Houdini procedural assets – one that warps photogrammetry scans to fit a custom shape, and one that grows dripping moss on arbitrary objects.