Inigo Quilez is a mathematician and visual effects artist, a Pixar and Oculus alumnus, the creator of Quill (a VR painting and animation tool) and the co-creator of the online shader-sharing platform Shadertoy.
Quilez grew up in San Sebastián, Spain, studying electrical engineering at the University of the Basque Country before moving to Belgium and later the United States to pursue his passion for using equations to create art and solve problems in graphics, virtual reality and storytelling.
His credits while working at Pixar include Brave, for which he helped develop procedurally generated foliage, grass, trees and forest elements that solved challenges both artistic and computational.
While at Oculus, Quilez worked on two short VR films, the Emmy Award-winning Henry and Dear Angelica, the production of which saw him develop Quill.
In November last year, Quilez spoke at Siggraph Asia in Brisbane, discussing his love of maths and the power of equations to create beautiful images and artwork. He spoke with ShutterCG ahead of his keynote presentation.
ShutterCG: I’m curious about how your electrical engineering degree evolved into art and graphics?
Quilez: It didn’t evolve, actually. It just never bloomed; I never became an electrical engineer. I got my diploma, my master’s and all of that, but I never worked as an electrical engineer. There is a small overlap between the things an electrical engineer learns and the things a computer scientist learns. It’s not a lot, but the maths is similar.
I realised that maths and computer graphics were more interesting to me than designing antennas and I was like, ‘OK, I like the maths and the computers part of all of this’. So I started doing graphics on my own and started learning.
I got to know about Siggraph and I read papers on graphics, I learned about the Pixars and Disneys of the world and realised that was my thing.
Have you focused purely on math-based graphics or have you been interested in traditional polygonal modelling as well?
Not so much but I have done some of it. I still need to know how polygons work, and I use them. Sometimes it is convenient to bake mathematically generated images as polygons, if only to give something to an artist. So I do polygons and textures and things sometimes but it is not my main interest.
When I realised I could make content with maths, that was like, ‘OK nobody else is doing this’. I love maths, it’s super interesting and there is so much unexplored territory. It’s so easy to find new things, because nobody has been there before. It was like a drug, I wanted to learn more and more.
Not long after university you moved to Belgium to work in VR, way before the current generation of VR systems we have today. What was that like?
Quilez: As I understand it, what we were working on in 2005 was the second, if not third, time that computer graphics scientists had tried to do VR.
So I sometimes now refer to ‘the old days of VR’ and someone will correct me and say, ‘No, no, the old days of VR were the nineties’. You know, when they had that Nintendo and the Silicon Graphics machines.
I lived the second wave of VR and that was different from today’s VR. Back then it wasn’t consumer-focused. It wasn’t something that anyone could buy. It was something that only huge companies with millions of dollars could buy.
You had to dedicate a whole room to it. Have projectors illuminating all the walls and the ceiling and the floor. It was a CAVE, that’s what they called it. And you had a supercomputer in the room next door, tracking where you were in the space and adjusting all the projectors on the walls, doing stereo rendering for you to experience VR.
It was a crazy thing, and only companies like BMW and Audi and these kinds of engineering firms could afford it, but it was very good. Way better than, say, a current consumer headset in terms of pixel density and field of view.
Was there a head-mounted component to it?
The displays were on the walls and you just wore polarised glasses to get the stereo effect. But unlike in a 3D movie, where everyone sees the same stereo pair, in those systems we were using, which were powered by Silicon Graphics, the computer knew where you were as you moved around the room and readjusted all the virtual cameras so you had the correct, exact perspective. It was crazy, you could be walking around objects.
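The per-viewer camera idea Quilez describes can be sketched in a few lines. This is a hypothetical Python illustration, not actual SGI-era code; the interpupillary distance and all names are illustrative:

```python
# Minimal sketch: derive per-eye virtual camera positions from a
# tracked head position, as a head-tracked stereo system must do every
# frame. The wall projections are then rendered from these eye
# positions so perspective stays correct as the viewer walks around.

IPD = 0.064  # interpupillary distance in metres (illustrative value)

def eye_positions(head_pos, right_dir):
    """head_pos: (x, y, z) of the tracked head.
    right_dir: unit vector pointing to the viewer's right."""
    hx, hy, hz = head_pos
    rx, ry, rz = right_dir
    half = IPD / 2.0
    left  = (hx - rx * half, hy - ry * half, hz - rz * half)
    right = (hx + rx * half, hy + ry * half, hz + rz * half)
    return left, right

# Re-run every frame with the latest tracked head pose:
left_eye, right_eye = eye_positions((1.0, 1.7, 2.0), (1.0, 0.0, 0.0))
```

Each wall then gets an off-axis projection computed from these eye positions, which is what made walking around virtual objects possible.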
What things during that phase of VR did you learn that carried on into this generation?
Stereo disparity, how to prevent discomfort, how to do the maths of weird transformations. All of that applies to now too.
At Pixar, your job was to use procedural techniques to help fill out the detail of dense forest scenes in Brave. How advanced was their procedural graphics department at that time?
Not that much. As I understand it, during the early days of Pixar, the late 80s, they were using a lot of procedural stuff because they didn’t have as many artists, and they were opening a new branch of art and entertainment and there were not many digital artists around, so many of the assets and objects and textures were made by programmers using procedural techniques.
Then, as I understand it, it evolved throughout the 90s: they started getting art directors and lighting artists and modellers and all of those things, and they started using fewer procedural techniques because they don’t look as good as the work of an artist, especially if a programmer does it.
An artist’s work is easier to art direct too. If you paint a texture and you don’t like it, you go into Photoshop and erase the thing you don’t like. If you have to do it procedurally you might have to change an equation somewhere, some code, and compile it to make sure it looks good; it’s a lot. So they kind of stopped doing procedurals except for little things like particles floating in the air, the movement of the leaves in the canopy of trees, little things like that.
When I landed there they had a problem: they needed to render a huge forest. Like 70 percent of the shots were outdoors too; it’s trees, grass, moss, leaves, bushes, and they didn’t have enough artists to do that, nor did they have enough rendering power. This was RenderMan 16, I think, and they had machines with 4GB of memory; today movies are rendered with 100GB.
So going the procedural route, where you don’t load a billion little 3D meshes and a billion textures but instead replace all of it with maths, was the only option, so they had to embrace procedural stuff again.
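The core trick of replacing stored data with maths can be sketched like this. This is a toy Python illustration of the general technique, not Pixar's actual pipeline; all names and constants are made up:

```python
# Instead of storing a billion grass positions on disk, derive them on
# demand from a deterministic hash of each terrain cell. Rendering
# cell (i, j) always reproduces the same tufts, with zero stored data.

def hash01(i, j, k):
    """Cheap deterministic hash of three integers to a float in [0, 1)."""
    n = (i * 73856093) ^ (j * 19349663) ^ (k * 83492791)
    return (n & 0xFFFFFF) / float(0x1000000)

def grass_in_cell(i, j, tufts_per_cell=4, cell_size=1.0):
    """Positions of grass tufts inside terrain cell (i, j)."""
    return [
        (i * cell_size + hash01(i, j, 2 * k) * cell_size,
         j * cell_size + hash01(i, j, 2 * k + 1) * cell_size)
        for k in range(tufts_per_cell)
    ]

# Same cell, same tufts, every frame -- nothing is loaded from disk:
assert grass_in_cell(10, 20) == grass_in_cell(10, 20)
```

Because the positions are a pure function of the cell index, the renderer can generate only the cells in view and throw them away afterwards, which is what makes a forest fit in 4GB.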
They didn’t have much set up for me; I had to write it from scratch. I learned RenderMan, then went ahead and wrote a few plugins and things that would help me. I had to build some of the framework … and actually I had to get involved in a bit of the politics. Hopefully the images I produced spoke for themselves so I didn’t have to do as much convincing, but I did have to convince people.
I had to build the framework and then use it myself to do shot by shot work. It was super fun. I learned so much. And they were great. Pixar’s production designer loved that kind of process. I did an experiment and it took like four or five months to land at the right solution, he liked it and that opened the doors to do a lot of the movie in that way.
I understand that during your time at Pixar you also co-created Shadertoy?
Yes, that was 2013, with Pol Jeremias. We met by virtue of both being Spaniards living in San Francisco and being into rendering. We met through some common friends and spent a year and a half before Shadertoy doing something similar.
We would go to electronic music clubs in San Francisco with the computer, plug in the sound at the DJ stand, and start making real-time images synchronised to the music. Eventually, we ran out of ideas of what to display so we added this live coding system to it where you could show up in the club, plug it in and start coding a new effect, and people were seeing both the code and the images made from the code.
After a while, we left the whole clubbing scene because we were getting old, rapidly (laughs). Then we started Shadertoy together. The idea came from those days in the clubs and also my background in the demoscene, which is this mostly European community of people who do real-time graphics.
I put those two things together and we positioned Shadertoy like a YouTube for nerds. It’s about them, not about the code so much. So we needed to have names and accounts and have them able to comment and talk to each other and be a community. It’s worked well. I think we’ve got all the nerds! Me included. I’m user number one and I still spend so much time on it. I learn so much from it. Because I’m the kind of guy who explores the code and says, ‘What are they doing here?’. It’s super fun.
What got you into the demoscene?
The demoscene has many axes and interesting parts to it. One is the community, another is the fact that they still today work with retro machines.
There’s a demoscene event here at Siggraph and they’re using Commodore 64s and Amstrads. People today are writing programs for those old machines and creating graphics.
Another part of it that I really like is the size coding aspect, which is building beautiful graphics with a very small amount of code and computation. That’s the thing that I fell in love with because it required maths.
When you don’t have data, the only way to create content is through equations and maths and procedural techniques. I think that has links not only in my personal case but the industry in general. A lot of demosceners are now working in video games, taking those procedural techniques and using them to make terrains for open-world games or clouds or things like that.
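The terrain example Quilez mentions is a good illustration of "content from equations". Below is a toy Python version of the idea, a fractal sum of value noise; it is a generic sketch of the technique, not any specific game's implementation:

```python
# A terrain height derived from summed, smoothly interpolated random
# values (value noise) rather than from stored elevation data.

import math

def rand01(i):
    # Deterministic pseudo-random value in [0, 1) per integer lattice point.
    return (math.sin(i * 127.1) * 43758.5453) % 1.0

def value_noise(x):
    i, f = int(math.floor(x)), x - math.floor(x)
    t = f * f * (3.0 - 2.0 * f)            # smoothstep fade between lattice points
    return rand01(i) * (1.0 - t) + rand01(i + 1) * t

def terrain_height(x, octaves=4):
    """Fractal sum of noise octaves: big hills plus progressively finer detail."""
    h, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        h += amp * value_noise(x * freq)
        amp *= 0.5
        freq *= 2.0
    return h
```

An open-world game can evaluate `terrain_height` at any coordinate on demand, so an effectively endless landscape costs no storage at all.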
Many demosceners are coders but they have a good sense of visuals and style too. I have never worked at a game company myself and have never worked in game dev. But those I know from the demoscene who work for game companies sit right next to the artists.
Does gaming hold any interest for you?
I’ve never really played games myself. I mean I’ve played Tetris and Prince of Persia, the old DOS games, and maybe Monkey Island 2 and 3. That was a long time ago and I never really played games again.
I know about those things of course since I like graphics. I read all the relevant papers and read about the presentations that come out of GDC and Siggraph but I’ve not played or worked on games.
Do you ever feel like the real-time graphics industry is locked in step with the product cycle of mainstream, ‘current-generation’ hardware?
That’s a good question. I was having this thought the other day while preparing my presentation. I was realising that much if not all graphics development in the real-time space, which is mostly video games, has been conditioned by the hardware and by architectural decisions, which I think were wrong, made in the 2000s by the Nvidias and ATIs of the world.
They may have been the right decisions at the time but because hardware evolves so slowly, once you make an architectural decision on the hardware, it can take two or three generations to get rid of it … and that can mean six or seven years.
Whereas with software, which is what films use, if they want to change the algorithm they go and change it; the next movie is made with a different algorithm. Pixar switched from rasterisation to raytracing just like that, so easy.
With hardware, there is all this legacy and inertia. That, together with what I think were not the best decisions for the long run, means that game development has had to go down a route that is a little bit wasteful, and I think we could otherwise have been further ahead in 2019.
I think we lost about five years with respect to how some of the shading is done and how anti-aliasing is done, which is of course super important for cinematic quality.
On the other hand, because of those decisions we have things like screen-space shaders, which I love and which are the whole idea of Shadertoy, so there have been benefits.
What made you want to take the expertise you had in maths and create art with it?
That’s exactly the topic and title of my presentation, ‘Why I create images with mathematics’.
I can spend an hour talking about that, but really it’s because it’s very cool! That’s the simple answer.
I do explain how there are practical advantages to developing images with maths and how it can lead to jobs at Pixar and Oculus and things like that but at the core of it is because it’s cool. It’s a challenge.
Getting to tell people that 60 out of the 90 minutes of a film they saw in a theatre were effectively equations and maths, disguised as bushes, grass, wrinkles and trees. Telling people that, getting their reactions and having a chance to talk to them about it is what I really like.
In the beginning, I didn’t know anything about graphics from an artistic perspective, and I’m still learning. It was an open field, I learned about lighting and composition and colour grading and how to dress a set, make it pretty, avoid tangents so the image can be read better. All that learning was so cool. There was a great sense of progress and seeing it get better and better.
Some people go to the gym and get pleasure out of lifting more every day, for me it was getting better and better at creating images.
In addition to the challenging side, it was having an excuse to talk to people about maths and the creative use of maths. Because previously, everyone around me thought maths was just for accounting or designing a car. Early on, nobody in my surroundings knew that you could use maths in a creative way, as a brush to paint with.
There’s also that sense that nobody else has been there before. Nobody had tried to push maths all the way into an amazingly detailed 3D image. So having the feeling of being a pioneer, that was great.
Can you tell us about your time at Oculus Story Studio, developing Quill and working for Facebook?
The work I did at Oculus was super interesting. It was mostly trying to find out what the language of film and cinematography was in VR, and in that way we were pioneers, with quite a few other studios following later.
It was great, we assembled a team of people from Pixar and Dreamworks and Electronic Arts, all very good in their own disciplines, put them all together and we made two movies.
In those cases, we used maths not so much to make pretty images but to solve technical issues. It was early days in [this generation of] VR, so there was no Photoshop, no Maya, no tools or pipelines to build content for VR. Now at least we have Unity and Unreal which have VR modes in them and help you with a few things. Back in 2015, there was nothing. So going the math route was the only way to get certain things done.
So first we made Henry. I was in charge of making the fur, and I also made the eyes and the occlusion of the eyes, which is very important because he has huge eyes. Getting his eyes right is all about the emotion: you read his eyes. So making those eyes cute, expressive and not creepy was important. I also worked on lighting.
You know how you add colour grading to an image by tweaking contrast and curves and things like that? We needed something like that but in a 3D space, where you could apply a colour correction to guide the viewer to a given location, because in VR you can’t frame the camera. We needed a colour correction method that worked volumetrically in 3D space, not just in the region of a 2D screen, so I had to develop some maths for that. It was super fun.
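One way such a volumetric grade could work is to fade a colour correction with 3D distance from a focus point. This Python sketch is a hypothetical illustration of the general idea, not the actual Oculus Story Studio maths; every name and parameter here is an assumption:

```python
# Blend a colour toward a grade (tint) with strength that falls off
# smoothly with 3D distance from a focus point, gently emphasising the
# region the director wants the viewer to look at.

import math

def graded_color(color, position, focus, radius, tint):
    """color, tint: (r, g, b) in [0, 1].
    position, focus: (x, y, z) points in the scene.
    radius: distance at which the grade has fully faded out."""
    d = math.dist(position, focus)
    t = max(0.0, 1.0 - d / radius)   # 1 at the focus point, 0 beyond radius
    t = t * t * (3.0 - 2.0 * t)      # smoothstep for a soft falloff
    return tuple(c * (1.0 - t) + g * t for c, g in zip(color, tint))

# A point at the focus takes the full grade; a far point is untouched:
warm = (1.0, 0.8, 0.6)
at_focus = graded_color((0.2, 0.2, 0.2), (0, 0, 0), (0, 0, 0), 5.0, warm)
far_away = graded_color((0.2, 0.2, 0.2), (9, 0, 0), (0, 0, 0), 5.0, warm)
```

Because the blend depends on scene-space distance rather than screen position, the correction stays anchored to the location however the viewer turns their head.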
The second movie we made was Dear Angelica, and that changed a lot of things for me. That changed the next five years of my life. I was the VFX supervisor for that one but again, because we didn’t have tools and the whole story was illustrations in VR, I had to develop Quill.
I developed it in production. I was sitting next to Wesley [Allsbrook], the artist who painted everything. She was painting in early prototypes of Quill while I was getting the feedback from her and proposing things and writing new features.
The tool grew in production and by the time we finished I realised it was super powerful. We had seen Wesley and other artists from the studio producing beautiful art. Drawing in VR and creating stop motion animation is just amazing. It was magical.
Story Studio was closed when Oculus decided to focus on sponsoring external studios to make games and content. I took Quill with me and looked around to see who could host it, help me build a team and continue development.
In the end, I landed at Facebook, which was starting a new division for VR, separate from Oculus at the time. I told them what I wanted to do, pitched some ideas there and we grew a team.
I became the product manager for Quill. So I was doing strategy and marketing and the team management and things like that.
It was great because I was progressing towards my dream, which is to have 200 animators at Disney work for two years in VR and make the next Snow White, Lion King or whatever in VR, something great that changes everything. To get there, though, we have to convince them, create an ecosystem where people can consume the content, and create the tools.
We were making a lot of progress but I was getting further and further away from equations and making pretty pixels, so at one point I was like, ‘Quill’s going great, there’s momentum, I’ve got Facebook supporting it fully, the team has grown a lot, we’re having great results … it’s time to leave’.
I left in May 2019 and I’m now back to attending Siggraphs and making new shaders, and I have a YouTube channel as well. I’m back to doing more creative, visual, experimental work.
I’m also interested in moving into the education space with my YouTube videos. Shadertoy is also a way of teaching people, but it is designed for people like me who self-teach and prefer to learn on their own.
I want to start making content that explains how maths is a beautiful and creative tool to paint with, and how it is used in movies and games. I want to package that message and put it out there on YouTube and in other forms so everyone can enjoy it. Not just nerds who are there because they really like maths (laughs).