About My Target Roles

When it comes to roles, my two target positions are rigger and 3D modeler.

A rigger’s job is relatively complex. Besides pure rigging work, it is sometimes combined with motion capture tasks, since many studios making realistic or semi-realistic games use mocap to improve animation production efficiency.

Take Papergames’ “Love and Deepspace” (《恋与深空》) as an example. There, the rigger’s job expands from simple skeleton creation to things like: cleaning, optimizing and fixing mocap data; retargeting mocap data to characters with different proportions; combining physics simulation with mocap; and so on. Riggers also need to work closely with the motion capture tech team, animators, tech artists, and other roles to keep the whole production pipeline running smoothly.

Compared with traditional hand-keyed 3D game animation, mocap-based animation can actually be even more demanding, because the captured data still needs a final round of cleanup and polish. For a game of that scale, every single step is tedious to a degree that’s hard to imagine. On the plus side, though, the team size, budget, equipment and other resources are also relatively abundant.

In comparison, a modeler’s job is simpler, but the competition is also fiercer, because modeling is the most basic, lowest-barrier role in 3D. Judged purely by how hard it is to land a job, similar roles like level designer or environment artist might actually suit me better.

That said, a modeler’s work is not limited to 3D game studios. Many visual novel studios, 2D game teams, and even film and advertising companies all use 3D to help build scenes.

Aside from these two directions, I might also consider combining my undergraduate major with my future career. My bachelor’s degree was in industrial design, so from that angle, working in the advertising department of a product design company would be a great fit for me. I’m very familiar with how to present and promote products, and I’ve basically already mastered the 3D skills needed to make product promo videos.

That’s roughly how I’m thinking about my target roles.

Reference

Character Rigger Job Description: Salary, Skills & Career Paths

“Love and Deepspace” (《恋与深空》) First In-Depth Technical Sharing: Possibly Papergames’ Most Candid One Yet

Where I Want to Work in the Future

After deciding on a general career direction, the next very practical question is: where should I start my career?

Right now, my plan is to go back to Shenzhen for work after graduation.

First of all, Shenzhen offers a series of support policies for fresh graduates, covering things like entrepreneurship, housing, and household registration (hukou). Having lived in quite a few different cities, I feel Shenzhen is a very young and energetic place. China’s biggest game company, Tencent, also has its headquarters there.

Of course, besides Tencent, the Nanshan District of Shenzhen is home to all kinds of big and small game studios. I hope that in such a place, close to the core of the industry, I can quickly absorb mature development experience and make the transition from student to professional.

Working at a small company is also appealing in its own way, but my first choice is still Tencent. In recent years, Tencent has been pushing the “Bamboo Shoot Program” (“春笋计划”), setting up lots of small studios to incubate innovative projects. Joining one of these teams is my goal.

Their recent game “Lili’s Tiny Kingdom” (《粒粒的小人国》) has an art style that plays to my strengths. I could work on that kind of team as a 3D modeler or animator.

“Lili’s Tiny Kingdom”

Besides “Lili’s Tiny Kingdom”, many other Chinese companies have released promotional videos (PVs) for similar 3D casual games this year, which was a pleasant surprise for me. Before this, the realistic “next-gen” style was generally the mainstream in China, and I even spent some time learning ZBrush sculpting because of it. Fortunately, I now have more options that better fit my strengths.

Between realism and full-on cartoon style, there’s another company whose style I really like: Papergames (叠纸).

“Infinity Nikki”

Papergames’ “Infinity Nikki” was developed using UE5, which I’m familiar with. Also, their environment artist positions are among the few that explicitly welcome Blender users. If we only talk about software, Papergames is probably the company that best matches my skill set.

That’s roughly how I’m thinking about cities and companies for my future job.



Reference

12 Measures to Support Employment and Entrepreneurship for Hong Kong and Macao Youth in Qianhai – Authority of Qianhai Shenzhen-Hong Kong Modern Service Industry Cooperation Zone of Shenzhen Municipality

According to the news, Tencent plans to formulate a “spring bamboo shoot plan” and will increase the incubation of vertical games – laitimes

“Lili’s Tiny Kingdom” (《粒粒的小人国》) Official Website – A Healing New Life-Simulation Game Set in a Tiny Kingdom

Infinity Nikki Official Website – Side by side, the world unfolds in beauty

Some Practical Work I’ve Done on Games

Working on 3D games has always been my goal.

At the beginning, my dream was to join an indie game team. To move toward that, I once took part in a GameJam and teamed up with four classmates to form a temporary dev team. Our goal was to make a rough game demo within 24 hours.

Although the atmosphere at first was great, things unfortunately didn’t go very smoothly. The biggest problem was that we overestimated how much work we could finish in 24 hours. We gave our game a huge worldbuilding setup and an overly long timeline, so in the end the game we made could barely express what we originally had in mind.

Photos from Game Playtests

After that experience, I realized just how much work it takes to make a game. It was far beyond what I had imagined at first. I could of course consider switching to 2D or text-based games, but it’s obvious those directions don’t really match the skills I’ve been developing over the years.

After that, I wanted to dig deeper into the actual pipeline of making a 3D game. During the GameJam, I was only in charge of art, and didn’t really get to understand the production side, especially programming. So a few months ago, I tried teaching myself UE5.

This round of learning focused on the art-related parts of game development: modeling, making textures, unwrapping UVs, rigging characters, creating collision bodies, making environment animations, and so on.

Screen recording of the game I made

The learning process was really fun, and I ended up with a pretty good final result. But it also helped me understand even more clearly how hard it is to make a 3D game all on your own. Because of that, I probably won’t consider becoming an independent game developer. Even if I form my own team one day, my first concern will be whether the game can actually be finished. Overly complex worldbuilding and overly long playthroughs are things I’ll definitely try to avoid.

All in all, after graduation I still plan to prioritize joining a big game company and becoming a “small cog” in a giant machine. A lot of people have told me that working on this kind of production line is boring, because individual creativity isn’t valued that much. But the way I see it, there are plenty of people with good ideas, and the bigger a project gets, the harder it is to take every single person’s ideas into account. That kind of trade-off feels reasonable to me.

That’s roughly how my career goals have shifted over time.

Practice-Based Research – Applying 2D Composition Principles in 3D

In my previous experience as a 2D creator, I learned many composition principles, but after becoming a 3D animator, I initially assumed I would have to learn everything again from scratch. However, during this CGI class exercise, I experimented with applying 2D composition principles to 3D creation and saw positive results. The process was highly inspiring and has influenced my future direction, so I want to use this blog post to document some of my thoughts during the creative process.

(Final Render)

Creating a Sense of Space

At the beginning, it’s important to establish the general placement and spatial relationships of the objects.

In 2D composition, we often use overlapping objects to create a sense of space. I applied the same method here, using the stairs as a foreground element to enhance the sense of depth. At the same time, the stairs help link the first and second floors, preventing them from feeling disconnected. I applied the same method to the design of the floor and cabinets on the second floor, modelling them in an ‘L’ shape to strengthen the connection between the left and right walls.

Before and after enhancing the sense of space

Enriching the Composition of Shapes

In 2D, we often use the three elements—points, lines, and planes—to enrich the composition of shapes. I added the stair railing as a line to guide the viewer’s gaze and enhance the rhythm of the scene.

Small objects like books and bottles serve as points, adding detail to the scene, while the walls and floor function as planes, distinguishing different areas and expressing the stability of the space.

In addition to points, lines, and planes, it’s also important to consider enriching the shapes of objects by using different forms such as triangles, circles, and squares. For example, the desk is composed entirely of square shapes, so I designed streamlined legs for the chair to add variety to the composition.

Adjustments

In terms of color, I applied the 6:3:1 rule from 2D composition: 60% brown as the base color, 30% green as the secondary color, and 10% highly saturated red, orange, and purple as accent colors. For materials, I added various metallic decorations to enhance the distinction of textures.
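
To make the 6:3:1 split a bit more concrete, here is a tiny Python sketch that compares a palette against the rule. The color groups match the ones described above, but the area estimates are hypothetical placeholder numbers, not measurements taken from the render:

```python
# Rough check of the 6:3:1 (60/30/10) color rule used above.
TARGET_SHARES = {
    "base (brown)": 0.60,
    "secondary (green)": 0.30,
    "accent (red/orange/purple)": 0.10,
}

# Hypothetical estimates of how much of the frame each group covers.
estimated_areas = {
    "base (brown)": 0.58,
    "secondary (green)": 0.33,
    "accent (red/orange/purple)": 0.09,
}

total = sum(estimated_areas.values())
for group, target in TARGET_SHARES.items():
    share = estimated_areas[group] / total
    print(f"{group}: {share:.0%} of the frame (target {target:.0%})")
```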

When adding scene details, I also paid attention to the contrast between simplicity and complexity to highlight the cauldron as the focal point of the composition. There is already a lot of detail around the cauldron (books, bottles and so on), so piling more detail onto the cauldron itself would only make it blend in. Instead, I made the cauldron the ‘simpler’ element and let the surrounding ‘complex’ details set it off by contrast.

Finally, I used lighting to emphasize the cauldron as the focal point.

Summary

This was an attempt to apply the 2D knowledge I had previously learned, and it has been very inspiring for me personally. At the same time, this logical approach to constructing a scene is replicable, meaning I can use the same way of thinking, rather than simply relying on inspiration, for my future creations. I can also build on this foundation to refine my process and develop my own style. Therefore, this has been a very important experiment for me.

Self-reflection on the Simulated Work Experience

2nd-year student: Jiwon Ahn

This 3D animation project is themed around dance. In this experience, I was mainly responsible for two parts:

  1. Testing the simulation of fabric movement.
  2. Rigging the character’s skeleton and creating facial expressions.

Fabric Motion Simulation Test

This is the character design provided by the senior.

The animation involves many dance movements, and the character’s clothing is relatively loose. To better express the dynamic movement of the clothing during the dance, I decided to try using Marvelous Designer (MD) for fabric simulation to create more realistic physical effects.

I created a test animation in Blender and then imported the animated model into MD.

(Clothing model created in MD)

I then let MD simulate the fabric motion based on the animation and imported the finished simulation back into Blender. The result looks like this:

(The clothing moving with the body)

In this part, the main challenge was addressing issues that can occur during MD’s simulation, such as the fabric slipping, flying around wildly, or tearing. These problems can be solved with the following three methods:

  1. Select a stiffer material in MD and increase the fabric’s friction.
  2. Add a Solidify modifier in Blender to give the fabric some thickness.
  3. Use a Mask modifier in Blender to hide the parts of the body covered by the clothing (the Blender side of methods 2 and 3 is sketched below).
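
As a rough illustration of methods 2 and 3, here is a minimal Blender Python sketch. The object names (“Dress”, “Body”), the vertex group “under_clothing”, and the thickness value are hypothetical placeholders, not the actual names used in the project:

```python
import bpy

# Method 2: give the imported MD clothing some thickness, so it
# intersects the body less easily during playback.
dress = bpy.data.objects["Dress"]  # hypothetical name of the clothing mesh
solidify = dress.modifiers.new(name="Solidify", type='SOLIDIFY')
solidify.thickness = 0.005  # a few millimeters of thickness

# Method 3: hide the body parts covered by the clothing, so small
# clipping in those areas can never show up in the render.
body = bpy.data.objects["Body"]  # hypothetical name of the character mesh
mask = body.modifiers.new(name="Mask", type='MASK')
mask.vertex_group = "under_clothing"  # hypothetical vertex group marking covered skin
mask.invert_vertex_group = True  # keep everything except the covered area
```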

Character Rigging and Facial Expression Creation

(One of the character models I rigged)

The hair is rigged with physics effects, allowing it to sway automatically with the head’s movement, making it more natural and dynamic.

(Hair movement)

As for the facial expressions, here is the facial expression design provided by the senior:

(The width and length of the eyes and mouth can be adjusted)

I added controllers to the facial rig to adjust the width and length of the facial features, making it easy to adjust both their position and shape at the same time.

By using the ‘Shrinkwrap’ modifier and ‘Track To’ constraint, the facial features can move freely without deforming.
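
To give an idea of how such a setup looks, here is a minimal Blender Python sketch of the same two tools. The object names (“Eye_L”, “Head”, “Eye_L_target”) and the offset value are hypothetical stand-ins; in the actual rig the same idea is driven through the controllers:

```python
import bpy

eye = bpy.data.objects["Eye_L"]  # hypothetical facial-feature mesh
head = bpy.data.objects["Head"]  # hypothetical head mesh
target = bpy.data.objects["Eye_L_target"]  # hypothetical empty used as a controller

# Shrinkwrap keeps the feature glued to the head surface while it is
# moved or scaled, so it can slide around without sinking into the face.
shrink = eye.modifiers.new(name="Shrinkwrap", type='SHRINKWRAP')
shrink.target = head
shrink.wrap_method = 'NEAREST_SURFACEPOINT'
shrink.offset = 0.002  # small gap so the feature sits just above the skin

# Track To keeps the feature oriented toward its controller, so its
# shape stays readable instead of twisting as it moves across the face.
track = eye.constraints.new(type='TRACK_TO')
track.target = target
track.track_axis = 'TRACK_NEGATIVE_Z'
track.up_axis = 'UP_Y'
```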

Summary

I successfully met the senior’s requirements. Throughout this process, I learned Marvelous Designer (MD) and challenged myself with fabric simulation, something I had never tried before, achieving good results. This experience will also be valuable for my future projects.

At the same time, the facial rigging in this project was stylized and unconventional, but I was able to build the rig according to my senior’s requirements using the skills I had learned. The team was also satisfied with my results, and I am happy that, in my first team collaboration, I was able to make a contribution and play my part.

The Beginnings of 3D Rendering

Soma Cube

Let’s start with a small colored cube.

(Exploded 3D Rendered Soma Cube, 1968)

The Soma Cube rendered by Gordon Romney in 1967 is considered the first rendering of a complex, manipulable, colored 3D virtual object. [1]

Gordon Romney was David Evans’ first Ph.D. student in computer graphics, and David Evans was a pioneer in this field. David’s idea was to use raster scan technology (similar to that used in TVs) to generate realistic 3D images, and Gordon’s Soma Cube was created using this method.

However, that technology was very different from modern 3D rendering methods: it essentially drew the image pixel by pixel, rather than constructing 3D objects from points, lines, and surfaces the way we do today. This brings us to the third person in this story, Edwin Catmull.

The Birth of Points, Lines, and Surfaces

Edwin Catmull, also a student of David Evans, proposed an alternative way to generate 3D images.

(Edwin Catmull’s A Computer Animated Hand, 1972)

This hand is taken from a computer-animated short film created by Edwin Catmull and Fred Parke. It shows the movement of the hand and is the first animated 3D object generated by a computer. [2]

As you can see, this hand is quite similar to modern 3D models: it is made up of a polygonal mesh of triangular faces and vertices. However, unlike today’s 3D software, the process involved building a real physical model, drawing the triangular faces on it, measuring the coordinates of each vertex, and inputting them into the computer to rebuild the model digitally. [3]
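
That idea, points plus triangular faces defining a surface, is essentially still how meshes are described today. As a small illustration (made-up coordinates, not Catmull’s actual data), this is how a handful of measured points and triangles could be turned into a mesh with Blender’s Python API:

```python
import bpy

# Made-up "measured" coordinates, standing in for the vertex positions
# Catmull and Parke read off their physical hand model.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.5)]
# Triangular faces, each listing the indices of its three vertices.
faces = [(0, 1, 2), (0, 2, 3)]

mesh = bpy.data.meshes.new("digitized_patch")
mesh.from_pydata(vertices, [], faces)  # vertices, edges (left empty), faces
mesh.update()

obj = bpy.data.objects.new("digitized_patch", mesh)
bpy.context.collection.objects.link(obj)
```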

This project laid the foundation for 3D computer graphics and had a profound impact on the future of computer animation.

From Trials to Industry

By the 1980s, 3D rendering entered a period of rapid development and commercial application.

During this time, Pixar’s RenderMan rendering software and the adoption of ray-tracing algorithms pushed rendering technology forward. RenderMan supported complex lighting calculations and became a core tool for animated films like Toy Story. [4]

RenderMan uses physical rules similar to those in the real world to simulate the behavior of light, such as reflection, refraction, scattering, and more. By simulating these physical phenomena, RenderMan is able to generate highly realistic images.
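
As one tiny example of the kind of physical rule involved, the direction of a mirror reflection can be computed from the incoming ray direction d and the surface normal n as r = d - 2(d·n)n. Here is that textbook relation in plain Python; it is only a generic illustration, not RenderMan’s actual code:

```python
def reflect(d, n):
    """Reflect direction d about a unit surface normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray traveling diagonally downward hits a floor whose normal points up (+z):
print(reflect((1.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # -> (1.0, 0.0, 1.0), bounced upward
```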

After that, with the launch of commercial software such as Maya and 3ds Max, 3D animation became more widely used across various industries.

Summary

From the history of 3D rendering development, we can see that the core of 3D rendering has always been based on ‘simulation’. Simulating the physical laws of reality within a computer has been a key focus in 3D rendering technology development.

Building on an understanding of the physical rules of the real world, technicians use ever more efficient methods to reproduce similar physical behavior inside the computer and create realistic 3D imagery. This concept is not just part of the past; it will continue to play an important role in future innovation and development.

Reference

1. First Rendering: A history of 3D rendering

2. The rise of 3D animation—a journey through its evolution

3. A Computer Animated Hand – Wikipedia

4. Milestones: The Development of RenderMan® for Photorealistic Graphics, 1981-1988 – ETHW