Showing results for tags 'tips'.

Found 5 results

  1. Hello everyone, I recently had a few conversations with players on the lobby about "how to improve". When I am asked, I usually reply with: 1. watch replays, 2. watch ValihrAnt videos, 3. read this file. That file is what I am presenting in this thread: it's a collection of tips & tricks I initially shared with my friends and then made public on my GitLab page. It might be incomplete, it might be wrong in some of its parts, it might give importance to aspects that don't deserve it or vice versa, etc. I am not in a position to claim this is a manual for becoming a good player. However, it might be useful to somebody, and I'm happy to share it if it helps. The file is divided into three sections: 1. Interface, 2. Logistics, 3. Fighting. Each section contains a list of tips & tricks in no particular order; each tip comes with an extra tip. Cheers Source: https://gitlab.com/mentula0ad/0ad-tips-and-tricks
  2. We talked with @elexis about constant references, and I gave an example of why passing by value may be better than passing by constant reference. During today's refactoring I ran into other important things you need to know about constant references.

Aliasing

Take a look at the following code. Do you see a problem? (It's our code from ps/Shapes.h.)

    // Interface
    class CSize
    {
    public:
        // ...
        void operator/=(const float& a);
        // ...
    public:
        float cx, cy;
    };

    // Implementation
    void CSize::operator/=(const float& a)
    {
        cx /= a;
        cy /= a;
    }

If not, would the following usage example help you?

    CSize size(2560, 1440);
    // Normalize the size.
    if (size.cx)
        size /= size.cx;
    debug_printf("%f %f", size.cx, size.cy);

You'll get: 1.0 1440.0

That's because a refers to cx, so the first line of the operator modifies cx, and the next line just divides cy by 1. This can happen in any case where the argument aliases a member of the same object. I fixed the problem in D1809.

Lifetime

Another important thing is lifetime. Let's take a look at another example (fictional, because I haven't found a more detailed example in our code yet):

    std::vector<Node> nodes;
    // ...
    duplicate(nodes[0], 10);
    // ...
    void duplicate(const Node& node, size_t times)
    {
        for (size_t i = 0; i < times; ++i)
            nodes.push_back(node);
    }

At first look it seems OK. But if you know how std::vector works, then you know that on each push_back, std::vector may reallocate its array to extend it. Then all iterators and raw pointers into it are invalidated, including our constant reference. So after a few duplications, node may refer to garbage. You need to be careful with such cases.
  3. Simulating Real-world Film Lighting Techniques in 3D
Updated September 9, 2011 | By Lucy Burton
Source: https://software.intel.com/en-us/articles/simulating-real-world-film-lighting-techniques-in-3d

With all the advances in modeling and animation, an often overlooked but absolutely critical area of 3D scene creation is the proper use of lighting and rendering techniques. Good lighting and rendering can make even a simple 3D model look extraordinary, and poor lighting can make even the best model look bland. This series of tutorials covers not just the mechanics of lighting within the Autodesk* Softimage*|XSI* software package but also how to incorporate the language of light used in fine art, theater, and film into your 3D scenes to create truly cinematic views that are compelling for your audience. I'll explore the basics of real-world lighting scenarios and how to implement them in 3D, and because rendering is inseparable from lighting, I'll also discuss various rendering techniques such as Global Illumination, Final Gather (FG), Ambient Occlusion, and of course, high dynamic range imagery (HDRI) lighting.

Light and Shadow

In music, the silence between notes is as important as the notes themselves for creating emotion within a composition. Similarly, in lighting, shadows (their placement, direction, intensity, softness, and so on) are just as important as light in creating meaning and mood within a scene, as Figure 1 shows.

Figure 1. This still life demonstrates the importance of not just light but shading to a scene. Notice that although the image on the left is well lit with Global Illumination and FG rendering, it is still relatively flat. But with the addition of the directional shadows in the right image, cast courtesy of the Physical Sky shader in Autodesk* Softimage*|XSI*, light passes through a window and a curtain, shadows are cast by the leaves outside, and the scene gains additional visual interest and realism.
More often than not, beginners take one of two approaches: either flooding the scene with flat light, as if merely seeing the objects were enough for the viewer (or worse, simply bumping up the ambience to fill in poorly lit areas), or not lighting it enough in order to hide shortcomings in the modeling. When used properly, however, both light and shadow can actually enhance a model or animated character, adding a layer of drama or suspense to the scene. Understanding these basics is critical to realizing your vision.

Three-point Lighting

Three-point lighting is derived from a technique originally developed for theater by Stanley McCandless, who is widely considered the premier developer of lighting design in the United States. The variation used in film, television, and commercial product promotion uses three lights: a key light, a fill light, and a back light, each with a specific purpose. The key light (Figure 2a) is the main directional light on the object or character and is typically the brightest light source in the scene. The fill light (Figure 2b) is used to simulate the light bounced from objects and sources on the opposite side of the frame from the key light. Finally, the back light (Figure 2c) is used mainly to separate the character or object from the background by providing a slight halo effect off the back edges of the subject, such as a hint of light at the rim of the shoulders or off the edge of the hair.

Back light is not the same as background lighting, however. Three-point lighting is about illuminating the main subject, whether that's a character or the model of a product a company hopes to sell. Therefore, back lighting points toward that subject, not the objects behind it.

Figure 2. These images show a standard three-point lighting set-up. Note how the rays are cast to light the figure and to differentiate the subject from its background: (a) key light; (b) fill light; and (c) back light.
Autodesk* Softimage*|XSI* provides another handy tool for positioning your lights. By clicking the drop-down arrow of any 3D viewport, you can select Spot Lights, then choose any of the listed spot lights within your scene. The view will change to the perspective of the light itself (Figure 3), as if you were looking through it directly at the object it is lighting. From here, you can see precisely how much of the object is receiving light and where the umbra begins. You can also interactively adjust the spotlight by pressing the B hotkey to display the light's manipulators, and then pressing the Tab key to reveal the manipulators for the cone and spread angle of the light. The white exterior cone controls the Cone Angle value; the inner yellow cone determines the Cone Spread value. Simply click and drag the cone you wish to adjust, or Shift-click and drag the edge of the cone to manipulate both cones simultaneously.

Figure 3. Autodesk* Softimage*|XSI*'s Spot view

Key-to-Fill Ratios

The main thing that key-to-fill ratios (Figure 4) determine within a scene is how "contrasty" your final image will be. Low key-to-fill ratios are excellent for creating desaturated looks, such as those of a cloudy day where the overcast sky mutes any harsh directional light from the sun, or of places where there is a lot of bounce light from the surrounding environment, such as a hospital room or the white tile of a kitchen. Low key-to-fill ratios are also used for any atmosphere in which you want to create an impression of happiness, such as a child's room. In this sort of lighting scheme, there is a great deal of fill light, nearly matching that of the key light.

Figure 4. This series of images demonstrates a range of lighting possibilities. The first image (a) presents a standard lighting scheme; (b) shows a low key-to-fill ratio, and (c) a more dramatic high key-to-fill ratio.
The last image (d) demonstrates how a change of angle can alter the emotional character of the subject, making him look quite ominous, and can even seem to change the structure of the face.

Examples of high key-to-fill ratios include scenes with deep shadows and bright highlights, such as the stark lighting of film noir classics like Touch of Evil, scary films like Les Diaboliques, or the paintings of Rembrandt. In this kind of lighting, the key light is often set at a very high angle, producing sharp shadows and a triangle of light below the eye on the opposite side of the face from the key light position. The light is highly directional, and there is virtually no fill light on that side.

Color Temperature

Color can modify form and is a powerful visual and emotional stimulus within a design: it can cause objects or characters to appear to change dimension, reverse direction, or alter the interval between forms. It can even seem to generate motion within a scene independent of object animation. The colors you choose in your lighting establish the overall mood and reinforce the theme of the work as a whole.

Color temperature (measured in degrees Kelvin [K]) varies depending on the light source in question. All objects emit light when sufficiently heated; the degree of brightness is a function of temperature. Through a device called a spectrophotometer, any color can be equated with the temperature applied to a blackbody that produces it, which yields a Kelvin measurement (Figure 5). Candlelight is very warm in color and varies between 1850° and 1950° K, whereas a typical household incandescent light is about 3000° K, producing a color ranging between orange and yellow. Fluorescent lights give off cooler color tones ranging from green to blue and are between 3200° K and 7000° K. Daylight ranges between 5500° and 7500° K.
Figure 5. Color temperature chart in degrees Kelvin

Your color choices in lighting are also important when trying to composite 3D objects or characters with live-action backgrounds. In addition, films are registered to certain color temperatures: daylight-balanced film is 5500° K, and tungsten-balanced film is 3200° K. Even today's digital cameras use filters to achieve the same sort of effects, and on-set lights use specific gels to tint light sources. So, if you're working with photographers or cinematographers, you're going to need their shooting data if you want your scene to mesh well with their work. Additionally, the postproduction color timing used by film processors can change the scenic color balance.

Adjusting Gamma Correction and Contrast

Gamma measures the degree of brightness and contrast within the midrange luminance values of an image (Figure 6), whether in a photograph or via a video or computer device. When video is encoded and decoded, variations in the contrast values of the image occur. In a typical cathode ray tube television, the gamma value is 2.2, darker than the original 1.0 gamma of the video a camera records. Additionally, there are differences between Windows and Mac systems: Windows gamma encoding is 0.45 and gamma decoding is 2.2, while Mac OS X encodes gamma values at 0.55 and decodes gamma at 1.8. A Nintendo Game Boy displays images with a gamma value ranging between 3.0 and 4.0.

Figure 6. This series of images of a blue frog demonstrates how gamma affects an image. The second image from the left should be ideal for most devices.

What all this means in practical terms for the 3D artist is that you will need to adjust the output gamma values of your project depending upon what sort of display your project is likely to be viewed on.
Within Autodesk* Softimage*|XSI*, you can adjust the gamma values for a render pass by opening the Render Manager in the Render toolbar, and then clicking Pass > Edit > Edit Current Pass or Render > Render > Pass Options. Within the Pass Gamma Correction option, you can select Apply Display Gamma Correction. You can also edit the gamma of an image clip used as a material, light, or environment texture simply by selecting the geometry or light in question, clicking Modify > Texture, and selecting the relevant image clip from the submenu listings, or by double-clicking the image clip you want to alter from within Autodesk* Softimage*|XSI*'s RenderTree. From there, you can alter the HDRI or OpenEXR Display Gamma settings for your particular display as well as alter the Color Profile Gamma, either as an sRGB preset or with your own user-defined gamma settings. OpenEXR images and HDRIs have linear color profiles; Cineon and DPX images are logarithmic and, therefore, are automatically converted to a linear profile. Any 8-bit image is regarded as sRGB. Notice that within this window you can also control F-stop; exposure; and color correction for hue, saturation, gain, and brightness with value sliders. Additionally, you can animate all parameters by setting keys via the green animation curve marker to the left of each option.

Alternatively, you can correct gamma values globally during the compositing phase via a Color Adjust Operator within Autodesk* Softimage*|XSI*'s custom compositing application, the FXTree. This integrated system is based on the Avid Media Illusion toolset and has become increasingly powerful as versions have progressed.

Inverse Square Law: Light Intensity Attenuation/Falloff

The inverse square law of light states that as light waves radiate outward from their source, the intensity of that light decreases in inverse proportion to the square of the distance from the source.
In other words, an object twice as far away from a light receives only one-quarter of the intensity that exists at the light's origin. Light attenuation, or falloff, describes the gradually diminishing intensity as light moves through any medium, be it air, water, glass, or the subsurface scattering found in porous surfaces like marble or skin. Such light scattering can be simulated in 3D using lights and material shaders. Some light shaders use a linear falloff value. However, linear falloff tends to look less realistic, and visible noise results if you input the wrong value, because doing so actually makes the computer violate a natural law (namely, conservation of energy) and ends up producing hot spots of intense light in random places within the scene. It is far better to use Light Exponent Falloff mode, manipulate the inverse square falloff of the light source, and fine-tune the exponent values from there, as the examples in Figure 7 show.

Figure 7. In these two images, the physical distance between the light and the object has not changed: only the light exponent start/end falloff values have been altered. Therefore, the light's brightness in the first image diminishes before it can cast a full oval of light onto the table. Additionally, note that in the top image, the shadow cast by the pottery is much sharper, because in that image the shadow was generated via ray tracing, whereas in the bottom image the soft shadow is achieved by creating a shadow map.

Depth of Field

The human eye, unlike a computer, cannot focus all objects to infinity, so adding depth of field is an important option for bringing more realism to a scene. Depth of field settings simulate a plane of maximum sharpness and the surrounding region that is also in focus (commonly referred to as the circle of confusion) while increasingly blurring objects beyond this area.
By varying the combination of aperture size/F-stop and shutter speed, cinematographers control the depth of field in an image. A high F-stop/small aperture combined with a slow shutter speed yields a wide area of prime focus, or a large circle of confusion. A low F-stop/wide aperture combined with a fast shutter speed yields a narrow area of prime focus, or a small circle of confusion.

A depth of field effect can be simulated in 3D with a lens shader applied to the camera (Figure 8). The settings work the same way as they do on a real camera when the aperture dilates to let in varying levels of light. This cityscape demonstrates how to achieve a common film technique known as a rack focus, where the area of prime sharpness moves from the foreground to the background or vice versa. You might use this method to keep a person walking through the cityscape toward the camera in focus, in order to keep the audience's attention on that character.

Figure 8. In the top image, a lens with a focal length of 80 mm is used at f32, with a narrow circle of confusion (0.1) and a focal distance of 70. In the bottom image, the lens is still an 80 mm, but the F-stop setting has been changed to f2. In both cases, the depth of field strength is set at 0.07.

To animate this effect, you would simply set keys on the focal distance and F-stop at frame 1 using the first values, then advance the timeline to frame 60 and set another key with the second set of values, thereby creating a 2-second rack focus effect at 30 frames per second. However, effects executed using 3D shaders within scenes such as the one in Figure 8 can be extremely render intensive. A time-saving alternative is to use an output shader such as the Mental Ray 2D Depth of Field shader applied at the pass level, which uses z-depth information obtained during rendering to generate a global blurring effect in a postprocessing calculation within your composite (Figure 9).
Figure 9. Postprocessing a depth of field effect through FXTree nodes

Once a separate depth pass has been rendered, an image is created that contains information about the relative z-depths of all the objects in your scene. With that, you can go into the FXTree and plug in the relevant nodes to achieve the desired effect. The nice thing about this method within Autodesk* Softimage*|XSI* is that by simply passing your mouse over the composited image in the FX Viewer, the application will return a precise distance value for any object you point at, so that you can enter the corresponding focus values into the depth of field parameters. In terms of render time, the image in Figure 10 rendered virtually instantly, despite being a much more complex scene, with more geometry along with complex textures and displacement effects. So, this is something to consider when planning your production workflow.

Figure 10. Final render of a composited depth of field effect

Summary

Successfully working with complex 3D software packages like Autodesk* Softimage*|XSI* is largely about balance and selectivity: making conscious choices about which effects to apply where and what to leave off. 3D is ultimately about problem solving, case by case. This is why 3D is as much art as it is science, and a challenge to both right-brain creative artists and left-brain technical specialists. That's also why it's so much fun.

In upcoming articles, I'll demonstrate some specific techniques within the Autodesk* Softimage*|XSI* software package: how to use particular types of lights in combination with various shaders and textures, how to employ motion blur, the complexities of glows and volumetrics, how to create light rigs and animate lights, and how Autodesk* Softimage*|XSI* mimics the physics of light within a 3D scene to create realism.
Finally, future articles will discuss the range of rendering strategies and optimization techniques that you can implement.
  4. Hello, I thought it could be interesting for 3D artists and future 0AD contributors to get advice, tips, good links, interesting tutorials, etc. So why not share them here?