Texture filtering can become unrealistic when magnifying. When the viewer is close to a textured surface, single texels start to cover many pixels. Linear magnification filtering of these texels produces an unrealistically smoothed image with little surface detail. Not only does the image look unrealistic, but the lack of high frequency spatial information on the surface makes it more difficult to get realistic height and motion cues when moving over the surface.
Ideally, every texture would have enough fine levels that any normal view of the textured surface always has sufficient high frequency spatial data. But providing extra levels is expensive. With mipmapping, each finer level requires four times as many texels as the next coarser one; going from a 512 x 512 top level to 1024 x 1024, for example, adds roughly 4 MB of texel data at four bytes per texel, compared to about 1.3 MB for the entire original pyramid. In some cases it is worth it: the finer levels contain much more visual information that is useful to the application.
But sometimes it is not. A very high resolution image of an object will contain surface details, but those details can be very similar across the surface. For example, a close-up photo of a road may show a lot of asphalt detail that is much the same across the entire road. Providing a mipmap level of this detail would consume a lot of texture memory without adding much useful image data. Yet this detail provides important motion and height cues, and keeps the surface from looking too blurry.
A detail texture is one solution to this problem. A representative section of a high resolution image is chosen, and its high frequency information extracted. The extracted information is stored in a small texture that contains just a fraction of the entire image.
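One way to do the extraction, sketched below, is to resample the chosen section down to the resolution it has in the coarser base texture, resample it back up, and keep only the biased difference between the original and this blurred copy. The 256 x 256 section size, the luminance format, and the use of gluScaleImage for the resampling are illustrative assumptions rather than requirements.

/* Build a detail texture from a 256x256 luminance section of the
 * high resolution image.  Low-pass the section by resampling down and
 * back up, then store the signed difference biased to midgray. */
#include <GL/glu.h>
#include <stdlib.h>

GLubyte *make_detail_texture(const GLubyte *section)  /* 256x256 */
{
    GLubyte *coarse  = malloc(128 * 128);   /* downsampled copy        */
    GLubyte *blurred = malloc(256 * 256);   /* re-upsampled (low pass) */
    GLubyte *detail  = malloc(256 * 256);   /* biased difference       */
    int i;

    gluScaleImage(GL_LUMINANCE, 256, 256, GL_UNSIGNED_BYTE, section,
                  128, 128, GL_UNSIGNED_BYTE, coarse);
    gluScaleImage(GL_LUMINANCE, 128, 128, GL_UNSIGNED_BYTE, coarse,
                  256, 256, GL_UNSIGNED_BYTE, blurred);

    /* Bias by 128 so the signed difference fits in an unsigned byte. */
    for (i = 0; i < 256 * 256; i++) {
        int d = (int)section[i] - (int)blurred[i] + 128;
        detail[i] = (GLubyte)(d < 0 ? 0 : d > 255 ? 255 : d);
    }

    free(coarse);
    free(blurred);
    return detail;   /* caller loads this with glTexImage2D */
}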
The main mipmapped texture can then have fewer, lower resolution levels. When the viewer is close to the textured surface, the detail texture is combined with the filtered base texture to add high frequency information to the result. Since the detail texture is small, its pattern is repeated over the entire visible surface.
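One portable way to do the combination is with two texture units and the OpenGL 1.3 GL_COMBINE texture environment, rather than any vendor extension. The sketch below assumes the detail texture stores differences biased to midgray, as built above, and that base_tex and detail_tex are existing texture objects.

/* Base texture on unit 0, repeated detail texture on unit 1.
 * Assumes OpenGL 1.3 (multitexture and the GL_COMBINE environment). */
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, base_tex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, detail_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

/* result = previous + detail - 0.5; the midgray bias cancels, so only
 * the high frequency variation is added to the base color. */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_ADD_SIGNED);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE);

/* Unit 1 needs its own texture coordinates (glMultiTexCoord2f or
 * texgen); scaling them repeats the small detail pattern across the
 * surface.  The factor of 8 is arbitrary. */
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glScalef(8.0f, 8.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);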
It is assumed that the detail texture contains only high frequency image features. These features change rapidly even across the small detail texture, so there are no low frequency components to cause tiling artifacts when the detail texture is repeated across the textured surface.
Detail textures should not contribute anything when the base texture is not magnifying. When implementing detail texturing, you must be careful to fade in the detail texture contribution as a function of the magnification of the base texture.
One way to do this is to gradually blend in the detail texture contribution as a function of distance from the textured surface. In many cases, application-specific constraints can simplify the problem. For example, a flight simulator may have a look-down mode that needs only the height above ground and a precomputed scaling factor to determine the magnification level. If the simulator's view frustum brings the entire visible textured surface into view at nearly the same magnification, this solution can work well.
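A minimal sketch of the look-down case follows. The function name, the texels_per_meter scale, and the ramp from 1x to 2x magnification are assumptions chosen for illustration; the resulting fade value can drive whatever blend mechanism the application uses.

/* Estimate base texture magnification from height above ground and
 * derive a detail fade factor for a straight look-down view. */
float detail_fade(float height_above_ground,    /* meters              */
                  float texels_per_meter,       /* base texture scale  */
                  float pixels_per_meter_at_1m) /* from fov & viewport */
{
    /* Approximate screen pixels covered by one base texel at this height. */
    float magnification =
        pixels_per_meter_at_1m / (height_above_ground * texels_per_meter);

    /* No detail at or below 1x magnification, full detail by 2x. */
    float fade = magnification - 1.0f;
    if (fade < 0.0f) fade = 0.0f;
    if (fade > 1.0f) fade = 1.0f;
    return fade;   /* 0 = no detail contribution, 1 = full contribution */
}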
In the general case, however, computing texture magnification can be difficult. You must consider the visible vertices of the textured surface, the texture coordinate scaling resulting from the current modelview and projection transformations, the current texture generation settings, and the values in the texture transformation matrix. One way around this is to add detail texture support to the OpenGL implementation. This is done in the detail texture extension GL_SGIS_detail_texture, supported on SGI hardware. This extension blends in the detail texture as a function of magnification, and allows the detail texture either to add to or modulate the base texture.
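A rough sketch of using the extension follows, assuming the tokens and entry points defined by the SGIS_detail_texture specification; the detail texture level, sizes, and mode shown here are illustrative choices, not recommendations from the extension itself.

/* Load the small detail image into the dedicated detail target. */
glTexImage2D(GL_DETAIL_TEXTURE_2D_SGIS, 0, GL_LUMINANCE, 256, 256, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, detail_image);

/* Tell the base texture which finer-than-base level the detail image
 * stands in for, and whether it adds to or modulates the base texels. */
glBindTexture(GL_TEXTURE_2D, base_tex);
glTexParameteri(GL_TEXTURE_2D, GL_DETAIL_TEXTURE_LEVEL_SGIS, -2);
glTexParameteri(GL_TEXTURE_2D, GL_DETAIL_TEXTURE_MODE_SGIS, GL_ADD);

/* Select a magnification filter that blends in the detail texture.
 * The extension also provides glDetailTexFuncSGIS to shape how the
 * detail contribution ramps up with magnification. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_DETAIL_SGIS);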