The SEDRIS Data Representation Model
APPENDIX A - Classes
Image Mapping Function

Class Name: Image Mapping Function

Superclass - <SEDRIS Abstract Base>

Subclasses

This DRM class is concrete and has no subclasses.

Definition

An instance of this DRM class specifies how the given <Image> is to be mapped onto a given textured object, including the mapping method, the projection, and how <Texture Coordinate> instances (if present) are to be treated if they fall outside the image space bounded by (0, 0) and (1, 1).

Primary Page in DRM Diagram:

Secondary Pages in DRM Diagram:

Example

  1. An <Image> is mapped to a <Polygon>. The <Polygon> has <Texture Coordinates> at 3 or more <Vertices>, and the <Image> is mapped to the <Polygon> using the <Texture Coordinates>. Each <Texture Coordinate> represents the location within the <Image> that lies on top of the corresponding <Vertex>.

  2. Two <Images> could be mapped to a single <Polygon>. One <Image> could be the <Image> displayed in most cases. The second <Image>, if its image_detail_mapping is SE_TRUE, would then be the imagery that is added to the <Polygon> when the eyepoint is so close to the <Polygon> that the main image texels are no longer useful for texturing it.

FAQs

What is the order of precedence for mapping an <Image> to a geometric object, such as a <Polygon>?

The <Texture Coordinate> instances on the attribute geometry always have the highest precedence. Next, if the attribute geometry has <Tack Point> instances, one of three cases applies.

  1. If there is only one <Tack Point>, the <Tack Point> is used to locate the imagery, and the scaling and rotation information from the <Image Mapping Function> is used.

  2. If two <Tack Point> instances are defined, the location and rotation of the imagery are taken from the <Tack Point> instances, but the scaling information is derived from the <Image Mapping Function>.

  3. If there are three or more <Tack Point> instances, the position and rotation information are derived from the first three <Tack Point> instances. Since <Tack Point> components are unordered, the complete set of supplied <Tack Point> components shall define an orthogonal image projection. This ensures that no matter which three <Tack Point> components are used, the same projection is derived for the imagery.

If the <Polygon> does not have <Texture Coordinates> or <Tack Points>, then <Image Anchors> in the <Image Mapping Function> define the location of the image on the attribute geometry. If none of the previous conditions are met, then the <Image Anchors> in the <Image> are used. If there are no <Texture Coordinates>, no <Tack Points>, no <Image Anchors> in the <Image Mapping Function>, and no anchor points in the <Image>, then the image mapping is undefined. It should be noted that the final two cases can be used to create non-orthogonal projections.
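The precedence order above can be sketched as a simple selection function. This is illustrative pseudocode only, not part of the SEDRIS API; the function and parameter names are invented for the example.

```python
# Illustrative sketch of the precedence order described above; the
# function and parameter names are invented, not SEDRIS API names.
def mapping_source(has_texture_coords, tack_point_count,
                   imf_has_anchor, image_has_anchor):
    """Return which mechanism locates the image on the geometry."""
    if has_texture_coords:
        return "texture coordinates"       # always highest precedence
    if tack_point_count >= 1:
        return "tack points"               # 1, 2, or 3+ point cases above
    if imf_has_anchor:
        return "image anchor in <Image Mapping Function>"
    if image_has_anchor:
        return "image anchor in <Image>"
    return "undefined"
```

Note that the one-, two-, and three-<Tack Point> cases all resolve to the same branch here; they differ only in how much of the location, rotation, and scaling information is taken from the <Tack Point> instances versus the <Image Mapping Function>.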

How can I create a non-orthogonal projection of texture onto a textured object, e.g. a <Polygon>?

Since all texture mappings that are a result of texture coordinate mapping at the polygon level are defined to be orthogonal projections, a non-orthogonal projection cannot be created with <Texture Coordinates>.

To create a non-orthogonal projection, the <Image Anchor> points shall be used. These points are defined in the currently scoped 'world' spatial reference frame and therefore are not required to be in the plane of the <Polygon>. There is no method to create a non-orthogonal projection in the local spatial reference frame of the <Model>.

As a data provider, I have a texture map that is applied to a textured object using a spherical projection. How do I store the centre and radius of projection in the <Image Mapping Function>?

Rather than using <Texture Coordinate> instances to tie the <Image Mapping Function> to the textured object, you shall use an <Image Anchor>. For a spherical projection, the <Image Anchor>'s <Locations> are interpreted as follows:

  1. origin (the centre of the sphere)
  2. direction (point on the north pole of the sphere)
  3. alignment (point at the equator of the sphere)

Given an <Image> that is to be applied to a given object using a cylindrical projection, how can an <Image Mapping Function> be used to store the centre of projection?

The <Image Mapping Function> for this case specifies an <Image Anchor>, while the object to be textured does not specify <Texture Coordinate> instances. For details of how the <Image Anchor> specifies the cylindrical projection, see <Image Anchor>.

When <Image Anchors> are used, how are the rotation and scale of the image mapping represented?

The rotation and scale of the image mapping can be derived from the <Locations> of the 3 corners of the <Image>, i.e., the <Location> components of the <Image Anchor>.
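As an illustrative sketch of that derivation, assume for the example that the three <Location> components correspond to the (0, 0), (1, 0), and (0, 1) corners of image space; the function name is invented and this is not SEDRIS API code.

```python
import math

# Hypothetical sketch: derive the image-space axes and scales from three
# anchor corner points (origin, s corner, t corner). Not SEDRIS API code.
def anchor_axes(origin, s_corner, t_corner):
    u = [a - b for a, b in zip(s_corner, origin)]  # image s axis in world space
    v = [a - b for a, b in zip(t_corner, origin)]  # image t axis in world space
    scale_s = math.sqrt(sum(c * c for c in u))     # edge length = s scale
    scale_t = math.sqrt(sum(c * c for c in v))     # edge length = t scale
    return scale_s, scale_t, u, v
```

The rotation of the mapping is carried by the directions of the two axis vectors; the scales are their lengths.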

When blending, how is a blend (luminance) value interpreted?

1.0 = 100% primary colour, no blend contribution
0.0 = no primary contribution, 100% blend colour
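In other words, the blend value linearly interpolates between the primary colour and the blend colour. A minimal sketch of that interpretation (the function name is invented for the example):

```python
def blended_colour(primary, blend_colour, blend):
    """blend = 1.0 gives pure primary; blend = 0.0 gives pure blend colour."""
    return tuple(blend * p + (1.0 - blend) * b
                 for p, b in zip(primary, blend_colour))
```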

Does an <Image Mapping Function> apply to both sides of a <Polygon>, or only the front side? What if I want to represent a wall with brick texture on one side and wallpaper texture on the other?

An <Image Mapping Function> applies only to the front side of a <Primitive Geometry>. In the wall example, you would need two <Polygons>: one with brick texture on its front side, facing the outside of the building, and the other with wallpaper, facing the inside of the building.

Any other representation would be lost in a rendering system that used back-face culling.

Constraints

Associated to (one-way)

Composed of (two-way)

Component of (two-way)

Inherited Field Elements

This class has no inherited field elements.

Field Elements

SE_Image_Mapping_Method image_mapping_method; (notes)
SE_Image_Wrap image_wrap_s; (notes)
SE_Image_Wrap image_wrap_t; (notes)
SE_Image_Projection_Type image_projection_type; (notes)
SE_Long_Float intensity_level; (notes)
SE_Long_Float gain; (notes)
SE_Boolean image_detail_mapping; (notes)

Notes

Composed of Notes


Presentation_Domain

 Note that <Presentation Domain> is optional
 only when the aggregating object(s) have
 one and only one <Image Mapping Function>
 component. If multiple <Image Mapping
 Functions> are present, each shall have a
 distinct <Presentation Domain> component.

Component of Notes


Aggregate_Feature

 This relationship is used to support attributes for derived objects.

 That is, the <Image Mapping Function> components are used only to
 specify texture mapping information for geometry that is derived
 from the <Aggregate Feature> by the consumer. An
 <Image Mapping Function> component of an <Aggregate Feature>
 shall use <Image Anchor> components to specify the mapping.

Primitive_Feature

 This relationship is used to support attributes for derived objects.

 That is, the <Image Mapping Function> components are used only to
 specify texture mapping information for geometry that is derived
 from the <Primitive Feature> by the consumer. An
 <Image Mapping Function> component of a <Primitive Feature>
 shall use <Image Anchor> components to specify the mapping.

Fields Notes


image_mapping_method

 For details on these methods, see SE_Image_Mapping_Method.

image_wrap_s

 This specifies whether to clamp or repeat the given <Image> instance
 in s.

image_wrap_t

 This specifies whether to clamp or repeat the given <Image> instance
 in t.
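The effect of the two wrap behaviours on a single texture coordinate can be sketched as follows. This is illustrative only; the mode strings stand in for the actual SE_Image_Wrap enumerants.

```python
def wrap(coord, mode):
    """Apply a clamp or repeat wrap to one texture coordinate."""
    if mode == "CLAMP":
        # Coordinates outside [0, 1] are pinned to the image edge.
        return min(max(coord, 0.0), 1.0)
    if mode == "REPEAT":
        # Only the fractional part is kept, so the image tiles.
        return coord % 1.0
    raise ValueError("unknown wrap mode: " + mode)
```

The same function would be applied independently in s (image_wrap_s) and in t (image_wrap_t).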

image_projection_type

 This specifies the type of projection to be used when applying the
 given <Image> instance to textured objects.

 1) If planar projection is specified, the following cases may apply:
     a) The object may have <Texture Coordinates> or <Tack Points>, in
        which case the <Image Mapping Functions and Texture Coordinates>
        rule will apply.
     b) The <Image Mapping Function> may have an <Image Anchor>.
     c) The <Image> may have an <Image Anchor>.

     See <Image Mapping Function> FAQs for how to interpret these cases.

 2) If cylindrical or spherical projection is specified, the object
    shall not have <Texture Coordinates> or <Tack Points>. Instead,
    either the <Image Mapping Function> or its <Image> shall have an
    <Image Anchor>.

intensity_level

 Value between 0.0 and 1.0.

 This indicates the percent contribution of this <Image Mapping Function>
 instance to the total effect on the textured object. For an <Image>
 with a colour coordinate component specified by its signature, multiply
 first, second, and third colour coordinate values within the <Image>'s
 texels by this value.

gain

 This value is to be added to each colour data value from the
 texel data of the <Image> to obtain the displayed image.
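Taken together with intensity_level, the per-texel adjustment can be sketched as below. This is an illustrative reading of the two field notes: the function name is invented, and applying the intensity scale before the gain offset is an assumption, not something the notes specify.

```python
def adjust_texel(colour, intensity_level, gain):
    # Assumed order: scale each colour coordinate by intensity_level
    # (its percent contribution), then add gain as an offset.
    return tuple(c * intensity_level + gain for c in colour)
```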

image_detail_mapping

 This indicates whether this <Image Mapping Function> instance is
 used to describe mapping of a "detail" image on the textured object.


Last updated: October 1, 2002 Copyright © 2002 SEDRIS™