When starting a new project you begin with the "fundamentals" (especially if you suffer from NIH syndrome): you build frameworks, toolkits and all kinds of fun stuff. What I mean is that I have not started working on my game yet :). The most recent of these tools is a Content Pipeline processor that renders a model to a texture. Before you say this is useless, keep reading. One day I was trying to render a terrain heightmap from Blender (there are tutorials
here and
here) and it hit me: why not render from the pipeline? That would streamline my workflow a little by eliminating the scene rendering step in Blender. All I needed to do was draw the model using an orthographic projection with a shader that outputs the model's height.
So here is the shader:
float4x4 World;
float4x4 View;
float4x4 Projection;

void Transform(float4 position : POSITION,
               out float4 outPosition : POSITION,
               out float3 outPixelPosition : TEXCOORD0) {
    float4x4 worldViewProjection = mul(mul(World, View), Projection);
    outPosition = mul(position, worldViewProjection);
    outPixelPosition = outPosition.xyz;
}

float4 Heightmap(float3 pixelPosition : TEXCOORD0) : COLOR {
    float h = 1 - pixelPosition.z;
    float4 height = float4(h, h, h, 1.0f);
    return height;
}

technique BasicShader {
    pass P0 {
        VertexShader = compile vs_2_0 Transform();
        PixelShader = compile ps_2_0 Heightmap();
    }
}
Pretty simple stuff: transform the vertex, then output its height as the color. The projection is orthographic, so Z is already between 0 and 1; no need to scale the value or anything like that.
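Just to make the "Z is already between 0 and 1" part concrete, here is a tiny sanity check of my own (not part of the processor). With the projection built the same way the processor does further down, a point d units in front of the camera ends up with a projected Z of d divided by the depth of the projection volume:

// Not part of the processor - just checking the depth range of an
// orthographic projection built as CreateOrthographic(w, h, 0, depth).
Matrix proj = Matrix.CreateOrthographic(10f, 10f, 0f, 50f);
// A point 20 units in front of the camera (view space looks down -Z).
Vector3 projected = Vector3.Transform(new Vector3(0f, 0f, -20f), proj);
// projected.Z == 20 / 50 == 0.4, already inside [0, 1];
// the pixel shader would then output 1 - 0.4 = 0.6 as the height.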
On to the content processor now (I'm not going to go through the motions of creating a content processor project, there is plenty of documentation on the internets); here is the class and its Process method. We create a graphics device and load the shader, then we set up a render target and draw the model with our shader, and finally we take the pixel data and return a texture content built from it.
namespace Vendazoa.Content.Pipeline {
    [ContentProcessor(DisplayName = "Orthographic Renderer - Vendazoa Toolkit")]
    [Description(
        "Renders the input model to a texture using an orthographic projection.")]
    public class OrthoRenderer : ContentProcessor<NodeContent, Texture2DContent> {
        public override Texture2DContent Process(NodeContent input,
            ContentProcessorContext context) {
            // The processor parameters (textureSize, surfaceFormat, effectName,
            // crop, cameraPosition) are omitted here; see the sketch right
            // after this listing.
            using (Form form = new Form()) {
                PresentationParameters pp = new PresentationParameters();
                pp.BackBufferWidth = (int)textureSize.X;
                pp.BackBufferHeight = (int)textureSize.Y;
                pp.BackBufferFormat = surfaceFormat;
                using (GraphicsDevice device = new GraphicsDevice(
                    GraphicsAdapter.DefaultAdapter,
                    DeviceType.Hardware,
                    form.Handle,
                    pp)) {
                    Effect effect = CreateEffect(
                        effectName,
                        Path.GetDirectoryName(input.Identity.SourceFilename),
                        device);
                    RenderTarget2D renderTarget = new RenderTarget2D(
                        device,
                        device.PresentationParameters.BackBufferWidth,
                        device.PresentationParameters.BackBufferHeight,
                        1,
                        device.DisplayMode.Format,
                        MultiSampleType.None,
                        1);
                    device.SetRenderTarget(0, renderTarget);
                    device.Clear(Color.Black);

                    ModelContent model = new ModelProcessor().Process(input, context);

                    BoundingBox modelBoundingBox = GetBoundingBox(
                        input.Children, new BoundingBox());
                    Vector3 sz = modelBoundingBox.Max - modelBoundingBox.Min;
                    float min = MathHelper.Min(sz.X, sz.Y) * (100 - crop) / 100;

                    Matrix proj = Matrix.CreateOrthographic(min, min, 0, sz.Z);
                    Matrix view = Matrix.CreateLookAt(
                        cameraPosition, Vector3.Zero, Vector3.Up);
                    Matrix world = Matrix.Identity;
                    effect.Parameters["Projection"].SetValue(proj);
                    effect.Parameters["View"].SetValue(view);
                    effect.Parameters["World"].SetValue(world);

                    Draw(device, effect, model);

                    device.SetRenderTarget(0, null);
                    Texture2D texture2D = renderTarget.GetTexture();
                    byte[] buf = new byte[texture2D.Height * texture2D.Width * 4];
                    texture2D.GetData(buf);
                    PixelBitmapContent<Color> pix = new PixelBitmapContent<Color>(
                        texture2D.Width, texture2D.Height);
                    pix.SetPixelData(buf);

                    Texture2DContent texture2DContent = new Texture2DContent();
                    texture2DContent.Mipmaps = pix;
                    return texture2DContent;
                }
            }
        }
    }
}
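Two pieces are not shown above. First, textureSize, surfaceFormat, effectName, crop and cameraPosition are the processor parameters you can see in the processor options screenshot further down; their declarations would look roughly like this (the attributes and default values are my placeholders, not the originals):

// Hypothetical parameter declarations - the names match the fields used in
// Process(), but the defaults and attributes are illustrative only.
private Vector2 textureSize = new Vector2(512, 512);
private SurfaceFormat surfaceFormat = SurfaceFormat.Color;
private String effectName = "Heightmap";
private float crop = 0;
private Vector3 cameraPosition = new Vector3(0, 0, 10);

[DisplayName("Texture Size")]
public Vector2 TextureSize {
    get { return textureSize; }
    set { textureSize = value; }
}

// ... the other parameters (SurfaceFormat, EffectName, Crop, CameraPosition)
// are exposed the same way.

Second, the GetBoundingBox helper is not shown either; a rough reconstruction, recursing through the node tree and merging every mesh position into the box, could look like this:

// Hypothetical reconstruction of GetBoundingBox - the original is not shown.
private BoundingBox GetBoundingBox(NodeContentCollection nodes, BoundingBox box) {
    foreach (NodeContent node in nodes) {
        MeshContent mesh = node as MeshContent;
        if (mesh != null) {
            foreach (Vector3 position in mesh.Positions) {
                Vector3 p = Vector3.Transform(position, mesh.AbsoluteTransform);
                box = BoundingBox.CreateMerged(box, new BoundingBox(p, p));
            }
        }
        box = GetBoundingBox(node.Children, box);
    }
    return box;
}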
The effect loading method employed below is only "slightly" INSANE; if anybody finds a better way of loading an effect in the content pipeline, let me know. I tried, I really tried, to load a compiled .xnb effect without using a ContentManager, but it did not seem possible without recreating a lot of code from the ContentReader. This method also does a fair amount of work: it first looks for an .xnb file, and if that is not found it tries to load and compile an .fx file, and finally it falls back to loading the effect from the assembly's internal resources.
private Effect CreateEffect(String effectName, String path,
    GraphicsDevice device) {
    GameServiceContainer gsc = new GameServiceContainer();
    gsc.AddService(
        typeof(IGraphicsDeviceService),
        new FakeGraphicsDeviceService(device));

    Effect effect = null;
    String effectPath = Path.Combine(path, effectName);
    if (File.Exists(effectPath + ".xnb")) {
        ContentManager contentManager = new ContentManager(gsc);
        effect = contentManager.Load<Effect>(effectPath);
    } else if (File.Exists(effectPath + ".fx")) {
        effectPath += ".fx";
        CompiledEffect compiledEffect = Effect.CompileEffectFromFile(
            effectPath,
            null,
            null,
            CompilerOptions.None,
            TargetPlatform.Windows);
        effect = new Effect(
            device,
            compiledEffect.GetEffectCode(),
            CompilerOptions.None,
            null);
    } else {
        ResourceManager rm = new ResourceManager(
            "VendazoaContentPipeline.Resources", GetType().Assembly);
        ResourceContentManager resourceContentManager =
            new ResourceContentManager(gsc, rm);
        effect = resourceContentManager.Load<Effect>(effectName);
    }
    return effect;
}
And here is the IGraphicsDeviceService implementation.
internal class FakeGraphicsDeviceService : IGraphicsDeviceService {
    private readonly GraphicsDevice graphicsDevice;

    public FakeGraphicsDeviceService(GraphicsDevice graphicsDevice) {
        this.graphicsDevice = graphicsDevice;
    }

    // The events are never raised; only the GraphicsDevice property is
    // needed by the ContentManager here.
    public event EventHandler DeviceCreated;
    public event EventHandler DeviceDisposing;
    public event EventHandler DeviceReset;
    public event EventHandler DeviceResetting;

    public GraphicsDevice GraphicsDevice {
        get { return graphicsDevice; }
    }
}
There is nothing special about the drawing method below, other than that it took me some time to figure out how to properly get the vertex and index buffers from a ModelContent, which is almost, but not quite, entirely unlike the ModelMesh.
private void Draw(GraphicsDevice device, Effect effect,
    ModelContent model) {
    effect.Begin();
    foreach (EffectPass pass in effect.CurrentTechnique.Passes) {
        pass.Begin();
        foreach (ModelMeshContent mesh in model.Meshes) {
            if (mesh.VertexBuffer == null)
                continue;
            VertexBuffer vertexBuffer = new VertexBuffer(
                device,
                typeof(byte),
                mesh.VertexBuffer.VertexData.Length,
                BufferUsage.None);
            vertexBuffer.SetData<byte>(mesh.VertexBuffer.VertexData);

            int[] ib = new int[mesh.IndexBuffer.Count];
            mesh.IndexBuffer.CopyTo(ib, 0);
            IndexBuffer indexBuffer = new IndexBuffer(
                device,
                sizeof(int) * mesh.IndexBuffer.Count,
                BufferUsage.None,
                IndexElementSize.ThirtyTwoBits);
            indexBuffer.SetData<int>(ib);
            device.Indices = indexBuffer;

            foreach (ModelMeshPartContent part in mesh.MeshParts) {
                VertexElement[] vertexElements = part.GetVertexDeclaration();
                VertexDeclaration vertexDeclaration = new VertexDeclaration(
                    device, vertexElements);
                device.VertexDeclaration = vertexDeclaration;
                device.Vertices[0].SetSource(
                    vertexBuffer,
                    part.StreamOffset,
                    VertexDeclaration.GetVertexStrideSize(vertexElements, 0));
                device.DrawIndexedPrimitives(
                    PrimitiveType.TriangleList,
                    part.BaseVertex,
                    0,
                    part.NumVertices,
                    part.StartIndex,
                    part.PrimitiveCount);
            }
        }
        pass.End();
    }
    effect.End();
}
And now the proof. This is the original mesh in Blender; notice the little axis icon in the bottom right corner: Z is up, so make sure the FBX exporter rotates the model to match the XNA coordinate system.

Now the processor options and the generated heightmap:


If you create a terrain using this heightmap and everything goes well, you should get something very similar to your original model.

"The resemblance is striking"
Here is a tip: if you experience a stair-stepping effect (you'll know it when you see it), try reducing the resolution of the image when creating the heightmap texture (see this
explanation).
I think this processor could be used for other things too, like rendering sprites or UI elements; pretty much anything that you would model in 3D and then export as an image is a good use case, as long as rendering quality is not terribly important.
That's it for now, I hope this was useful. Questions? Suggestions? Go ahead, ask and suggest...