Just a quick update.
Today and tonight I made quite a bit of progress with my Direct3D game engine framework. I tidied it up a lot, integrated all my other projects, and have begun separating them into separate classes so they can be used to create a game. For example, I separated my mesh animation code into an animated mesh class and a meshAllocation class: one loads, stores and plays the animation, while the other is the actual storage allocation for the frames themselves.
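A purely illustrative sketch of that split (not my real class interfaces, and the names are stand-ins): one class owns the raw frame storage, the other drives playback over it.

```cpp
#include <vector>

// Illustrative only: MeshAllocation owns the frame storage, AnimatedMesh
// drives playback over it. The real classes store per-frame vertex data.
class MeshAllocation {
public:
    void AddFrame(int frameData) { m_frames.push_back(frameData); } // stand-in payload
    int FrameCount() const { return (int)m_frames.size(); }
private:
    std::vector<int> m_frames; // the real class would hold per-frame vertex sets
};

class AnimatedMesh {
public:
    explicit AnimatedMesh(const MeshAllocation& alloc) : m_alloc(alloc), m_current(0) {}
    void Advance() { m_current = (m_current + 1) % m_alloc.FrameCount(); } // looped playback
    int CurrentFrame() const { return m_current; }
private:
    const MeshAllocation& m_alloc; // storage lives in MeshAllocation, not here
    int m_current;
};
```

The nice thing about the split is that several animated meshes can share one allocation without duplicating the frame data.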
Another advancement: I have now added a simple version of an inputDevice class, which is a member of the app itself so it can be called on to implement keyboard and gamepad functions far more easily, cutting down the duplication of similar keyboard code. Quite useful when creating games with an engine, I think.
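A hypothetical sketch of what such a wrapper might look like; the real class would poll DirectInput or the Win32 message queue rather than being fed states through SetKeyDown(), and the names are illustrative.

```cpp
#include <map>

// Minimal input-device sketch: the app owns one instance and game code
// queries IsKeyDown() instead of duplicating keyboard handling everywhere.
class InputDevice {
public:
    void SetKeyDown(int keyCode, bool down) { m_keys[keyCode] = down; } // stand-in for a poll
    bool IsKeyDown(int keyCode) const {
        std::map<int, bool>::const_iterator it = m_keys.find(keyCode);
        return it != m_keys.end() && it->second;
    }
private:
    std::map<int, bool> m_keys; // key code -> current pressed state
};
```

Centralising the state like this is also what makes it easy to add gamepad support later without touching the game code that asks "is this button down?".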
Finally, I have begun to work on basic display and sprite work for my game engine. This will be useful when making a game HUD hopefully.
Anyway, that's all for now.
Thursday 11 February 2010
Wednesday 10 February 2010
Semester 2 - General Update
As of semester 2 which we are now in the 3rd week of, I will be studying 4 modules. Applied Game Development, Interactive 3D Programming, Console Development, and Mobile Devices.
Applied Game Development
My team has now been decided and I'm on the pink team! We are using Gamebryo and have been given an initial prototype of a racing game, which I might add was rather broken when we received it, and a bit rubbish too. Our task is to develop a game based on some guidelines using this initial prototype. Working with my team in this kind of task-based environment will be interesting and will hopefully give me a taste of what the industry will be like when I get there.
Interactive 3D Programming
In this module we are learning some DirectX 9, and using the Gamebryo engine again, in order to develop our own small game engine. Currently we are expected to have some custom vertex shapes drawing on screen with some basic diffuse colouring and rotations/translations occurring. As well as this we are expected to have a cube drawn with normals attached so that basic lighting can be created using the built-in Gouraud shader. I have used directional, point and spot lights, as well as including specular lighting in my code, although at the moment this specular light is only compatible with these vertex buffer shapes. To get it working with my meshes I will have to write some extra code, otherwise it looks blotchy and a bit naff! Basic texturing is something we have touched on in lessons as well.
On top of this I have looked into more advanced lighting techniques in Direct3D, alpha blending techniques to get a glass like transparent effect and more advanced texturing methods, such as filtering calls. These just expand a bit on what I have already been asked to look at.
We have also looked at loading the built-in Windows meshes for now, but I have written the code to load .X files into my game engine, and also the code to load, store and run the animations that meshes may have attached to their .X files. This is something I had to go and research myself, and it took me a little while to get it working nicely.
Overall I am enjoying this module so far and I am going to continue building on my game engine when I can.
Console Development
This is a module that we began last semester as it's a double module. This semester so far we have looked at floating point and how it is constructed under the IEEE standard. I was tasked with creating my own floating-point type based on this standard and have done so to a basic level, although there is a lot more to it than first meets the eye, such as accuracy problems. It has taught me why floating-point values are much more computationally expensive than other storage types. We are now beginning to think about our project, which should be interesting.
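As a rough sketch of what the standard defines, a 32-bit float can be pulled apart into the sign, exponent and mantissa fields a hand-rolled float has to model (this is just an illustration, not my coursework code):

```cpp
#include <cstdint>
#include <cstring>

// Decompose an IEEE 754 single-precision float into its bit fields.
struct FloatParts {
    uint32_t sign;     // 1 bit
    uint32_t exponent; // 8 bits, biased by 127
    uint32_t mantissa; // 23 bits, with an implicit leading 1
};

FloatParts Decompose(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits); // reinterpret the 32 bits safely
    FloatParts p;
    p.sign     = bits >> 31;
    p.exponent = (bits >> 23) & 0xFF;
    p.mantissa = bits & 0x7FFFFF;
    return p;
}
```

Keeping arithmetic correct across those three fields (normalisation, rounding, the bias) is exactly where the accuracy problems mentioned above come from.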
Mobile Devices
This is my first experience with Java and, to be honest, it's much easier than any other language I have used so far. I guess the hard part is taking into account the much smaller resources available to mobile device applications. However, I have got to grips with this very quickly and intend to begin my main project soon, with the intention of having it completed much earlier than the others.
Conclusion
Overall, this semester is looking up: I have knuckled down much earlier than last semester and I am finding it a lot less tedious. Although last semester was tough, the things I learnt then are now making a lot more sense.
Unreal Total Conversion
As I mentioned in earlier blog posts, I was working towards making an Unreal Total Conversion based on a Famous Five book. I completed this before Christmas and kept meaning to write a quick post about it, but never got round to it.
This Total Conversion was about taking a game engine and creating a game it wasn't specifically designed for; in my case, an adventure/puzzle game. My game incorporates six characters: you control Dick, and the other characters are AI. The AI is quite extensive, and it took me a long time to get them working nearly properly, but sometimes they glitch because it's hard to get them to follow you without them running into places where they can get stuck. I put blocking volumes everywhere to try to prevent this, but sometimes it still happens. I also wrote my own chat system, through which I can talk to all my AI characters at certain points of the game. They run to their spots when you hit a trigger point, and the chats become available when they reach their positions. This was developed to allow as many chat responses as I liked, which I thought was pretty neat.
My game world was quite large and it took me a while to build in the Unreal Editor. I used quite a lot of special effects, including Unreal's lighting techniques and player shadows. I also incorporated a lot of meshes and terrain textures to make the world seem a little more realistic. There are also a fair few fire particle emitters around the level, which add a more sinister feel to the game.
Sound plays a big part in my game as it sets the mood for different areas. I added foley to all sorts of things such as player footsteps. I also used a fair bit of ambient sound which made the different areas seem a little more unique and set the scene a bit more. On top of this I also had different theme tunes which my brother kindly made for me.
I used a fair few cutscenes which are short but make each part of the game seem a bit more exciting. I also used a lot of different lights throughout my map such as trigger lights and flashing lights.
The things I am most proud of in my game are the HUD and the fact that there are so many quests. We were asked to make a short 10-minute level, but mine took me 18 minutes from beginning to end even when I knew exactly what to do. The HUD is interactive: there is a map for players to see where they and the AI are relative to the different level areas, and there is plenty of feedback when quests are gained, completed and handed in. It all made the game look a little more interesting, and I feel it probably made things make a little more sense.
Overall I think my game went well. There were quite a few bugs at the end because I ran out of time before the demo and had to take a few shortcuts in order to have a complete game. Disregarding this, my game was playable from beginning to end and it has a variety of mini games and objectives.
A short video will follow soon in another post.
Monday 1 February 2010
Renderer Video
Here, as I promised, is the final renderer video. It is a scripted demo, and for it I also added a moving near plane when the camera moves, which gives a cool effect.
3D Renderer - Final Submission
So, the first thing I have to say is that it's been a very long time since I've posted anything on here. The reason is that not long after my last post I really knuckled down on my projects and put this blog to one side. But it's a new term, and I have last term's projects ready to show. As you may have seen below, my last post showed my renderer up to the point of adding light types onto a built-in rasterizing class. I have added many features since then, as listed below. I won't be talking about the old features again, but I will briefly explain and show my new features.
Features already explained:
- Model Wireframes
- Backface Culling
- Polygon Sorting (Quicksort method)
- GDI+ rasterization (flat shading)
- Directional lights
- Point lights
- Spot Lights
- Specular lighting
New features added:
- Phong lighting model
- Custom rasterizer (flat shading)
- Gouraud shading
- Texturing
- Texture Perspective Correction
- Z-Buffer
- Clipping/ extensive Culling
- Camera frustum
- Concave frustum lens effect
- Model animation
- Vert Manipulation
- Extensive Optimisation
Phong Lighting Model
This isn't something I will show, because it was already seen in my last post. All I have done is encapsulate the ambient, diffuse and specular light into a separate light class so that each light can control its own components.
Custom rasterizer (flat shading)
This is an important step for 3D rendering, as it's not really possible to go any further until you get to grips with the ideas of interpolation and scan lines over polygons. Up until now I have used the built-in GDI+ rasterizer, but I won't be taking that any further, because it is interpolation that opens the gateway to a lot of features later on.
What I did was take each polygon in my model and save its vertices into temporary verts. I then calculated the highest and lowest in each polygon so later stages wouldn't need to search through the model's whole vert set, which is quite good for optimisation. The idea behind interpolation is that it takes two pre-specified points and calculates all the in-between values on a per-pixel basis. In the case of a custom rasterizer for flat shading, it takes each edge of the polygon (three of them, as they are triangles) and uses the two (x, y) components that join each edge to calculate the in-between values. This gives a triangle, but it isn't filled in yet. This is where scan lines come into play. Scan lines are simply the lines that go between the edges of a polygon, running from the min X value to the max X. There is a line for each Y value between the highest and lowest vertices on the screen. The calculations are actually quite simple, and once the concept is learnt it makes a lot of sense why it works. I calculated the scan lines in an almost identical way to the edges of the polygon, except I replaced the vert points with the newly calculated min/max X values along the polygon edges.
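A minimal sketch of how the edge bounds might be computed (the names here are illustrative, not my actual engine code): for one polygon edge, visit every integer Y between the endpoints, interpolate X, and accumulate the min/max X each scan line needs.

```cpp
#include <algorithm>
#include <vector>

// Walk one polygon edge and record the interpolated X at every scan line,
// building up the per-Y min/max bounds the fill pass will use.
void InterpolateEdge(int x0, int y0, int x1, int y1,
                     std::vector<int>& minX, std::vector<int>& maxX) {
    if (y0 > y1) { std::swap(x0, x1); std::swap(y0, y1); } // always walk downwards
    for (int y = y0; y <= y1; ++y) {
        // Linear interpolation of X for this Y along the edge.
        float t = (y1 == y0) ? 0.0f : float(y - y0) / float(y1 - y0);
        int x = int(x0 + t * (x1 - x0) + 0.5f);
        minX[y] = std::min(minX[y], x);
        maxX[y] = std::max(maxX[y], x);
    }
}
```

Flat shading then just fills from minX[y] to maxX[y] on each scan line with the polygon's single colour.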
When I first implemented this it was rather slow, but with some extensive optimisation I managed to get it a little faster.
Gouraud shading
This is really just an extension of the custom rasterizer explained before, but it is quite tricky to get working properly. In effect, Gouraud shading is quite fast in comparison to the likes of Phong shading (a fully per-pixel shading method), but gives much better results than flat shading.
In terms of implementing it, all it really involves is the same interpolation method used to create my custom rasterizer, except that it also interpolates the red, green and blue values along the edges and then along the scan lines. The tricky part is getting all the float-to-integer conversions right, otherwise the model can do some fairly odd things, such as flickering.
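A sketch of one Gouraud step, with illustrative names: interpolate R, G and B from the left-edge colour to the right-edge colour across a span. Keeping the running values as floats and converting to int only when the pixel is written is what avoids the flickering mentioned above.

```cpp
// Linearly interpolate a colour between two endpoints; t runs 0..1
// across the edge or scan line. Values stay as floats until pixel write.
struct Colour { float r, g, b; };

Colour LerpColour(const Colour& a, const Colour& b, float t) {
    Colour c;
    c.r = a.r + t * (b.r - a.r);
    c.g = a.g + t * (b.g - a.g);
    c.b = a.b + t * (b.b - a.b);
    return c;
}
```

The same helper works for both the edges and the scan lines; only the endpoints change.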
My custom rasterizer:
Gouraud Shading:
Texturing
This is self-explanatory from the name, and it adds a new concept to 3D graphics rendering. Texturing is just placing an image over a model, like a skin. In 3D graphics the coordinates (U, V) are used to determine locations on a texture; they are just replacements for (X, Y), which we already use. Most of texturing is identical to Gouraud shading, but it requires a small change: you need a way to access the UV points and place them at the right place on the model.
When loading the model from a file, it checks whether the texture file is 256 x 256 (the standard MD2 texture size). It then loads all the possible colours into a texture palette memory location. I also had to modify my vertex class so it could store UV coordinates relating to the model's verts in their original state.
Calculating the UV points is the same as calculating Gouraud shading, except that the UV points are interpolated. The only other difference is that texturing then has to use these interpolated values for each scan line and access the palette using the calculation (textureWidth * V) + U. This determines the point on the texture and returns a value which can be used to access the colour palette stored earlier. If all is correct, this maps the texture onto the model, giving a textured model.
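The texel fetch itself is tiny; here is a sketch with illustrative names, treating the palette as the flat array indexed by (textureWidth * V) + U described above:

```cpp
#include <vector>

// Row-major lookup into a flat texture array: index = width * v + u.
unsigned char SampleTexture(const std::vector<unsigned char>& palette,
                            int u, int v, int textureWidth) {
    return palette[textureWidth * v + u];
}
```

The returned value is then used as the index into the colour palette loaded from the texture file.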
Texture Perspective Correction
This is a simple extension to texturing that stops textures distorting as they wrap around the sides of models. It uses simple calculations that combine the UV values with the depth value: the main concept is interpolating U/Z, V/Z and 1/Z in order to calculate the corrected values. The rest of texturing is really the same.
My renderer with textured models is shown below:
with light modulation:
Z-Buffer
This again is the same process as Gouraud shading, except that it takes a 2D array the size of the screen (one value for each pixel) and saves the depths of those pixels. Once a model value has been assigned to a pixel, its depth can be saved in the array and used later when other models could potentially be drawn there in the same rendering loop. If the new potential value is closer to the camera than the old one, it replaces the old value and is drawn as usual; if it's further away than the original value, it is discarded and not drawn.
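A minimal sketch of that depth test (illustrative names, flat array instead of 2D): the buffer starts at a 'far' value and a pixel only writes when its depth is closer than what is already stored.

```cpp
#include <vector>

// Depth test and conditional write: returns true when the caller should
// draw the pixel, false when it is occluded by something nearer.
bool DepthTestAndSet(std::vector<float>& zbuffer, int pixelIndex, float depth) {
    if (depth < zbuffer[pixelIndex]) { // closer to the camera wins
        zbuffer[pixelIndex] = depth;
        return true;
    }
    return false;                      // further away: discard
}
```

The buffer is reset to the far value at the start of every rendering loop, which is why the order the models are drawn in stops mattering.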
This is particularly useful when rendering multiple objects, as it shows objects in order of their depth.
Camera frustum
This is simply a frustum-shaped area located in front of the camera that determines where models can be processed. If polygons fall outside it, they are culled and not processed. It is a neat but effective optimisation method and cuts out the 'behind the camera' problem when rendering polygons.
Clipping/ extensive Culling
For this I created culling based on the edges of the screen and cut out polygon calculations throughout my rendering pipeline in order to increase my frame rate. I also implemented clipping on the near plane of my camera frustum, which gives a much smoother look to objects being discarded by the frustum. I have implemented both the one-vertex and two-vertex outside-the-plane methods, which account for all possible clipping situations. If all three points are outside the plane, the polygon simply isn't processed.
The two-vertex-outside method simply takes the two points and moves them onto the plane, making the polygon drawn a little smaller.
The one-vertex-outside method, however, is a little trickier: it involves moving the one point onto the plane and then creating a new polygon from this point to account for the gap created.
Concave frustum lens effect
This is a neat little effect I stumbled across that begins to darken and stretch the models as they reach the edge of the camera frustum. This gives a kind of concave effect to the lens of my camera.
Model animation
For this I had to alter the .MD2 loader in order to access, store and load the animation keyframes. Before, it was only storing the first one, so I had to make it access all of them. After this I just had to increment the keyframe over time in order to rasterize the new set of points associated with each keyframe, giving the model an animation. With more time, interpolation could be used here to create smoother animations, but for the purposes of my renderer it works OK as it is.
Vert Manipulation
This I created by accident, but it looks cool. At the end of my renderer demo I made it so Cloud from FF7 explodes every other animation loop and looks all messed up, then goes back to normal in the following loop. I thought it was quite funny when I slowed down the sword's animation cycle so it was out of synchronisation with Cloud's animation cycle. It kind of looks like he's being hit and mashed up by his own sword, but hey, for all intents and purposes, a more exciting finish to the demo.
A final demo video will be up in a separate post soon.
Monday 9 November 2009
Update - 3D Renderer
So to begin with I haven't posted in a while so this will be a long post!
As it is, my 3D renderer has come along quite nicely, especially this week, where I reckon I've spent somewhere around 50 hours on it, but I am quite pleased with the results so far.
Intro to 3D Graphics was probably the module I was dreading the most to begin with, because I didn't know how I would cope with it. However, now we are a few weeks in I am a lot more confident than I was, especially now I have a renderer up and running with some nice added elements. This module has also shown me the purpose of all those maths lectures last year! Without matrices and vectors, say, we wouldn't be able to create a camera class! This has also led me to understand much better why we need the maths, and its uses in 3D graphics.
Before I could render anything on screen I had to create Vector and Matrix classes with all their mathematical operations in order to be able to create a camera class, where I build the camera's view matrix, projection matrix and screen matrix. There are many transforms that have to be applied in order to get a model rendering on screen.
It begins by loading the local verts of the model into world space, where I transform them using the combination of my XYZ rotation matrices and the translation matrix, along with a scaling matrix in case I want to scale the model at all. Next I take these transformed verts and transform them again using a flip matrix so the model shows the right way up; otherwise it shows upside down. It is after this transform, and before the view matrix, that I place my vert Sort() function, my FindPolygonNormal() and my BackCulling() function. This happens in world space because it saves having to transform all the verts in the BackCulling() function, which makes things more efficient later on, especially when we are trying to optimise our renderers!
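As a simplified, stand-alone illustration of the kind of test a back-culling function performs (not my actual BackCulling() code): a polygon faces away when its normal points in the same direction as the vector from the camera to the polygon.

```cpp
// Back-face test via the dot product of the polygon normal and the
// camera-to-polygon vector: a non-negative dot means it faces away.
struct V3 { float x, y, z; };

float Dot(const V3& a, const V3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

bool IsBackFacing(const V3& normal, const V3& cameraToPolygon) {
    return Dot(normal, cameraToPolygon) >= 0.0f; // facing away: cull it
}
```

Doing this in world space, before the view and projection transforms, is what saves transforming verts that were never going to be drawn.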
After this point I calculate my XYZ rotation matrix using the inverse XYZ matrices and then multiply this by my inverse translation matrix, which gives me my final view matrix. This matrix puts the object in the view of the camera as the origin so other transforms can be applied. The next of these is the projection matrix, which gives the camera a field of view, distance and width of view. I have also applied an aspect ratio of 1.33x to the Y scaler component of this matrix, because when converting later to a rectangular monitor it stops my image being squashed on the Y axis.
Now, before I can convert my transformed verts onto the screen, one key thing has to be done to them: de-homogenising. This is the process that divides the verts by their 'w' component, which we add in order to give them a position. Doing this brings both the 'w' and 'z' values to 1.0, which then allows us to bring the model from 3D space, with X, Y and Z components, to 2D space with just an X and Y component. This is where the screen transform matrix comes in. It transforms the verts by half the screen width and height in both the X and Y scaler and translation components. However, the Y scaler component must be inverted, because the Y axis on a screen points downwards; this allows us to move our model into 2D screen space so we can see it.
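Those last two steps can be sketched in a few lines (illustrative names; this assumes x and y land in the -1..1 range after the divide): divide by w to de-homogenise, then scale and translate onto pixel coordinates, flipping Y because screen Y runs downwards.

```cpp
// De-homogenise a vert and map it into pixel coordinates.
void ToScreen(float x, float y, float w,
              int screenW, int screenH, float& sx, float& sy) {
    x /= w;                                       // de-homogenise
    y /= w;
    sx =  x * (screenW * 0.5f) + screenW * 0.5f;  // scale + translate
    sy = -y * (screenH * 0.5f) + screenH * 0.5f;  // inverted Y scaler
}
```

So the origin (0, 0) lands in the centre of the screen, and positive Y in camera space moves the point up the screen, as you'd expect.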
Once this has been applied, all thats left to be done is to draw a background using the Gdiplus Clear() function with a Gdiplus::Color and then rasterize the model on top. My 3D renderer began with a wire frame .MD2 format model, namely in these examples a South Park Cartman model which I obtained from the net.
So below shows the wire frame model I got after using all the camera transforms stated above.
As you can also see, my renderer has back-face culling, which cuts out the verts behind the model, giving a much more solid look to the object being rasterized. However, the arms aren't affected by this at the moment because of the way they are attached to the model, so the red lines on the back arm show that it is currently behind the object. I have used the built-in C++ quicksort method to sort the verts in order of their Z-axis values so it draws the furthest-away verts first, giving the effect of the further-away verts being behind the ones in front. This makes the object look much more realistic.
The next stage was to fill in the polygons of my model and make the image a solid colour. This is shown below, where I used the Gdiplus FillPolygon() function.
The next stage for me was to add a directional light to my object. Originally this would have been just directional light, but in my updated renderer the ambient, diffuse and specular components are all attached to each created light. I do have the option to cut out the ambient and specular light in order to show just the diffuse light, but it looks much nicer with all the components on. Of course, at the moment this uses flat shading, so it gives a very polygonal look to the object; this is something I will be upgrading over the coming weeks to Gouraud and possibly Phong shading. Below is an image showing my directional light on the object from the left-hand side, using an intensity of 2x with its current light factors: ambient light (R = 10, G = 10, B = 10), diffuse light (R = 50, G = 50, B = 100) and specular light (R = 200, G = 100, B = 50) with a shine factor of 5.
As can be seen, the light is quite full on the left side, as this is where the light is located. Directional lights don't take into account attenuation over distance; the reason shading is still visible is that the lighting uses the polygon normals, so the values differ and give different resultant light colours. The specular light can also be seen, though only vaguely, because the specular light factor is quite low. This is deliberate, so I can show the effects of multiple lights later.
The next type of light I implemented was the point light, which does attenuate over distance, meaning it loses intensity as it travels through space. This gives a different effect from the directional light seen previously. Below is an image showing my point light on the object from the right-hand side, using an intensity of 2x with its current light factors: ambient light (R = 15, G = 15, B = 15), diffuse light (R = 60, G = 30, B = 160) and specular light (R = 200, G = 100, B = 50) with a shine factor of 5.
As can be seen, the light is quite strong at the top but fades quickly. This example isn't brilliant because the point light is quite close to the object; however, it shows that polygons further away are slightly darker than those close up, since the specular light hasn't changed from the directional light before.
The final type of light I have implemented is the spot light. This light was a pain to get working and took me a while, and a lot of frustration, but I got there in the end. It works just like a point light except that it contains the light within a cone using a direction vector. This cone also specifies the intensity of the light, which is at its most intense when the sample point is nearest the direction vector. Below is an image showing my spot light on the object from the camera viewpoint, using an intensity of 2x with its current light factors: ambient light (R = 60, G = 20, B = 100) and diffuse light (R = 0, G = 180, B = 0). For this example I have disabled my specular light so it is easier to see the fade-out of the spotlight.
As can be seen, the light is confined to the centre of the model, as this is where I have set my direction vector, and it is at its most intense at the centre of the light, which is where the direction vector points. If I were to decrease my falloff exponent it would make the spotlight wider, leaving less of the model in shadow, and vice versa if I were to increase it. I could also increase the cone exponent, which would make the light more intense towards the middle of the cone and fade off quicker, giving the same size spotlight but with a more intense centre.
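An illustrative spotlight falloff, not my exact formula: intensity scales with the cosine of the angle between the light's direction vector and the vector to the sample point, raised to the cone exponent, so a larger exponent concentrates the light towards the centre of the cone.

```cpp
#include <cmath>

// Cone falloff factor: cosAngle is the dot product of the (normalised)
// light direction and sample direction; a larger exponent narrows the
// bright centre and makes the edges fade off more quickly.
float SpotFactor(float cosAngle, float coneExponent) {
    if (cosAngle <= 0.0f) return 0.0f; // sample is outside the half-space
    return std::pow(cosAngle, coneExponent);
}
```

The resulting factor just multiplies the point-light intensity, which is why the spotlight otherwise behaves exactly like a point light.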
To finish off, I will show all three previous lights together with the same colour components, but with their intensities set to (Directional - 1x, Point - 0.8x, Spot - 0.5x). The alpha of the image is also 120, to make it semi-transparent.
It shows what only a few lights can do and the sort of effects you can get by adjusting just a few properties of each light. Of course, this is still only flat shading, so fairly computationally cheap compared to present techniques, but even with this shading technique it looks pretty good with some lighting.
For now that's all, but an update shall come a lot quicker this time as the renderer improves.
Tuesday 20 October 2009
UnrealScript can be FRUSTRATING!
I've just managed to complete this week's tutorial on UnrealScript, and I'd been sitting here for over an hour trying to work out why I couldn't change the background image from what I originally set it to, only to find I'd made something a capital letter rather than lower case. Argh, the joys.
However, it's all up and running now, so I'm happy about that. I've been trying to mess around with a few bits here and there, but I'll probably leave this until tomorrow or Wednesday, I guess.
I've looked into quite a lot of the GUI class source code, and just by doing this I've already got to grips with a fair bit of the code I'm seeing, which is a relief. Examples of code I'm now quite happy to see are the variable declarations:
and just the general way functions are written and linked together, which is very much like C#, a language I'm already familiar with:
In the coming weeks I'm sure the pain will become even worse and even more frustrating! but I am enjoying the challenge, so bring it on Unreal!