Saturday, January 30, 2010

Agent Rotation: Groundwork

After I figured out the two-dimensional rotation math, I changed my agent's "direction" arrow.  Initially, it was a collection of lines (GL_LINES) rotated around the z axis by the required angle, using OpenGL's glRotated function.  Rather than using my home-grown rotation math to determine the new vertices for each endpoint of the arrow's lines, I decided to keep it simple.

I still need to update the direction vector manually.  However, I can use OpenGL's functions to rotate as many points at a time as I require (that is, the entire agent 3D object).  To verify that my vector rotation and the OpenGL rotation stayed in sync, I decided to draw a representation of the direction vector in world coordinate space (translated to the origin of the agent object) as a simple line, and to draw a more complex agent object in its own coordinate space (rotated with OpenGL).  Here are the steps for each frame (a code sketch follows the list):
  • Initialize the canvas
  • Translate to set the camera's position
  • Draw the board, using world coordinates
  • Push the OpenGL transformation matrix (this allows us to draw each agent individually, without regard to the others)
  • For each agent...

    • Translate to the agent's position
    • Push the transformation matrix again (this allows us to draw the direction vector after the object itself has been drawn)
    • Rotate around the z axis, to put OpenGL into the agent's coordinate space
    • Draw the agent object
    • Pop the previously-pushed transformation matrix, to return to world coordinates (centered at the origin of the current agent)
    • Draw the direction vector
    • Pop the transformation matrix again, to center once again at the world's origin (permitting the next Translate to move the coordinate system center to the origin of the next agent)
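In code, the sequence looks something like the sketch below.  The drawing helpers and the agent fields are hypothetical names, not my actual code (note that the push/pop pair wraps each agent, per the steps above):

void GLWidget::paintGL()
{
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );    // initialize the canvas
    glLoadIdentity();
    glTranslated( -mCamera.x, -mCamera.y, -mCamera.z );      // set the camera's position

    drawBoard();    // world coordinates

    for( size_t i = 0; i < mAgents.size(); ++i )
    {
        const Agent &agent = mAgents[ i ];

        glPushMatrix();                                      // isolate this agent's transforms
        glTranslated( agent.x, agent.y, agent.z );           // move to the agent's origin
        glPushMatrix();                                      // remember world-aligned axes here
        glRotated( agent.angleDegrees, 0.0, 0.0, 1.0 );      // enter the agent's coordinate space
        drawAgent( agent );
        glPopMatrix();                                       // back to world axes at the agent's origin
        drawDirectionVector( agent );
        glPopMatrix();                                       // back to the world origin
    }
}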
What I initially found was that the direction vector and the object did not stay aligned.  The reason is that OpenGL's rotation functions take their angles in degrees, while the standard library sin() and cos() functions work in radians.  I converted the angle in my own rotation equations from degrees into radians, to keep my OpenGL code as clean as possible.  That conversion ensured that my OpenGL-rotated object remains aligned with the object I rotated myself.
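The fix is a one-line conversion before calling the standard functions; something like:

// Angle arrives in degrees (to match glRotated); convert for sin()/cos().
double radians = degrees * M_PI / 180.0;    // M_PI from <cmath>; MSVC may need _USE_MATH_DEFINES
double newX = x * cos( radians ) - y * sin( radians );
double newY = x * sin( radians ) + y * cos( radians );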

Tuesday, January 26, 2010

3D Rotation: Theory (Part II)

In the previous post, we found the equations for aligning an arbitrary vector with the z axis, in order to perform a rotation around that vector.  What we failed to determine, by the end of the discussion, was how to return the scene to its original coordinate space, so that only the object we rotated around the axis vector appeared to move.  We mentioned "undoing" the axis-alignment rotations, but didn't go into detail.

It turns out to be pretty simple.  Recall that we first made a clockwise rotation around the z axis, followed by a clockwise rotation around the y axis.  To return the scene to its original coordinate space, we must first make a counter-clockwise rotation around the y axis, followed by a counter-clockwise rotation around the z axis.  Intuitively, the rotation angles are the same as those used for the axis-alignment rotations.  That is, to "undo" a rotation around the z axis, you simply rotate through the original angle in the opposite direction.

It is possible to calculate new values of (x, y, z) for every point we want to rotate around the vector, using combinations of (and variations on) the equations from the previous post.  Each point's coordinates would have to be run through every equation in turn, however--unless we could find some way to combine all of the equations into one, and apply that single result to each point we want to rotate.

I mentioned before that the equations can be represented by matrices.  It turns out that matrices are ideal for these kinds of chained transformations.  For our purposes, we could use 3x3 matrices for all of our rotational needs.  However, if we wished to represent any translation transformations in matrix form, a 3x3 matrix is not sufficient.  Consider the z component of a translation: z′ = z + zt.  A 3x3 matrix can only produce combinations of x, y and z, so there is no way to represent the constant offset zt.  With a 4x4 matrix, however, the translation is easy to find:

| 1  0  0  xt |   | x |   | x + xt |
| 0  1  0  yt | ∙ | y | = | y + yt |
| 0  0  1  zt |   | z |   | z + zt |
| 0  0  0  1  |   | 1 |   |   1    |

That bottom row deals with something called homogeneous coordinates, which basically means we're representing our three-dimensional world in a four-dimensional space, with a fixed fourth coordinate (much like how we moved our rotations into a two-dimensional problem within three-dimensional space).  For our purposes, we'll avoid tweaking that row as much as possible.

It is also easy to find the matrices for rotation around each of the standard axes, using the two-dimensional rotation equations we found previously.  For the z axis:

|  cos(θ)  -sin(θ)   0   0 |
|  sin(θ)   cos(θ)   0   0 |
|    0        0      1   0 |
|    0        0      0   1 |

We use the same equations for rotation around the y axis, substituting z for x and x for y:

|  cos(θ)    0    sin(θ)   0 |
|    0       1      0      0 |
| -sin(θ)    0    cos(θ)   0 |
|    0       0      0      1 |

The same technique applies to rotation around the x axis, substituting y for x and z for y:

|  1     0        0      0 |
|  0   cos(θ)  -sin(θ)   0 |
|  0   sin(θ)   cos(θ)   0 |
|  0     0        0      1 |
Remember that, in order to perform a clockwise rotation (which is how we'll be moving our axis vector), the sign of each of the sin terms will need to be switched.

In the previous post, we found the angles of rotation (in terms of u, v and w) required to align the axis vector with the z axis.  With a minor variation, we can plug those values into the (inverted) rotation matrices:

Here's the variation:

Now that we've got our matrices defined, we need to understand how to use them.  If Rz is the matrix representing the rotation around the z axis, and P is a point we're rotating (represented as a 4x1 matrix), then RzP is the product of the two matrices (resulting in P′--our new point).  We can continue the chain, to rotate around the y axis (using the rotation matrix around y, Ry): RyP′ = Ry(RzP).

One property of matrix multiplication is that it is associative, so Ry(RzP) = (RyRz)P.

The matrix multiplication RyRz is the transformation to align our axis vector with the z axis.  We can take the result of that transformation, and add on the rotation about the axis vector, R: (RRyRz)P.

Once again, we have reached the point at which we need to "undo" our initial rotation transformations, in order to return the scene to its original coordinate space.  The last rotation we performed which we want to undo is the rotation around the y axis, Ry.  The transformation to undo that rotation we'll call Ry⁻¹.  Appending that operation gives us (Ry⁻¹RRyRz)P.  Likewise, the transformation to undo the original rotation we'll call Rz⁻¹, so (Rz⁻¹Ry⁻¹RRyRz)P.  The full transformation we need to apply to each point we're rotating is therefore Rz⁻¹Ry⁻¹RRyRz.

We're using a common matrix notation to denote an inverse transformation.  In the case of a rotation, an inverse is simply the rotation through the same angle, though in the opposite direction (so we switch the signs of each sine in our matrix).  It turns out that the matrix for the opposite rotation is the inverse matrix for the original rotation.  Mathematically, A⁻¹A = AA⁻¹ = I, where A is an invertible square matrix, and I is the identity matrix.  There are many matrix tutorials online, so I won't bother to explain these terms.

For our inverted rotation matrices, Rz⁻¹ and Ry⁻¹, we can plug in the values for the rotation angles, using a counter-clockwise rotation (remember we used a clockwise rotation above).

Once we have all of our matrices figured out, we can multiply them all together (in sequence--order is important) to obtain a single transformation matrix, T.  We can multiply each point we wish to rotate (every vertex of a complex object, for example) by T to obtain their new positions in the original coordinate space.
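To make this concrete, here's a minimal sketch of building T in code.  The Mat4 type, multiply, and axisRotationMatrix are hypothetical stand-ins rather than my actual implementation, and the angles are assumed to be in radians:

#include <cmath>

struct Mat4
{
    double m[4][4];    // row-major 4x4 matrix
};

// r = a∙b: applies b first, then a (matching the notation above).
Mat4 multiply( const Mat4 &a, const Mat4 &b )
{
    Mat4 r = {};
    for( int i = 0; i < 4; ++i )
        for( int j = 0; j < 4; ++j )
            for( int k = 0; k < 4; ++k )
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

// Counter-clockwise rotation around the z axis (angle in radians).
Mat4 rotZ( double t )
{
    Mat4 r = { { { std::cos(t), -std::sin(t), 0, 0 },
                 { std::sin(t),  std::cos(t), 0, 0 },
                 {           0,            0, 1, 0 },
                 {           0,            0, 0, 1 } } };
    return r;
}

// Counter-clockwise rotation around the y axis (angle in radians).
Mat4 rotY( double t )
{
    Mat4 r = { { {  std::cos(t), 0, std::sin(t), 0 },
                 {            0, 1,           0, 0 },
                 { -std::sin(t), 0, std::cos(t), 0 },
                 {            0, 0,           0, 1 } } };
    return r;
}

// T = Rz⁻¹∙Ry⁻¹∙R∙Ry∙Rz, where the alignment rotations are clockwise
// (negative angles) and their inverses are counter-clockwise.
Mat4 axisRotationMatrix( double alpha, double beta, double theta )
{
    Mat4 t = rotZ( -alpha );            // align: clockwise around z
    t = multiply( rotY( -beta ), t );   // align: clockwise around y
    t = multiply( rotZ( theta ), t );   // the rotation itself, now around z
    t = multiply( rotY( beta ), t );    // undo the y alignment
    t = multiply( rotZ( alpha ), t );   // undo the z alignment
    return t;
}

Each vertex is then multiplied by the single resulting matrix, instead of being run through five separate rotations.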

Wednesday, January 20, 2010

3D Rotation: Theory (Part I)

Rotating a set of points around an arbitrary axis in three dimensions takes a few steps, so the concept can be a little intimidating at first.  The main idea, though, is that the scene can be transformed so that the arbitrary axis becomes the z-axis.  At that point, the rotation can be treated as a two-dimensional rotation (with the z-component of the rotating points remaining unchanged).  It then suffices to undo the original axis-alignment transformation, to put the arbitrary axis back in its original location.

If your arbitrary axis is part of an orthogonal set (in three dimensions, that is three axes at right angles to one another), I believe you can perform a simple change-of-basis transformation.  I haven't done the math on that, though, since it's a special case of what I'm attempting generally.

The two-dimensional rotation equations can be applied in any of the three basic planes (xy, xz, yz).  Given the x, y and z components of the axis vector (which we'll call u, v and w, respectively), we can determine the angles to use in the 2D rotations.  We'll first rotate the scene around the z axis so that the axis vector lies in the xz plane.  Next, we'll rotate the scene around the y axis so that the axis vector lies along the positive z axis.

The angle of rotation around the z axis (α) is found in terms of u and v (the lengths of the components orthogonal to the z axis):

cos(α) = u / √(u² + v²)
sin(α) = v / √(u² + v²)

We can apply the angle (α) to the two-dimensional rotation equations.  We must be careful to note, however, that the rotation in this case is clockwise (the previously-derived equations are for a counter-clockwise rotation).  We can simply use a negative angle, noting that cos(-θ) = cos(θ) and sin(-θ) = -sin(θ):

x′ = x∙cos(α) + y∙sin(α)
y′ = -x∙sin(α) + y∙cos(α)

These equations, like those for the two-dimensional rotation, can be represented in matrix form.

The next step is to rotate the resulting vector around the y axis, until it lies along the z axis.  Looking from the positive y axis, the rotation onto the z axis is another clockwise rotation.  The components we'll use to find the angle of rotation (β) are a little more complicated this time.  Along the z axis is w (simple enough), but along the x axis, the value is the length of the original vector's projection into the xy plane (√(u² + v²)).

As before, we find the angle of rotation around the axis:

cos(β) = w / √(u² + v² + w²)
sin(β) = √(u² + v²) / √(u² + v² + w²)

Once again, we apply this angle to the two-dimensional rotation equations, noting that we're using different axes.  What was originally x is now z, and y has become x:

z′ = z∙cos(β) + x∙sin(β)
x′ = -z∙sin(β) + x∙cos(β)

Applying these equations, the axis of rotation becomes the new z axis.  As stated previously, the problem can be treated at this point like a two-dimensional rotation in the xy plane.  Afterwards, the transformations used to rotate the axis vector must be undone.  Matrix math can help significantly with that, and will be explored in the next post.
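To ground the derivation, here's a small sketch that computes the sines and cosines of both alignment angles directly from the axis components (u, v, w).  The function is a hypothetical helper; note the degenerate case where the axis already lies along the z axis:

#include <cmath>

// Compute sin/cos of the two alignment angles for axis (u, v, w).
// alpha rotates the axis into the xz plane; beta rotates it onto the z axis.
void alignmentAngles( double u, double v, double w,
                      double &cosA, double &sinA,
                      double &cosB, double &sinB )
{
    double xyLen = std::sqrt( u*u + v*v );          // projection into the xy plane
    double len   = std::sqrt( u*u + v*v + w*w );    // full axis length

    if( xyLen == 0.0 )    // axis already lies along z; no first rotation needed
    {
        cosA = 1.0;  sinA = 0.0;
    }
    else
    {
        cosA = u / xyLen;
        sinA = v / xyLen;
    }
    cosB = w / len;
    sinB = xyLen / len;
}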

Monday, January 18, 2010

2D Rotation

I've got a vector, which represents the direction an agent is "facing" within a maze's frame of reference.  OpenGL will allow me to draw the agent's rotation by rotating its own frame of reference--but the agent's "facing" vector remains unchanged within the agent's frame of reference.  I am unable to determine the vector's value in the maze's frame of reference from OpenGL.

What I need, then, is to do the rotation myself, and pass the result into OpenGL (without using OpenGL to rotate the agent's frame of reference).  In this case, I will be rotating a single vector value around the z-axis, so it looks just like a two-dimensional rotation.

If I rotate the vector represented by the red line by θ (theta), the result will be the vector represented by the green line.  α (alpha), is the angle between the x-axis and the vector.  We'll call the original vector V0 (with components x0, y0), and the vector resulting from the rotation V1 (with x1, y1).  The length of the vector is r.

x0 and y0 are found by simple trigonometric functions on α:
x0 = r∙cos(α)
y0 = r∙sin(α)

Likewise, x1 and y1 are found in terms of α and θ:
x1 = r∙cos(α + θ)
y1 = r∙sin(α + θ)
Since cos(a+b) = cos(a)∙cos(b) - sin(a)∙sin(b), and sin(a+b) = sin(a)∙cos(b) + cos(a)∙sin(b):
x1 = r( cos(α)∙cos(θ) - sin(α)∙sin(θ) )
x1 = r∙cos(α)∙cos(θ) - r∙sin(α)∙sin(θ)
x1 = x0∙cos(θ) - y0∙sin(θ)

y1 = r( sin(α)∙cos(θ) + cos(α)∙sin(θ) )
y1 = r∙sin(α)∙cos(θ) + r∙cos(α)∙sin(θ)
y1 = y0∙cos(θ) + x0∙sin(θ)
y1 = x0∙sin(θ) + y0∙cos(θ)
These equations can be represented in matrix form, but for now they suffice to rotate the agent's "facing" vector.
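In code, the rotation is only a couple of lines.  A minimal sketch, assuming the angle is already in radians:

#include <cmath>

// Rotate (x0, y0) counter-clockwise by theta radians, giving (x1, y1).
void rotate2D( double x0, double y0, double theta, double &x1, double &y1 )
{
    x1 = x0 * std::cos( theta ) - y0 * std::sin( theta );
    y1 = x0 * std::sin( theta ) + y0 * std::cos( theta );
}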

Saturday, January 16, 2010

Need More Maths

I've added an agent to the maze--for now, a simple white arrow.  It is intended to point in the direction that a more complicated model would be facing (i.e. going "forward" will put a force on the agent in the arrow's direction).  I've mapped keys for moving forward and backward, and rotating clockwise and counter-clockwise (in the x/y plane).

It's a simple enough matter to use OpenGL to display a rotated arrow.  So far, the rendering sequence is:
  1. Translate the scene according to the camera position.
  2. Draw the maze in its own coordinate space.
  3. Translate the scene according to the agent position.
  4. Rotate the scene according to the agent rotation.
  5. Draw the agent in its own coordinate space.
So far, I have not found a standard way in OpenGL to retrieve the vector resulting from a rotation.  It appears I will have to implement my own rotation functionality, which means adding a matrix class (with all the appropriate bells and whistles).  With as much math as I'm re-implementing, I'm sure it would be easier to find and use someone else's math library.  I'm taking it as a personal challenge, however, to really understand what I'm doing--and that means understanding the math I'm using.  I have found that there's no better way to understand a concept than to implement it.

Thursday, January 14, 2010

Roll Call

If you're a returning reader, leave a comment and let me know why you decided to come back.

That is, what are you hoping to find here?

Initial Movement

The "world" I had created wasn't all that exciting.  It just sat there and waited for the user to hit a key, so it could create a new maze.  The next logical (to me) step was the addition of movement.  I changed the key-handling code a little to make special cases for the arrow keys (Qt::Key_Left, etc.), which would increment or decrement x and y translation values.  This had the effect of making the camera "jump" from one cell in the maze to its neighbor.

I wasn't thrilled with the results.  Ideally, movement within any world would be smooth.  I decided that the camera's movement (and ultimately that of the various entities within the maze) should be built upon some basic laws of physics.  Pressing an arrow key should exert a force on the object, which should cause its velocity to change.  The object's velocity should, in turn, influence its position.

I came across a few problems, at that point:
  1. I was suddenly thinking about the program as a 3D world, but I hadn't implemented any of the building blocks a 3D world requires (3D points and vectors, and all the math associated with them).
  2. The window refresh would have to use a better mechanism.  Up until now, the widget had only been refreshed whenever the maze was re-created, or an arrow key had been pressed.
  3. I wasn't sure how much physics I remembered from my high school and college classes.
The next step, it seemed, was to get the widget refreshing at regular intervals.  Using the Tao framework, this would be accomplished using glutIdleFunc, and the method specified would be called whenever the system wasn't doing something else.  In Qt, the suggestion is to set up a repeating QTimer to fire at the desired interval (I used 4ms--mainly because the example I found also used 4ms).  When that timer's timeout signal is connected to a slot, it will call the slot method at regular intervals.  It's possible to connect the timer directly to the widget's updateGL slot, but it seems a bit cleaner to create your own slot to do some pre-rendering steps, and then call updateGL() explicitly.
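The setup amounts to only a few lines.  This is the shape of it; animate() is my own slot name, the rest is standard Qt:

// In the GLWidget constructor:
QTimer *timer = new QTimer( this );
connect( timer, SIGNAL( timeout() ), this, SLOT( animate() ) );
timer->start( 4 );    // fire roughly every 4 ms

// The slot does any pre-rendering work, then repaints:
void GLWidget::animate()
{
    // ...physics and input handling go here...
    updateGL();
}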

In order to test that the widget was correctly animated, I decided I would rotate the scene around the z-axis by a small amount each frame.  I set a static double to zero, and incremented it a little within my animate slot.  In paintGL, I used glRotated to rotate the scene around the z-axis by the continually-increasing amount.  Once it was clear that I had a properly-spinning maze (and a little bit of vertigo), I pulled out the test code, leaving me with a fully animated (though immobile) world.

In order to make the camera move, I had to make it somewhat self-aware (though not in the creepy, humanity-destroying AI kind of way).  I moved the existing x and y position values into a 3D point class, and gave the camera an instance of the class to mark its position.  I also gave it instances of a vector class, to mark its velocity and acceleration.  (I also gave it rotation information, but I'm not yet sure how to use it.)

During each call to animate(), the camera object determines its new velocity from its acceleration.  This is where I had to return to my high school physics lessons, though I took some liberties for my final calculations.  If an object is accelerating at 9.8 m/s² for 0.004 s, the velocity increases by 9.8 * 0.004 = 0.0392 m/s.  At a steady velocity of 0.0392 m/s, the position changes by 0.0392 * 0.004 = 0.0001568 m.

At that rate, it looks like it will take forever for the camera to move anywhere.  If the force were only applied for 0.004 seconds, it would take quite a while.  As the force is continually applied, however, the velocity quickly increases, and the camera really starts moving.  Even using an acceleration of only 1.0 m/s², it doesn't take long to get cruising.

I'm not aware of any excruciatingly easy way to query which keys are being held at any given time in Qt.  Qt does, however, offer some simple methods which are fired off when it receives any key press or release event.  On my QGLWidget sub-class, I added a member: QHash<int, bool> mPressedKeys to keep track of which keys are being pressed.  This makes keyPressEvent and keyReleaseEvent pretty short and simple:
void GLWidget::keyPressEvent( QKeyEvent *e )
{
    mPressedKeys[ e->key() ] = true;     // key is now held down
}

void GLWidget::keyReleaseEvent( QKeyEvent *e )
{
    mPressedKeys[ e->key() ] = false;    // key was released
}

This allows me to query the state of the keys I'm interested in during each frame.  If any of the arrow keys are being pressed, I add an acceleration vector in the desired direction to the camera's internal acceleration vector.  After these vectors are all accounted for, the camera does its own physics calculations, and then the scene gets rendered.  Since the acceleration vector is re-calculated on each frame, I reset it to (0, 0, 0) when I'm done using it to get the new camera position.
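Per frame, the check looks something like this.  The camera object and its methods are hypothetical names from my own classes, not Qt API:

// Called once per frame, before the camera's physics update.
if( mPressedKeys.value( Qt::Key_Up ) )
    mCamera.addAcceleration( Vector3D( 0.0, 1.0, 0.0 ) );
if( mPressedKeys.value( Qt::Key_Down ) )
    mCamera.addAcceleration( Vector3D( 0.0, -1.0, 0.0 ) );
// ...Key_Left and Key_Right do the same along the x axis

mCamera.updatePhysics();        // new velocity, then new position
mCamera.clearAcceleration();    // acceleration is rebuilt from key input each frame

Conveniently, QHash::value() returns a default-constructed false for keys that have never been pressed, so no existence check is needed.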

I was thrilled to see a smooth camera translation when I pressed any of the arrow keys.  I was somewhat disappointed by two things, though:
  1. Releasing the arrow key did not cause the camera's movement to slow.
  2. Continually holding the arrow key caused the camera to speed up to no discernible limit.
I decided to put a velocity cap of 1.0 m/s on the camera.  If the velocity vector grew larger than that, I would simply normalize it.  If the cap had been something other than 1.0 m/s, I would have multiplied the normalized vector by the cap value.

In order to get the camera to slow down when no force was applied, I decided I had to introduce some sort of friction.  This is where my grasp of high school physics pretty much failed me, so my solution is likely not very true to life.  I ultimately decided that, in order to slow the object, I needed a force in the opposite direction from its velocity.  I introduced what I'm calling a friction constant, which I multiply by the normalized inverse of the velocity vector.  If no acceleration vector exists at the time of the velocity calculation, this friction vector becomes the acceleration vector.

This appeared to work pretty well; the camera came to a near-stop as quickly as it started moving.  I say "near-stop," however, because it never quite came to a stand-still.  I introduced another variable, mVelocityThreshold.  When the velocity vector's length is less than this value, I set the velocity vector to zeros, which finally causes the object to completely stop.
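Putting the friction, the cap, and the threshold together, the per-frame update looks roughly like this.  The vector helpers (length, normalized) and member names are hypothetical, and the friction model is my own invention rather than textbook physics:

void Camera::updatePhysics( double dt )
{
    bool coasting = ( mAcceleration.length() == 0.0 );

    // With no applied force, push back against the current direction of travel.
    if( coasting && mVelocity.length() > 0.0 )
        mAcceleration = mVelocity.normalized() * -mFrictionConstant;

    mVelocity += mAcceleration * dt;

    // Cap the speed: normalize, then scale up to the cap value.
    if( mVelocity.length() > mVelocityCap )
        mVelocity = mVelocity.normalized() * mVelocityCap;

    // When coasting below the threshold, snap to a complete stop.
    if( coasting && mVelocity.length() < mVelocityThreshold )
        mVelocity = Vector3D( 0.0, 0.0, 0.0 );

    mPosition += mVelocity * dt;
    mAcceleration = Vector3D( 0.0, 0.0, 0.0 );    // cleared for the next frame
}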

***

This is where the program is, today.  So now I get to discuss some of the concerns I have with the current implementation, as well as what I intend to work on next.
  • The velocity threshold, while apparently useful, does not completely solve the problem.  If the camera is traveling in the x direction, and I start applying only a y directional force, the velocity's x component does not reach zero--much like it didn't before I introduced the threshold.  I think I need to investigate a per-axis velocity threshold.
  • I'm not convinced that the frame rate is completely consistent at 4 ms per frame (250 fps).  Since the physics calculations (loosely) depend on this, I feel like I should set up a timer to query on each frame, and determine the number of milliseconds since the previous calculation.
  • The friction vector doesn't take into account forces like gravity.  Perhaps friction will have to be calculated on a per-component basis as well.
  • The camera will likely have to be completely reconsidered.  We intend to give this game a third-person, over-the-shoulder type of view, so the arrow keys will need to move the agent within the maze instead of the camera.  In addition to this, the up arrow will need to accelerate the agent along an arbitrary vector (and not just along the y-axis).  This implies that the agent will need, along with position, velocity and acceleration, a rotation variable.  I'm not yet confident as to how to use rotation effectively in three-space.
That last point is likely where I will spend my brain power.  The first two probably aren't as interesting, but hopefully they won't take much energy to implement, either.  The third point is a bridge I'll cross when I come to it.

Tuesday, January 12, 2010

Sidenote

I'm a little humbled that, so shortly after starting this project, I've already had visitors.  It occurs to me that, without context, this site likely looks like some random ramblings, and probably isn't very interesting.  In fact, I can't imagine it being very interesting even after it really gets rolling, except perhaps to a few fellow geeks.

I have decided that I want to become, if not an expert, somewhat renowned when it comes to using OpenGL with Qt.  The combination of two projects (the meta-project you're reading now, and the game I'm attempting to develop) is the means I have selected to attempt to accomplish that probably-lofty goal.

By the end of the next post, I will probably be caught up to the present state of development.  I imagine the posts after that will be somewhat more interesting, as I intend to discuss the questions I have and the problems I'm trying to solve, as well as the answers as I come upon them.  My hope is that these future posts will be significantly more detailed than what I've written so far.

I also hope to revisit the steps I've already written about, at some point, to give a clearer and more helpful explanation (should anyone ever, for some unknown reason, wish to follow my footsteps).  I consider myself a newbie in this whole arena, and I hope this site will be ultimately helpful to other newbies.

That being said, I welcome any feedback.  I intend to keep the comments open and (mostly) not moderated.  If you have a suggestion, please let me know.  Even if you're not at all technically-minded, but stumble across my writings, feel free to leave a note.

I look forward to hearing what my readers have to say.

Migration to Visual Studio

Through my employment, I've become quite comfortable using Microsoft Visual Studio.  There are times when IntelliSense completely fails, but for the most part it's more than sufficient.  The feature I really rely on, though, is the debugger.

The Qt Creator IDE has debugging functionality, but I didn't find it anywhere near as intuitive as Visual Studio's.  After developing with Creator for a while, I got frustrated enough to try to return to my comfort zone.  I don't intend to go out of my way to make this thing multi-platform, so I wasn't too concerned about compiler differences, etc.

I did a quick search for "qt in visual studio" and came across (I believe) these step-by-step explanations.  Between the two, I was able to get Qt building for Visual Studio.  As mentioned in both how-to's, it took quite a while, so have a good book on hand if you give it a try.

I'm afraid that, writing this in retrospect, I'm entirely too unclear on the details to be very helpful.  Perhaps the next time I re-install Windows, I'll pay better attention.  I'm sure there were other settings and options I had to tweak in order to import the Qt project into Visual Studio, but I don't remember how that worked.

Monday, January 11, 2010

Reboot Unbeknownst

When I started thinking about how to randomly generate a maze, I had no idea that its implementation would become the basis for the rest of my project.  I only knew that I had come up with an algorithm, and that I wanted to "put it to paper."  As with the map-builder program, I initially envisioned a two-dimensional display that would simply display the maze.  Pressing any key would cause the maze to be re-generated.

I had used Qt 3 with C++ pretty extensively in a previous employment, and decided I would like to see what Nokia's acquisition and the latest version had to offer.  I toyed with the idea of keeping the entire application two-dimensional, but ultimately decided that I needed more OpenGL practice if I was ever going to get the game to a playable state.

I downloaded the Qt SDK, and started up the Qt Creator IDE.  In no time at all, I had a blank QGLWidget running in my new application.  A few moments after that, I had a simple red triangle showing brightly against the black.  (I don't recall referring to any particular OpenGL tutorial for the simple display, but it's possible I didn't figure it out on my own.)

It was finally time to begin implementing my maze-building algorithm.  I decided to start with an 11x11 array of type Direction, a simple enumeration to define which compass directions were not blocked from a given cell.  I figured that, with a maze of that size, I could determine whether the algorithm worked before extending it to an arbitrary size.
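A bitmask-style enumeration fits this well.  A hypothetical sketch (my actual declaration may have differed):

// Which compass directions are open (not blocked) from a given cell.
enum Direction
{
    DIR_NONE  = 0,
    DIR_NORTH = 1 << 0,
    DIR_SOUTH = 1 << 1,
    DIR_EAST  = 1 << 2,
    DIR_WEST  = 1 << 3
};

Direction maze[ 11 ][ 11 ];    // at file scope, every cell starts as DIR_NONE

// Opening a wall means OR-ing in a flag, e.g.:
// maze[r][c] = Direction( maze[r][c] | DIR_NORTH );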

The algorithm was built like a simple stack-based maze solver.  I designated one cell as the start, and a set of cells as candidates for the end.  I also designated a set of bounce-back cells, which initially consisted of the outer border of the maze.  First, a path from the start to one of the end cells was found, randomly stepping from one cell to a neighbor.  Each newly-found cell would be added to the set of bounce-back cells.  If the path stepped into a bounce-back cell, that step would be popped off the path's stack, and a different step would be attempted.  Likewise, if no valid step could be taken, the step that led to the dead-end cell would be popped off the stack.  Eventually, the winding path would find its way to one of the designated end cells.  At that point, the maze array was updated to keep track of the valid directions available along the path.

A new start point would then be chosen at random from among those cells which had no Direction data, since they were not among the previously-found path cells.  The cells that did contain Direction data were designated in the set of new end cells.  Once again, the outer border would be included in the bounce-back cell set.  The new path would then be determined, just like the initial path had.  This process would repeat until all of the cells in the array had Direction data.
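In sketch form, a single path-carving pass might look like this.  The Cell type, the helpers, and the containers are all hypothetical--this shows the shape of the algorithm, not my actual implementation:

#include <set>
#include <utility>
#include <vector>

typedef std::pair<int, int> Cell;    // (row, column); comparable, so it works in a std::set

// Hypothetical helpers, implementations omitted:
bool randomOpenNeighbor( const Cell &from, const std::set<Cell> &blocked, Cell &out );
void recordDirections( const std::vector<Cell> &path );

// Carve one random path from start to any cell in endCells.
void carvePath( const Cell &start,
                const std::set<Cell> &endCells,
                std::set<Cell> bounceBack )    // copied: the border plus cells found so far
{
    std::vector<Cell> path;
    path.push_back( start );
    bounceBack.insert( start );

    while( endCells.count( path.back() ) == 0 )
    {
        Cell next;
        if( randomOpenNeighbor( path.back(), bounceBack, next ) )
        {
            bounceBack.insert( next );    // later steps will bounce off this cell
            path.push_back( next );
        }
        else
        {
            path.pop_back();              // dead end: pop the step that got us here
        }
    }

    recordDirections( path );             // store the open directions along the path
}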

With the maze model now complete and presumed working, I started to work on getting it to display.  I decided that the x/y plane in the QGLWidget would be the ideal place for drawing the maze, using simple lines.  I envisioned a maze centered at (0, 0, 0), with the camera sitting above, looking straight down.  Using GL_LINES, it was simple enough to get the maze drawn, with each cell being a 1x1 square.  I had to go back to some OpenGL tutorials, though, to figure out how to get the "camera" working correctly.

OpenGL doesn't really have a concept of a camera.  The scene is implicitly viewed from the (0, 0, 0) position, looking down the negative z axis, so to see the scene from different positions, the entire scene must be transformed.  In my case, I set up a perspective projection (as opposed to orthographic), with a view angle of 45 degrees in the y direction.  A little bit of trigonometry showed me how far along the z axis I would have to translate my scene in order to see it.
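The trigonometry is just opposite-over-adjacent on the view frustum.  A sketch, assuming the 45-degree vertical field of view above:

#include <cmath>

// Distance to push the scene down the -z axis so that 'height' world
// units exactly fill a vertical view angle of 'fovyDegrees'.
double cameraDistance( double height, double fovyDegrees )
{
    double halfAngle = fovyDegrees * 0.5 * M_PI / 180.0;    // M_PI from <cmath>
    return ( height * 0.5 ) / std::tan( halfAngle );
}

// An 11-unit-tall maze with a 45-degree view angle needs about 13.3 units.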

Digging through the Qt API documentation a little, I found what I needed for rebuilding the maze: keyPressEvent.  I reimplemented the function in my QGLWidget sub-class, and accepted any key press as a trigger.  With the maze newly rebuilt, I didn't notice any change, however.  I needed to tell the widget to repaint itself.  I created a signal on my maze class (and made it a QObject subclass), which I emitted when the maze had finished rebuilding.  I connected that signal to the updateGL slot on the QGLWidget, and found that the display updated correctly each time the maze was rebuilt.
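The connection itself is one line of standard Qt signal/slot code; only the rebuilt() signal name is an illustrative guess:

// mMaze is the QObject-derived maze; 'this' is the QGLWidget subclass.
connect( mMaze, SIGNAL( rebuilt() ), this, SLOT( updateGL() ) );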

It was a simple matter to enter the third dimension.  I decided that the units I would be using were meters, so each cell in the maze was one square meter.  I changed GL_LINES to GL_QUADS, added a few more vertices between glBegin and glEnd, and changed the lines into walls, half a meter tall.
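For reference, one face of a one-meter wall segment, half a meter tall, looks something like this in immediate mode (coordinates illustrative, with z as the vertical axis):

glBegin( GL_QUADS );
    glVertex3d( 0.0, 0.0, 0.0 );    // bottom-left
    glVertex3d( 1.0, 0.0, 0.0 );    // bottom-right
    glVertex3d( 1.0, 0.0, 0.5 );    // top-right
    glVertex3d( 0.0, 0.0, 0.5 );    // top-left
glEnd();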


Sunday, January 10, 2010

Out Like A Lightbulb

My wife and I have really enjoyed playing the classic Bomberman games together.  We've looked for games in a similar style, and one day I decided I could build one.  We spent several hours over several days envisioning and discussing our new game and its game-play.  The basic idea was that each player would control a cartoony penguin, with the intent of finding a way through a maze as quickly as possible.  We elaborated on the maze idea, adding blocks which could be moved by either player to interfere with the progress of the opponent.

My wife sketched out a few mazes, and that led me to consider how to get the paper versions into a digital format.  I decided a map builder program would be ideal, with a two-dimensional display of the maze next to a separate toolbar.  The user could click on various points within the display, and alter the layout of the map.  The idea was to implement various tools, the selection of which would cause different layout changes to the map--walls and hallways; movable blocks and other obstacles; player starting positions; etc.

I had a little bit of OpenGL experience from a class I had taken, and decided I would like to learn more about the API.  I had used C# and the Tao framework for my class project, so was somewhat familiar with it.  Enough, I figured, to be able to get a good start.

I loaded up Visual Studio, started a new C# project, and included the Tao.FreeGlut and Tao.OpenGL references, as well as the freeglut.dll file.  During my class, I had made the mistake of not including the .dll file with my executable, so I couldn't get my program to run on others' machines.  This time, I made sure the .dll was exported along with the executable.

It turns out to have been a pretty futile exercise.  Soon after I got the program compiling (and not doing much else), I started thinking about how I would randomly generate an arbitrary-sized maze.  I came up with an algorithm, and started planning a new side-project, which (at the time) was to be entirely unrelated to the game.  Before too long, though, I had abandoned the map-building project altogether.