Viewing Pipeline
Due midnight, 3 March 2011
The goal of this week's lab is to get a working visualization pipeline. Next week you'll integrate it with your data manipulation.
Tasks
The result of your work this week should be a ViewRef class that implements a 3D viewing transformation, integrated with your visualization program so that you can translate, rotate, and scale a set of axes and a simple data set. The 3D pipeline should work equally well for 2D by limiting the user to rotations about the z-axis only.

In a new file, create a new class called ViewRef (you are welcome to
use any name you like). The ViewRef class should have the following
fields and default values, which should be set in the __init__
method. You should probably create a reset method that is
called from the __init__ method so that you can easily reset
to the default view.
Create the following fields:
- vrp: a NumPy matrix with the default value [0.5, 0.5, 0]
- vpn: a NumPy matrix with the default value [0, 0, 1]
- vup: a NumPy matrix with the default value [0, 1, 0]
- u: a NumPy matrix with the default value [1, 0, 0]
- extent: a list with the default values [1, 1, 1]
- view: a list with the default value [400, 400]
- view_offset: a list with the default value [20, 20]
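One way to lay out the class is sketched below. It assumes the vectors are stored as 1x3 numpy matrices, matching the vrp[0, 0] style indexing used in the examples later in this handout; the field names follow the list above, but you may organize the class differently.

```python
import numpy

class ViewRef:
    """Holds the view reference parameters for the 3D viewing pipeline."""

    def __init__(self):
        # delegate to reset so the default view is easy to restore later
        self.reset()

    def reset(self):
        self.vrp = numpy.matrix([0.5, 0.5, 0])  # view reference point
        self.vpn = numpy.matrix([0, 0, 1])      # view plane normal
        self.vup = numpy.matrix([0, 1, 0])      # view up vector
        self.u = numpy.matrix([1, 0, 0])        # view plane horizontal axis
        self.extent = [1, 1, 1]                 # extent of the view volume
        self.view = [400, 400]                  # screen size in pixels
        self.view_offset = [20, 20]             # buffer around the window edges
```

Calling reset from a hot key or menu item then restores the default view without any extra code.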
The ViewRef class also needs a function build that computes the view matrix and returns it. The process to execute in the build function is as follows.

Generate a 4x4 identity matrix, which will be the basis for the view
matrix. For example:
m = numpy.identity( 4, float )

Generate a translation matrix to move the VRP to the origin and then
premultiply m by the translation matrix. Note the negation: moving the
VRP to the origin means translating by -vrp. For example:
t1 = numpy.matrix( [[1, 0, 0, -self.vrp[0, 0]],
                    [0, 1, 0, -self.vrp[0, 1]],
                    [0, 0, 1, -self.vrp[0, 2]],
                    [0, 0, 0, 1]] )
m = t1 * m

Calculate the view reference axes tu, tvup, tvpn.
- tu is the cross product of the vup and vpn vectors.
- tvup is the cross product of the vpn and tu vectors.
- tvpn is a copy of the vpn vector.
- Normalize the view axes tu, tvup, and tvpn. You probably want to write a normalize function to handle this. Make sure you do not include the homogeneous coordinate in the normalization process.
- Copy the orthonormal axes back to self.u, self.vup, and self.vpn.
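The axis calculation above can be sketched as follows; the normalize helper is one simple way to write it, and the standalone vup/vpn values here stand in for the class fields.

```python
import numpy

def normalize(v):
    """Return a unit-length copy of a 1x3 numpy matrix.
    Pass only the x, y, z components, never the homogeneous coordinate."""
    return v / numpy.linalg.norm(v)

# stand-ins for self.vup and self.vpn with their default values
vup = numpy.matrix([0., 1., 0.])
vpn = numpy.matrix([0., 0., 1.])

# numpy.cross returns a plain array, so wrap the result back into a matrix
tu = normalize(numpy.matrix(numpy.cross(vup, vpn)))
tvup = normalize(numpy.matrix(numpy.cross(vpn, tu)))
tvpn = normalize(vpn.copy())
```

With the default values the result is simply the standard basis: tu = [1, 0, 0], tvup = [0, 1, 0], tvpn = [0, 0, 1].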

Use the normalized view reference axes to generate the rotation matrix
to align the view reference axes and then premultiply m by the
rotation. For example:
# align the axes
r1 = numpy.matrix( [[ tu[0, 0],   tu[0, 1],   tu[0, 2],   0.0 ],
                    [ tvup[0, 0], tvup[0, 1], tvup[0, 2], 0.0 ],
                    [ tvpn[0, 0], tvpn[0, 1], tvpn[0, 2], 0.0 ],
                    [ 0.0,        0.0,        0.0,        1.0 ]] )
m = r1 * m

Translate the lower left corner of the view space to the origin. Since
the axes are aligned, this is just a translation by half the extent of
the view volume in the X and Y view axes.
Note that the rest of the project description will use shorthand, rather than writing out the matrices explicitly. Conceptually, the translation is represented as:
m = T( 0.5*extent[0], 0.5*extent[1], 0 ) * m

Use the extent and screen size values to scale to the screen.
m = S( view[0] / extent[0], view[1] / extent[1], 1.0 / extent[2] ) * m

Finally, translate the view onto the screen and add the view offset,
which gives a little buffer around the top and left edges of the
window.
m = T( view[0] + view_offset[0], view[1] + view_offset[1], 0 ) * m
If your code is working properly, then using the default parameters you should get the matrix below.
[[ 400.    0.    0.  420.]
 [   0.  400.    0.  420.]
 [   0.    0.    1.    0.]
 [   0.    0.    0.    1.]]
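The whole build process can be assembled into a standalone function as a check; build_view and its argument list are illustrative stand-ins for your build method and the ViewRef fields, and note that the VRP translation is negated so the VRP actually moves to the origin.

```python
import numpy

def build_view(vrp, vpn, vup, extent, view, view_offset):
    """Sketch of the build() steps; vrp, vpn, vup are 1x3 numpy matrices."""
    m = numpy.identity(4, float)

    # translate the VRP to the origin (note the negation)
    m = numpy.matrix([[1, 0, 0, -vrp[0, 0]], [0, 1, 0, -vrp[0, 1]],
                      [0, 0, 1, -vrp[0, 2]], [0, 0, 0, 1]]) * m

    # orthonormal view reference axes
    tu = numpy.matrix(numpy.cross(vup, vpn))
    tvup = numpy.matrix(numpy.cross(vpn, tu))
    tvpn = vpn.copy()
    tu = tu / numpy.linalg.norm(tu)
    tvup = tvup / numpy.linalg.norm(tvup)
    tvpn = tvpn / numpy.linalg.norm(tvpn)

    # rotate to align the view reference axes
    m = numpy.matrix([[tu[0, 0], tu[0, 1], tu[0, 2], 0.],
                      [tvup[0, 0], tvup[0, 1], tvup[0, 2], 0.],
                      [tvpn[0, 0], tvpn[0, 1], tvpn[0, 2], 0.],
                      [0., 0., 0., 1.]]) * m

    # translate the lower left corner of the view volume to the origin
    m = numpy.matrix([[1, 0, 0, 0.5 * extent[0]], [0, 1, 0, 0.5 * extent[1]],
                      [0, 0, 1, 0], [0, 0, 0, 1]]) * m

    # scale to the screen
    m = numpy.matrix([[view[0] / extent[0], 0, 0, 0],
                      [0, view[1] / extent[1], 0, 0],
                      [0, 0, 1.0 / extent[2], 0],
                      [0, 0, 0, 1]]) * m

    # final translation plus the view offset
    m = numpy.matrix([[1, 0, 0, view[0] + view_offset[0]],
                      [0, 1, 0, view[1] + view_offset[1]],
                      [0, 0, 1, 0], [0, 0, 0, 1]]) * m
    return m
```

With the default parameters this function reproduces the matrix shown above, which makes it a convenient unit test for your own build method.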
You'll want to import your ViewRef class into your display application.

The next step is to create a set of axes in your visualization
program.
A simple method of implementing axes is to create and store a numpy matrix with the axis endpoints. You should have six endpoints, and the length of each axis should be 1 (don't assume that one of the endpoints will always be zero). You'll also want a list to hold the actual graphics objects (the lines) that instantiate them on the screen.
Make a function (e.g. buildAxes()) that builds the view transformation matrix [VTM], multiplies the axis endpoints by the VTM, then creates three new line objects, one for each axis. Store the three line objects.
Note that the VTM assumes the points are columns. If you represent the axis points as rows of a matrix, you will need to do the following.
pts = (vtm * self.axes.T).T
The above transposes the axis points so each point is a column, multiplies it by the VTM, and then takes the transpose so that the pts matrix has the axis endpoints as rows again.
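As a concrete check of the transpose trick, the snippet below transforms unit-length axis endpoints (stored as rows with a homogeneous coordinate, as suggested above) by the default view matrix computed earlier in this handout.

```python
import numpy

# six unit-length axis endpoints as rows, homogeneous coordinate last
axes = numpy.matrix([[0., 0., 0., 1.], [1., 0., 0., 1.],   # x axis
                     [0., 0., 0., 1.], [0., 1., 0., 1.],   # y axis
                     [0., 0., 0., 1.], [0., 0., 1., 1.]])  # z axis

# the VTM produced by the default view parameters
vtm = numpy.matrix([[400., 0., 0., 420.],
                    [0., 400., 0., 420.],
                    [0., 0., 1., 0.],
                    [0., 0., 0., 1.]])

# transpose so points are columns, multiply, transpose back to rows
pts = (vtm * axes.T).T
# rows 0 and 1 of pts are now the screen endpoints of the x axis, and so on
```

The x axis, for example, maps from (0, 0, 0)-(1, 0, 0) to the screen segment (420, 420)-(820, 420).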
Make another function (e.g. updateAxes()) that executes the following algorithm.
# build the VTM
# multiply the axis endpoints by the VTM
# for each line object
#     update the coordinates of the object
Note that the original axis endpoints do not, in general, change. If you normalize your data sets prior to running them through the view pipeline, which you should do, then axes of unit length should be appropriate.

Bind a function (e.g. handleButton1) to the button 1 motion event. The
standard button 1 event should store the user's click into a variable
(e.g. baseClick1). The button 1 motion function should implement the
following algorithm.
# Calculate the differential motion since the last time the function was called
# Divide the differential motion (dx, dy) by the screen size (view X, view Y)
# Multiply the horizontal and vertical motion by the horizontal and vertical extents
# Put the result in delta0 and delta1
# The VRP should be updated by delta0 * U and delta1 * VUP (this is a vector equation)
# call updateAxes()
Test your translation. See what happens if you put some multipliers on delta0 and delta1 to slow down or speed up the motion.
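The arithmetic in the algorithm above can be sketched as a helper function; pan_vrp and its signature are illustrative, and in your program the pieces would come from the event object and the ViewRef fields.

```python
import numpy

def pan_vrp(vrp, u, vup, dx, dy, view, extent):
    """Sketch of the button 1 motion update.
    dx, dy are pixel motion since the last call; vrp, u, vup are 1x3 matrices."""
    # normalize the pixel motion by the screen size, then scale by the extent
    delta0 = float(dx) / view[0] * extent[0]
    delta1 = float(dy) / view[1] * extent[1]
    # vector equation: move the VRP within the view plane
    return vrp + delta0 * u + delta1 * vup
```

Dragging 40 pixels to the right on a 400-pixel window with unit extent, for example, moves the VRP 0.1 units along U; a multiplier on delta0 and delta1 would speed up or slow down the motion.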

Button 3 motion should implement scaling. The scaling behavior should
act like a vertical lever. The button 3 click should store a base
click point that does not change while the user holds down the mouse
button. It should also store the value of the extent in the view
space when the user clicked. This is the original extent.
The button 3 motion should convert the distance between the base click and the current mouse position into a scale factor. Keep the scale factor between 0.1 and 3.0. You can then multiply the original extent by the factor and put it into the ViewRef object. Then call updateAxes().
Test out this capability. If you click in the window, as you move above your original click point, the scene should zoom in. As you move below your original click point, the scene should zoom out. As you come back to your original click point, the scene should go back to its original scale.
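One way to sketch the scaling math is below; the 100-pixel divisor is a guess you will want to tune, and the function names are illustrative. Since screen y grows downward, moving above the base click gives a factor below 1, shrinking the extent and zooming in.

```python
def scale_extent(base_extent, base_y, cur_y, pixels_per_unit=100.0):
    """Sketch of the button 3 motion update.
    base_extent is the extent stored at the click; base_y/cur_y are screen y."""
    # convert the vertical distance from the base click into a scale factor
    factor = 1.0 + (cur_y - base_y) / pixels_per_unit
    # keep the factor between 0.1 and 3.0
    factor = max(0.1, min(3.0, factor))
    # scale the original extent; the result goes back into the ViewRef object
    return [e * factor for e in base_extent]
```

At the base click point the factor is exactly 1, so the scene returns to its original scale, matching the lever behavior described above.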

Pick a rotation method. You could choose to rotate about the VRP.
The method described below rotates about the center of the view
volume.
Make a method in your ViewRef class called rotateVRC that takes two angles as arguments, in addition to self. The two angles are how much to rotate about the VUP axis and how much to rotate about the U axis. The process is as follows.
- Make a translation matrix to move the point ( VRP + VPN * extent[2] * 0.5 ) to the origin. Put it in t1.
- Make an axis alignment matrix Rxyz using u, vup, and vpn.
- Make a rotation matrix about the Y axis by the VUP angle. Put it in r1.
- Make a rotation matrix about the X axis by the U angle. Put it in r2.
- Make a translation matrix that has the opposite translation from step 1. Put it in t2.
- Make a numpy matrix tvrc where the VRP is on the first row, with a 1 in the homogeneous coordinate, and u, vup, and vpn are the next three rows, with a 0 in the homogeneous coordinate.

Execute the following:
tvrc = (t2 * Rxyz.T * r2 * r1 * Rxyz * t1 * tvrc.T).T
Then copy the values from tvrc back into the VRP, U, VUP, and VPN fields and normalize U, VUP, and VPN.
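The steps above can be sketched as a standalone function. The sign conventions chosen for the rotation matrices are one reasonable (right-handed) choice, and the `view` argument stands in for your ViewRef instance.

```python
import math
import numpy

def rotateVRC(view, vup_angle, u_angle):
    """Sketch of rotation about the center of the view volume.
    view must have vrp, u, vup, vpn (1x3 matrices) and extent (a list)."""
    # center of rotation: middle of the view volume along VPN
    center = view.vrp + view.vpn * view.extent[2] * 0.5
    t1 = numpy.matrix([[1, 0, 0, -center[0, 0]], [0, 1, 0, -center[0, 1]],
                       [0, 0, 1, -center[0, 2]], [0, 0, 0, 1]])
    Rxyz = numpy.matrix([[view.u[0, 0], view.u[0, 1], view.u[0, 2], 0.],
                         [view.vup[0, 0], view.vup[0, 1], view.vup[0, 2], 0.],
                         [view.vpn[0, 0], view.vpn[0, 1], view.vpn[0, 2], 0.],
                         [0., 0., 0., 1.]])
    c1, s1 = math.cos(vup_angle), math.sin(vup_angle)
    r1 = numpy.matrix([[c1, 0, s1, 0], [0, 1, 0, 0],
                       [-s1, 0, c1, 0], [0, 0, 0, 1]])  # about Y
    c2, s2 = math.cos(u_angle), math.sin(u_angle)
    r2 = numpy.matrix([[1, 0, 0, 0], [0, c2, -s2, 0],
                       [0, s2, c2, 0], [0, 0, 0, 1]])   # about X
    t2 = numpy.matrix([[1, 0, 0, center[0, 0]], [0, 1, 0, center[0, 1]],
                       [0, 0, 1, center[0, 2]], [0, 0, 0, 1]])
    # VRP on the first row (homogeneous 1), then u, vup, vpn (homogeneous 0)
    tvrc = numpy.matrix([[view.vrp[0, 0], view.vrp[0, 1], view.vrp[0, 2], 1.],
                         [view.u[0, 0], view.u[0, 1], view.u[0, 2], 0.],
                         [view.vup[0, 0], view.vup[0, 1], view.vup[0, 2], 0.],
                         [view.vpn[0, 0], view.vpn[0, 1], view.vpn[0, 2], 0.]])
    tvrc = (t2 * Rxyz.T * r2 * r1 * Rxyz * t1 * tvrc.T).T
    # copy the rotated values back and re-normalize the axes
    view.vrp = tvrc[0, 0:3]
    view.u = tvrc[1, 0:3] / numpy.linalg.norm(tvrc[1, 0:3])
    view.vup = tvrc[2, 0:3] / numpy.linalg.norm(tvrc[2, 0:3])
    view.vpn = tvrc[3, 0:3] / numpy.linalg.norm(tvrc[3, 0:3])
```

Rotating by zero angles should leave the view unchanged, which makes a quick sanity check; a 90-degree rotation about VUP from the default view swings VPN onto the x axis.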

Add a button 2 motion function and binding. The button 2 click should
store the button click location into a variable like baseClick2. The
rotations will be differential, like the translations, so each time
through your button motion function you will need to update
baseClick2.
Calculate delta0 and delta1 as the pixel motion differences in x and y divided by a constant (e.g. 200) and multiplied by pi (math.pi). Think of it as how many pixels the user must move the mouse to execute a 180 degree rotation. Pass the delta values into the rotateVRC function. Then call updateAxes().
See if your function does the right thing. You probably need to negate the delta1 (vertical motion) to get the proper behavior.
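The angle conversion can be sketched as follows; the function name, the 200-pixel divisor, and the negated vertical motion are all illustrative choices you may adjust.

```python
import math

def rotation_deltas(base, cur, divisor=200.0):
    """Sketch: convert differential mouse motion (pixels) into rotation angles.
    `divisor` is how many pixels of motion produce a 180 degree (pi) rotation."""
    dx = cur[0] - base[0]
    dy = cur[1] - base[1]
    delta0 = dx / divisor * math.pi
    # negate vertical motion, since screen y grows downward
    delta1 = -dy / divisor * math.pi
    return delta0, delta1
```

In the handler you would pass the results to rotateVRC, call updateAxes, and then store the current position back into baseClick2 so the next event is differential.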
Have your program read this data set and plot it, enabling all of the interactive functionality. The data set is purely numeric and already in the range [0, 1] on all three axes.
Extensions
- Use your DataSet class to do the last task.
- Demonstrate your visualization on your own data set.
- Add interface elements to control the view. For example, add hot keys that put the view in particular viewing configurations.
- Start working on legends or other annotations in the visualization. Color the axes, for example, and provide a legend specifying which color corresponds to which data axis.
- Give the user the ability to click on a data point and then pop up a dialog with the raw data for that point.
- Experiment more with matplotlib.
- Create custom dialog boxes to give the user more control over the visualization.
Writeup
For this week's writeup, describe your ViewRef class, with brief descriptions of all the functions, their inputs, outputs, and purpose. Explain how each interface motion connects to changes in the view reference coordinates.
Include 2-3 images of your axes and the simple data in different configurations, explaining the user motions that generated each view.
Handin
Once you have written up your assignment, give the page the label:
cs251s11project4
Put your code in the COMP/CS251 folder on fileserver1/Academics. Please make sure you are organizing your code by project.