2012Fall7646 Project 3

Important Updates

You are allowed to use NumPy libraries for the linear regression component of this project.

You are NOT allowed to use other people's code to implement Random Forests. You must write that yourself.

Overview

You are to implement and evaluate a Random Forest learner class (with an interface similar to your linear regression and KNN learners). Your implementation should be based on the method described by Cutler (and by Tucker in class). See the link to the paper below.

Your class should implement the following functions/methods for the Random Forest learner:

learner = RandomForestLearner(k = 3)
learner.addEvidence(Xtrain, Ytrain)
Y = learner.query(Xtest)

Where "k" is the number of trees to create in your forest. Xtrain and Xtest should be ndarrays (numpy objects) where each row represents an X1, X2, X3... XN set of feature values. The columns are the features and the rows are the individual example instances. Y and Ytrain are single dimensional lists that indicate the value we are attempting to predict with X. For an example of how to code an object, take a look at QSTK/qstklearn/kdtknn.py

We are considering this a regression problem (not classification). So the goal is to return a continuous numerical result (not a discrete numerical result).

Some external resources that might be useful for this project:

Simplifications & Clarifications

For reference/discussion, assume that Xij is the value of feature i at row j of the data, and Yj is the target value to learn. So our data looks like:

X11 X21 X31 Y1
X12 X22 X32 Y2
X13 X23 X33 Y3
...

In order to make the project a bit easier, we will make the following simplifications to Cutler's algorithm:

  • For choosing the feature to split on: select it randomly.
  • For choosing SplitVal: choose two random rows in the data (two separate j's), then take the mean of the two X values (see the sketch after this list). As an example, suppose we're splitting on feature 2 and our random rows are 6 and 8. Then SplitVal = (X26 + X28) / 2.
  • When you train, you should send 60% of the data to addEvidence() for the random forest. It should then select 60% of the data randomly with replacement for each tree (bootstrap aggregation, or bagging).
  • As questions arise, I will add answers here -- (Tucker)
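
The feature/SplitVal selection above might look like the following sketch (choose_split is a hypothetical helper name; data is assumed to be an ndarray whose last column is Y):

import numpy as np

def choose_split(data):
    # pick a feature uniformly at random (the last column is Y, so exclude it)
    feature = np.random.randint(0, data.shape[1] - 1)
    # pick two random rows and average their values for that feature
    r1, r2 = np.random.randint(0, data.shape[0], size=2)
    split_val = (data[r1, feature] + data[r2, feature]) / 2.0
    return feature, split_val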

Recommendations on How to Implement This

First, create a single function/method that reads a tree in from a CSV file and then allows you to query the tree; a sketch of such a query routine appears after the list below. Make sure this component is rock solid before you start work on the other components. Use the representation for trees that I have discussed in class: four columns, where the columns represent:

  • Feature number (0 and greater, or -1 if it is a leaf)
  • SplitVal (or leaf value if it is a leaf)
  • Index/pointer to root of left tree
  • Index/pointer to root of right tree
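
For illustration, a query routine over this four-column table might look like the sketch below. It assumes the left/right columns hold offsets relative to the current row (absolute row indices work equally well), and that the tree was loaded into an ndarray, e.g. with numpy.loadtxt(filename, delimiter=','); query_tree is a hypothetical name:

import numpy as np

def query_tree(tree, point):
    # 'point' is a 1-D array of feature values for one example
    node = 0
    while tree[node, 0] != -1:               # -1 in column 0 marks a leaf
        feature = int(tree[node, 0])
        if point[feature] <= tree[node, 1]:  # compare against SplitVal
            node += int(tree[node, 2])       # follow the left offset
        else:
            node += int(tree[node, 3])       # follow the right offset
    return tree[node, 1]                     # leaf value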

Next, build a function that can learn a tree from data. Feed it some very simple examples and make sure it generates correct trees.
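
One possible shape for that function, under the simplifications above (build_tree is a hypothetical name, choose_split is the earlier sketch, and the left/right columns hold relative offsets):

import numpy as np

def build_tree(data):
    # one example left: emit a leaf holding its Y value
    if data.shape[0] == 1:
        return np.array([[-1, data[0, -1], -1, -1]])
    feature, split_val = choose_split(data)
    left_mask = data[:, feature] <= split_val
    # degenerate split (all rows on one side): emit a leaf of the mean Y
    if left_mask.all() or (~left_mask).all():
        return np.array([[-1, data[:, -1].mean(), -1, -1]])
    left = build_tree(data[left_mask])
    right = build_tree(data[~left_mask])
    # root row: left subtree starts 1 row down, right follows the left subtree
    root = np.array([[feature, split_val, 1, left.shape[0] + 1]])
    return np.vstack((root, left, right))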

Next, build on these components to create a learner that learns multiple trees (a forest).

Your addEvidence() function should train multiple trees (K of them, in fact), each trained on a random 60% of the data, sampled with replacement.
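
Possible bodies for these methods, filling in the skeleton from the Overview and reusing the hypothetical build_tree and query_tree sketches above (illustrative, not prescriptive):

import numpy as np

class RandomForestLearner:
    def __init__(self, k=3):
        self.k = k
        self.trees = []

    def addEvidence(self, Xtrain, Ytrain):
        # bag 60% of the rows, with replacement, for each of the k trees
        data = np.column_stack((Xtrain, Ytrain))
        n = int(0.6 * data.shape[0])
        self.trees = []
        for _ in range(self.k):
            rows = np.random.randint(0, data.shape[0], size=n)
            self.trees.append(build_tree(data[rows]))

    def query(self, Xtest):
        # average the trees' answers to get a continuous (regression) result
        preds = np.array([[query_tree(tree, x) for x in Xtest]
                          for tree in self.trees])
        return preds.mean(axis=0)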

The Data

We will use the same data as in the KNN assignment. You will find these two files in the Examples/KNN directory:

  • data-classification-prob.csv
  • data-ripple-prob.csv

Each data file contains 3 columns: X1, X2, and Y. Please don't be misled by the name "data-classification-prob.csv" into treating the problem as classification. It's regression.

The columns X1, X2, and Y in data-classification-prob.csv represent one set of paired data, and data-ripple-prob.csv represents another set. One intent of the assignment is for us to assess how well the learning algorithm works on these different data sets. In each case you should use the first 60% of the data for training and the second 40% for testing.

Software to write

  • Create a Python object called RandomForestLearner in a file named RandomForestLearner.py that implements the methods described above.
  • Reuse your LinRegLearner and KNN learner.
  • Reuse your Python program called testlearner.py that evaluates each of your learners in the following manner:
    • Select the first 60% of the data for training (i.e., feed it to addEvidence()).
    • Use the remaining 40% for testing (i.e., query()).
    • testlearner.py should evaluate the following for each learner:
      • Time required for training (average seconds per instance)
      • Time required for query (average seconds per instance)
      • RMS error (root mean square difference between the "correct" Y and the result provided by query()). See http://en.wikipedia.org/wiki/Root_mean_square for an example of how to calculate RMS error.
      • Correlation coefficient of the response from the learner versus the correct response (using the 40% out of sample data)
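
Put together, the evaluation in testlearner.py could be structured like this (file names from above; the metric formulas are standard, the variable names illustrative):

import time
import numpy as np

data = np.loadtxt('data-ripple-prob.csv', delimiter=',')
split = int(0.6 * data.shape[0])                      # first 60% for training
Xtrain, Ytrain = data[:split, :-1], data[:split, -1]
Xtest, Ytest = data[split:, :-1], data[split:, -1]    # last 40% for testing

learner = RandomForestLearner(k=3)

start = time.time()
learner.addEvidence(Xtrain, Ytrain)
train_time = (time.time() - start) / Xtrain.shape[0]  # avg seconds per instance

start = time.time()
Ypred = learner.query(Xtest)
query_time = (time.time() - start) / Xtest.shape[0]   # avg seconds per instance

rmse = np.sqrt(np.mean((Ytest - Ypred) ** 2))         # RMS error
corr = np.corrcoef(Ytest, Ypred)[0, 1]                # correlation coefficient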

Experiments to run

  • Run your test on both data sets and report: avg train time per instance, avg query time per instance, and correlation for each data set. Use K=3 trees for this evaluation.
  • For each data set, test values of K from K=1 up to a large number, then find the value of K that provides the best correlation coefficient.
  • Plot correlation versus K for each data set and report K and the correlation for the best solution.
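
A sketch of the K sweep (the upper bound of 50 is an arbitrary illustrative choice; Xtrain, Ytrain, Xtest, and Ytest are as in the evaluation sketch above):

import numpy as np
import matplotlib.pyplot as plt

ks = range(1, 51)
corrs = []
for k in ks:
    learner = RandomForestLearner(k=k)
    learner.addEvidence(Xtrain, Ytrain)
    corrs.append(np.corrcoef(Ytest, learner.query(Xtest))[0, 1])

best = int(np.argmax(corrs))
print('best K =', ks[best], 'correlation =', corrs[best])

plt.plot(list(ks), corrs)
plt.xlabel('K (number of trees)')
plt.ylabel('correlation coefficient')
plt.savefig('correlation_vs_K.png')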

Deliverables

Submit files (attachments) via t-square

  • Your code in RandomForestLearner.py and testlearner.py
  • Plot the correlation coefficient for data-classification-prob.csv and data-ripple-prob.csv as you vary K from 1 to a large number, and combine those plots into your report.
  • Report (in a pdf file, report.pdf):
    • In a table with columns named KNN Ripple (K=best), KNN Classification (K=best), LinReg Ripple, LinReg Classification, RF Ripple (K=3), RF Ripple (K=best), RF Classification (K=3), and RF Classification (K=best), include the following data:
      • the best K for each dataset (the one that maximizes correlation),
      • the correlation coefficient (corrcoef) for each dataset,
      • the RMS error for each dataset,
      • the training time, and
      • the query time.
    • the two plots
  • Preferred, but not required: Scatter plots for each experiment that show predicted Y versus actual Y (within the report pdf).
  • Important: Disclose and cite any code or ideas you drew from others.

How to submit

Go to the t-square site for the class, then click on the "assignments" tab. Click on "add attachment" to add your N files. Once you are sure you've added the files, click "submit."

Extra Credit

Write additional code to implement the following features for Random Forests:

  • Add a parameter to the constructor called leafsize that causes up to that many training examples to be aggregated into each leaf (see the sketch after this list).
  • Implement Linear Regression learners at the leaves.
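
For the leafsize feature, one hypothetical stopping rule changes only the base case of the earlier build_tree sketch:

import numpy as np

def build_tree(data, leafsize=1):
    # stop once <= leafsize examples remain; the leaf value is their mean Y
    if data.shape[0] <= leafsize:
        return np.array([[-1, data[:, -1].mean(), -1, -1]])
    # ... remainder identical to the build_tree sketch above,
    #     passing leafsize through the recursive calls ...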