Random Sample Consensus, or RANSAC, is one of the most commonly used algorithms in computer vision. As a result, much research has gone into developing RANSAC extensions and variants that increase the efficiency or accuracy of the estimation. We have implemented a templated class that makes RANSAC-based estimation extremely easy to use as well as simple to extend.
This module can be included in your code with:
#include <theia/ransac.h>
NOTE: For the descriptions below, we often use the term “RANSAC” to mean the general strategy of model estimation via sample consensus. Most of the time, “RANSAC” refers to RANSAC and the variants we have implemented.
The following RANSAC methods are implemented in Theia: Ransac, Prosac, Mlesac, and Arrsac.
The basic method for using RANSAC (and its variants) is to specify the class corresponding to the algorithm you will use (e.g., RANSAC, PROSAC, etc.) and the method for estimating a model from data points. The latter requires you to implement a derived class of the Estimator class.
template <class Datum, class Model>
class Estimator {
 public:
  Estimator() {}
  virtual ~Estimator() {}

  // Functions that must be implemented.
  virtual bool EstimateModel(const std::vector<Datum>& data,
                             std::vector<Model>* model) const = 0;
  virtual double Error(const Datum& data, const Model& model) const = 0;

  // Functions to optionally implement.
  virtual bool EstimateModelNonminimal(const std::vector<Datum>& data,
                                       std::vector<Model>* model) const;
  virtual bool RefineModel(const std::vector<Datum>& data, Model* model) const;
  virtual bool ValidModel(const Model& model) const;

  // Helper methods implemented in the base class.
  virtual std::vector<double> Residuals(const std::vector<Datum>& data,
                                        const Model& model) const;
  std::vector<bool> GetInliers(const std::vector<Datum>& data,
                               const Model& model,
                               double error_threshold) const;
  int GetNumInliers(const std::vector<Datum>& data,
                    const Model& model,
                    double error_threshold) const;
};
The only methods that must be implemented are Estimator::EstimateModel() and Estimator::Error(). These specify how a model is estimated from the provided data and how error residuals are computed for a given model. All other methods are optional to implement, but implementing them can only enhance the output of RANSAC.
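Conceptually, the helper methods in the base class are thin loops over Error(). The following is a self-contained sketch (not Theia's actual source) of what GetNumInliers computes, using a stand-in scalar "model" in place of the templated types; whether the absolute or squared residual is compared against the threshold is an implementation detail, and this sketch uses the absolute value.

```cpp
#include <cmath>
#include <vector>

// Stand-in for Estimator::Error(): residual of a scalar datum against a
// scalar "model" (here, the model is simply a target value).
double Error(double datum, double model) { return datum - model; }

// Sketch of what GetNumInliers computes: count the data points whose
// absolute residual falls below the error threshold.
int GetNumInliers(const std::vector<double>& data, double model,
                  double error_threshold) {
  int num_inliers = 0;
  for (double d : data) {
    if (std::fabs(Error(d, model)) < error_threshold) {
      ++num_inliers;
    }
  }
  return num_inliers;
}
```

GetInliers is the same loop returning a per-point boolean mask instead of a count.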
In order to make our RANSAC classes consistent and extensible, we specify an interface as a SampleConsensusEstimator class. All of the RANSAC variants in Theia are derived from this class, so they are all guaranteed to have the same interface. When using a RANSAC (or RANSAC-variant) class, you simply need to create a RANSAC object, set up the parameters you want to use, and then call the Estimate method.
This is the main (and often the only) method you use when performing RANSAC (or a variant). It computes a model given the data and the Estimator class that you have specified for your problem. It returns true (and sets the best_model parameter) on success, and false on failure (in which case the contents of best_model are undefined).
The other main component of using one of the RANSAC methods is to set up the RansacParameters used for the RANSAC scheme. RansacParameters is a struct that holds the key settings that determine how the RANSAC scheme performs.
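As background for choosing such parameters, the standard RANSAC analysis relates the number of iterations to the expected inlier ratio: with an inlier ratio w, a minimal sample of s points, and a desired success probability p, roughly log(1 - p) / log(1 - w^s) iterations are needed to draw at least one all-inlier sample. A self-contained sketch of this formula (the function name is ours, not part of Theia's API):

```cpp
#include <cmath>

// Number of RANSAC iterations needed to draw at least one all-inlier
// minimal sample with probability `success_prob`, given the fraction of
// inliers `inlier_ratio` and a minimal sample of `sample_size` points.
int NumRequiredIterations(double inlier_ratio, int sample_size,
                          double success_prob) {
  // Probability that a single random minimal sample contains only inliers.
  const double good_sample_prob = std::pow(inlier_ratio, sample_size);
  return static_cast<int>(std::ceil(std::log(1.0 - success_prob) /
                                    std::log(1.0 - good_sample_prob)));
}
```

For the line-fitting example below (a minimal sample of 2 points) with 70% inliers and a 99% success requirement, this gives only 7 iterations; the count grows quickly as the inlier ratio drops.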
We will illustrate the use of the RANSAC class with a simple line estimation example.
// Our "data".
struct Point {
  double x;
  double y;
};

// Our "model".
struct Line {
  double m;
  double b;
};

// Estimator class.
class LineEstimator : public Estimator<Point, Line> {
  // Estimate a line from two points.
  bool EstimateModel(const std::vector<Point>& data,
                     std::vector<Line>* models) const {
    Line model;
    model.m = (data[1].y - data[0].y) / (data[1].x - data[0].x);
    model.b = data[1].y - model.m * data[1].x;
    models->push_back(model);
    return true;
  }

  // Calculate the error as the y distance of the point to the line.
  double Error(const Point& point, const Line& line) const {
    return point.y - (line.m * point.x + line.b);
  }
};
Specifying an Estimator is that easy! Now let's look at how to use a RANSAC method with the LineEstimator.
int main(int argc, char** argv) {
  // Generate your input data using your desired method.
  // We put pseudo-code here for simplicity.
  std::vector<Point> input_data;
  // Add 700 inliers.
  for (int i = 0; i < 700; i++) {
    input_data.push_back(inlier_point);
  }
  // Add 300 outliers.
  for (int i = 0; i < 300; i++) {
    input_data.push_back(outlier_point);
  }

  // Estimate the line with RANSAC.
  LineEstimator line_estimator;
  Line best_line;
  // Create the Ransac object, specifying the number of points to sample to
  // generate a model estimation.
  Ransac<Point, Line> ransac_estimator(2);
  // Set the ransac parameters.
  RansacParameters params;
  params.error_thresh = 0.3;
  // Initialize must always be called!
  ransac_estimator.Initialize(params);
  ransac_estimator.Estimate(input_data, line_estimator, &best_line);
  LOG(INFO) << "Line y = " << best_line.m << "*x + " << best_line.b;
  return 0;
}
There you have it. With just a few lines of code we can use RANSAC to estimate the best-fitting line. You could easily swap the Ransac class with any of the RANSAC variants implemented in Theia without having to change anything else in the code.
Theia has implemented several RANSAC methods as derived classes of the SampleConsensusEstimator class. The typical use case is still to call the Estimate() method, but each method is likely to have a different constructor. The constructors for each method are specified as follows.
The standard RANSAC implementation as originally proposed by Fischler et al. [Fischler].
Progressive Sample Consensus as originally proposed by [Chum]. Input data is assumed to be ordered by quality, which the sampling strategy exploits by sampling the high-quality data points first and progressively expanding to the rest of the data set. In the worst case this algorithm degenerates to RANSAC, but it is typically significantly faster.
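The progressive-sampling idea can be illustrated with a toy sketch (this is not Theia's PROSAC implementation, which uses a principled growth schedule; the class name and fixed increment here are ours, purely for illustration): minimal samples are drawn only from a prefix of the quality-sorted data, and that prefix grows over time.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Toy progressive sampler: draw a minimal sample of `sample_size` indices
// from only the first `prefix_size_` (highest-quality) points, growing the
// prefix by one on each call until the whole data set is reachable.
class ToyProgressiveSampler {
 public:
  ToyProgressiveSampler(int sample_size, int num_points)
      : sample_size_(sample_size),
        num_points_(num_points),
        prefix_size_(sample_size),
        rng_(std::random_device{}()) {}

  // Returns distinct indices into the (quality-sorted) data array.
  std::vector<int> Sample() {
    std::uniform_int_distribution<int> dist(0, prefix_size_ - 1);
    std::vector<int> indices;
    while (static_cast<int>(indices.size()) < sample_size_) {
      const int idx = dist(rng_);
      if (std::find(indices.begin(), indices.end(), idx) == indices.end()) {
        indices.push_back(idx);
      }
    }
    // Grow the sampling pool toward the full data set.
    prefix_size_ = std::min(prefix_size_ + 1, num_points_);
    return indices;
  }

 private:
  const int sample_size_;
  const int num_points_;
  int prefix_size_;
  std::mt19937 rng_;
};
```

The very first sample is forced to use the top-quality points, which is why PROSAC tends to find a good model much earlier than uniform sampling when the quality ordering is informative.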
Maximum Likelihood Estimation Sample Consensus (MLESAC) is a generalization of RANSAC that maximizes the likelihood of the estimate rather than the inlier count. Proposed by Torr et al. [Torr].
min_sample_size: The minimum number of samples needed to estimate a model.
Adaptive Real-Time Random Sample Consensus (ARRSAC) is a method proposed by [Raguram] that uses preemptive techniques to evaluate many generated hypotheses at once in a partially depth-first manner. This bounds the running time while pursuing only the models most likely to yield high-quality results, making the method fast enough for real-time applications.
min_sample_size: The minimum number of samples needed to estimate a model.
max_candidate_hyps: Maximum number of hypotheses in the initial hypothesis set.
block_size: Number of data points a hypothesis is evaluated against before preemptive ordering is used.
NOTE: This method works for all the unit tests currently in Theia, but needs to be tested further to ensure correctness. Use with caution.
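The preemption idea behind Arrsac can be sketched with a toy breadth-first scheme (a simplification of the actual algorithm, whose scheduling is adaptive; the function name and scoring layout here are ours): every surviving hypothesis is scored on one block of data at a time, and the worse-scoring half is discarded after each block, which bounds the total evaluation work.

```cpp
#include <algorithm>
#include <vector>

// Toy preemptive evaluation. `scores[h][b]` is the score hypothesis h earns
// on data block b (higher is better). After each block, survivors are
// ranked by accumulated score and the worse half is dropped. Returns the
// index of the last surviving hypothesis.
int PreemptiveSelect(const std::vector<std::vector<double>>& scores) {
  std::vector<int> alive(scores.size());
  for (size_t i = 0; i < alive.size(); ++i) alive[i] = static_cast<int>(i);
  std::vector<double> total(scores.size(), 0.0);

  for (size_t block = 0; alive.size() > 1; ++block) {
    // Score every surviving hypothesis on the next block of data.
    for (int h : alive) total[h] += scores[h][block % scores[h].size()];
    // Keep only the better-scoring half of the survivors.
    std::sort(alive.begin(), alive.end(),
              [&](int a, int b) { return total[a] > total[b]; });
    alive.resize((alive.size() + 1) / 2);
  }
  return alive.front();
}
```

With n initial hypotheses, at most O(n) block evaluations happen in total (n + n/2 + n/4 + ...), which is what gives preemptive schemes their bounded running time.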
The SampleConsensusEstimator class consists of two main items: a Sampler and a QualityMeasurement. These two members specify the most important aspects of most RANSAC techniques: how the data is sampled (Sampler) and how the model quality (or, conversely, error) is measured (QualityMeasurement). Adjusting the Sampler is how techniques such as PROSAC achieve their speedups. Adjusting the measurement of model quality away from the trivial method (e.g., counting inliers) is how methods such as MLESAC achieve good results. Both the Sampler and QualityMeasurement classes are pure virtual classes that must be derived for all RANSAC methods. Further, the Estimate() method implemented in the SampleConsensusEstimator base class performs a typical RANSAC-style routine, sampling data and scoring models according to the Sampler and QualityMeasurement specified.
To implement a new RANSAC method, you should create a class derived from SampleConsensusEstimator. Most methods will probably involve simply using a new sampler or quality measurement class, as the Estimate() function will not change and can simply be inherited from the SampleConsensusEstimator class. In those cases, you can follow the model of the Ransac class to specify your new RANSAC-variant class:
template <class Datum, class Model>
class Ransac : public SampleConsensusEstimator<Datum, Model> {
 public:
  explicit Ransac(const int min_sample_size)
      : SampleConsensusEstimator<Datum, Model>(min_sample_size) {}
  virtual ~Ransac() {}

  // Initializes the random sampler and inlier support measurement.
  bool Initialize(const RansacParameters& ransac_params) {
    Sampler<Datum>* random_sampler =
        new RandomSampler<Datum>(this->min_sample_size_);
    QualityMeasurement* inlier_support =
        new InlierSupport(ransac_params.error_thresh);
    return SampleConsensusEstimator<Datum, Model>::Initialize(
        ransac_params, random_sampler, inlier_support);
  }
};
This is all that the Ransac class needs to specify: the Estimate() function implemented in the base class (SampleConsensusEstimator) will use the RandomSampler to randomly sample the data and InlierSupport to calculate inliers. Of course, RandomSampler and InlierSupport are derived classes of Sampler and QualityMeasurement, respectively. See the code for more details.
If you want to create a new RANSAC method that changes the way estimation happens, your class can override the Estimate() method. In our implementation, Arrsac does this. See the code for that class for a good example of how to override the Estimate() method.