# Modular Greedy

An approach that approximates a submodular function as modular.

This approach approximates the submodular function with its modular upper bound and uses that bound to perform the selection. A defining characteristic of submodular functions is the diminishing-returns property: the gain of an example decreases as more examples are selected. In contrast, a modular function assigns each example a constant gain regardless of how many examples have already been selected, so the gain each example yields over the empty set upper-bounds its gain at every later step of the selection process. This approximation makes the function simple to optimize: calculate the gain that each example yields before any examples are selected, sort the examples by this gain, and select the top $k$ examples. While this approach is fast, it is likely best paired with a traditional optimization algorithm after the first few examples are selected.
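The selection rule above can be sketched in a few lines of NumPy. This is a minimal illustration, not this library's implementation: it assumes a feature-based submodular function $f(S) = \sum_d \sqrt{\sum_{i \in S} X_{i,d}}$, and the function and variable names are invented for the example.

```python
import numpy as np

def modular_greedy(X, k):
    """Select k rows of X by approximating a feature-based submodular
    function, f(S) = sum_d sqrt(sum_{i in S} X[i, d]), as modular.

    Each example's gain is computed once, against the empty set. By
    diminishing returns, this first-step gain upper-bounds the gain at
    every later step, so we simply sort by it and take the top k.
    """
    # Gain of example i over the empty set: f({i}) - f({}) = sum_d sqrt(X[i, d]).
    initial_gains = np.sqrt(X).sum(axis=1)
    # Sort descending by initial gain and keep the top k indices.
    return np.argsort(-initial_gains)[:k]

# Three examples with two features each; gains are 3.0, 3.0, and 2.0.
X = np.array([[4.0, 1.0],
              [9.0, 0.0],
              [1.0, 1.0]])
print(modular_greedy(X, 2))  # indices of the two highest-gain rows
```

Note that the gains are never re-evaluated after a selection, which is exactly what makes this a modular (rather than greedy submodular) optimizer.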

Parameters:

- **self.function** (*base.BaseSelection*) – A submodular function that implements the `_calculate_gains` and `_select_next` methods. This is the function that will be optimized.
- **epsilon** (*float, optional*) – The inverse of the sampling probability of any particular point being included in the subset, such that 1 - epsilon is the probability that a point is included. Default is 0.9.
- **random_state** (*int or RandomState or None, optional*) – The random seed to use for the random selection process.
- **self.verbose** (*bool*) – Whether to display a progress bar during the optimization process.
Attributes:

- **self.function** (*base.BaseSelection*) – A submodular function that implements the `_calculate_gains` and `_select_next` methods. This is the function that will be optimized.
- **self.verbose** (*bool*) – Whether to display a progress bar during the optimization process.
- **self.gains_** (*numpy.ndarray or None*) – The gain that each example would give the last time that it was evaluated.