# Stochastic Greedy

The stochastic greedy algorithm for optimization.

The stochastic greedy algorithm is a simple approach that, at each iteration, randomly samples a subset of the data and then finds the best next example within that subset. The distinction between this approach and the sample greedy algorithm is that the subset is redrawn at each iteration, so over the course of the run the algorithm draws candidates from the entire data set. In contrast, the sample greedy algorithm is equivalent to manually subsampling the data once before running a selector on it. The size of each subset is inversely proportional to the number of examples to be selected, and is chosen so that the same total amount of computation is done no matter how many elements are selected. The key idea behind this approach is that, while the exact ranking of the elements may differ from that of the naive/lazy greedy approaches, the set of selected elements is likely to be similar despite the limited amount of computation.
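The procedure above can be sketched in a few lines of numpy. This is an illustrative sketch, not the library's implementation: the toy submodular function (a feature-based sum of square roots of accumulated column sums) and the per-iteration subset size of roughly `n/k * log(1/epsilon)` are assumptions made for the example.

```python
import math
import numpy as np

def stochastic_greedy(X, k, epsilon=0.9, random_state=0):
    """Select k rows of a nonnegative matrix X via stochastic greedy.

    Illustrative only: uses a simple feature-based submodular function,
    f(S) = sum_j sqrt(sum_{i in S} X[i, j]), in place of a real selector.
    """
    rng = np.random.RandomState(random_state)
    n = X.shape[0]
    # Per-iteration subset size; k iterations each cost ~(n/k) log(1/eps)
    # gain evaluations, so total work is constant in k.
    subset_size = max(1, int(math.ceil(n / k * math.log(1.0 / epsilon))))

    selected = []
    remaining = np.ones(n, dtype=bool)
    column_sums = np.zeros(X.shape[1])  # running sums over selected rows

    for _ in range(k):
        # Redraw a fresh random subset of the unselected examples.
        candidates = np.flatnonzero(remaining)
        subset = rng.choice(candidates,
                            size=min(subset_size, len(candidates)),
                            replace=False)
        # Marginal gain of each candidate in the subset.
        gains = (np.sqrt(column_sums + X[subset]).sum(axis=1)
                 - np.sqrt(column_sums).sum())
        best = subset[np.argmax(gains)]
        selected.append(best)
        remaining[best] = False
        column_sums += X[best]
    return selected
```

Because the subset is resampled each iteration, every example has a chance of being considered at some point, unlike a single up-front subsample.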

## Parameters

- **function** : base.BaseSelection
  A submodular function that implements the `_calculate_gains` and `_select_next` methods. This is the function that will be optimized.
- **epsilon** : float, optional
  The inverse of the sampling probability of any particular point being included in the subset, such that 1 - epsilon is the probability that a point is included. Default is 0.9.
- **random_state** : int or RandomState or None, optional
  The random seed to use for the random selection process.
- **verbose** : bool
  Whether to display a progress bar during the optimization process.
## Attributes

- **self.function** : base.BaseSelection
  A submodular function that implements the `_calculate_gains` and `_select_next` methods. This is the function that will be optimized.
- **self.verbose** : bool
  Whether to display a progress bar during the optimization process.
- **self.gains_** : numpy.ndarray or None
  The gain that each example would have yielded the last time it was evaluated.