Uses of Class
de.jstacs.algorithms.optimization.EvaluationException

Packages that use EvaluationException
de.jstacs.algorithms.optimization Provides classes for different types of algorithms that are not directly linked to the modelling components of Jstacs: Algorithms on graphs, algorithms for numerical optimization, and a basic alignment algorithm.
de.jstacs.classifier.scoringFunctionBased.cll Provides the implementation of the log conditional likelihood as an OptimizableFunction and a classifier that uses the log conditional likelihood or the supervised posterior to learn the parameters of a set of ScoringFunctions.
de.jstacs.classifier.scoringFunctionBased.logPrior Provides a general definition of a parameter log-prior and a number of implementations of Laplace and Gaussian priors.
 

Uses of EvaluationException in de.jstacs.algorithms.optimization
 

Methods in de.jstacs.algorithms.optimization that throw EvaluationException
static double[] Optimizer.brentsMethod(OneDimensionalFunction f, double a, double x, double b, double tol)
          Approximates a minimum (not necessarily the global one) in the interval [a, b].
static double[] Optimizer.brentsMethod(OneDimensionalFunction f, double a, double x, double fx, double b, double tol)
          Approximates a minimum (not necessarily the global one) in the interval [a, b].
static int Optimizer.conjugateGradientsFR(DifferentiableFunction f, double[] currentValues, Optimizer.TerminationCondition terminationMode, double eps, double linEps, StartDistanceForecaster startDistance, SafeOutputStream out, Time t)
          The conjugate gradient algorithm by Fletcher and Reeves.
static int Optimizer.conjugateGradientsPR(DifferentiableFunction f, double[] currentValues, Optimizer.TerminationCondition terminationMode, double eps, double linEps, StartDistanceForecaster startDistance, SafeOutputStream out, Time t)
          The conjugate gradient algorithm by Polak and Ribière.
static int Optimizer.conjugateGradientsPRP(DifferentiableFunction f, double[] currentValues, Optimizer.TerminationCondition terminationMode, double eps, double linEps, StartDistanceForecaster startDistance, SafeOutputStream out, Time t)
          The Polak-Ribière-Positive (PRP+) variant of the conjugate gradient algorithm by Polak and Ribière.
 double OneDimensionalSubFunction.evaluateFunction(double x)
           
abstract  double OneDimensionalFunction.evaluateFunction(double x)
          Evaluates the function at position x.
 double NegativeOneDimensionalFunction.evaluateFunction(double x)
           
 double OneDimensionalFunction.evaluateFunction(double[] x)
           
 double NegativeOneDimensionalFunction.evaluateFunction(double[] x)
           
 double NegativeFunction.evaluateFunction(double[] x)
           
 double NegativeDifferentiableFunction.evaluateFunction(double[] x)
           
 double Function.evaluateFunction(double[] x)
          Evaluates the function at a certain vector (in the mathematical sense) x.
 double[] NumericalDifferentiableFunction.evaluateGradientOfFunction(double[] x)
          Evaluates the gradient of the function numerically at a certain vector (in the mathematical sense) x.
 double[] NegativeDifferentiableFunction.evaluateGradientOfFunction(double[] x)
           
abstract  double[] DifferentiableFunction.evaluateGradientOfFunction(double[] x)
          Evaluates the gradient of the function at a certain vector (in the mathematical sense) x.
static double[] Optimizer.findBracket(OneDimensionalFunction f, double lower, double startDistance)
          This method returns a bracket containing a minimum.
static double[] Optimizer.findBracket(OneDimensionalFunction f, double lower, double fLower, double startDistance)
          This method returns a bracket containing a minimum.
 double[] OneDimensionalFunction.findMin(double lower, double fLower, double eps, double startDistance)
          This method returns a minimum x and the value f(x), starting the search at lower.
protected  double[] DifferentiableFunction.findOneDimensionalMin(double[] current, double[] d, double alpha_0, double fAlpha_0, double linEps, double startDistance)
          This method is used to find an approximate minimum of a one-dimensional subfunction.
static double[] Optimizer.goldenRatio(OneDimensionalFunction f, double lower, double upper, double eps)
          Approximates a minimum (not necessarily the global one) in the interval [lower, upper].
static double[] Optimizer.goldenRatio(OneDimensionalFunction f, double lower, double p1, double fP1, double upper, double eps)
          Approximates a minimum (not necessarily the global one) in the interval [lower, upper].
static int Optimizer.limitedMemoryBFGS(DifferentiableFunction f, double[] currentValues, byte m, Optimizer.TerminationCondition terminationMode, double eps, double linEps, StartDistanceForecaster startDistance, SafeOutputStream out, Time t)
          The Broyden-Fletcher-Goldfarb-Shanno (BFGS) variant of limited-memory quasi-Newton methods.
static int Optimizer.optimize(byte algorithm, DifferentiableFunction f, double[] currentValues, Optimizer.TerminationCondition terminationMode, double eps, double linEps, StartDistanceForecaster startDistance, SafeOutputStream out)
          This method provides access to all implemented optimization algorithms through a single method.
static int Optimizer.optimize(byte algorithm, DifferentiableFunction f, double[] currentValues, Optimizer.TerminationCondition terminationMode, double eps, double linEps, StartDistanceForecaster startDistance, SafeOutputStream out, Time t)
          This method provides access to all implemented optimization algorithms through a single method.
static int Optimizer.quasiNewtonBFGS(DifferentiableFunction f, double[] currentValues, Optimizer.TerminationCondition terminationMode, double eps, double linEps, StartDistanceForecaster startDistance, SafeOutputStream out, Time t)
          The Broyden-Fletcher-Goldfarb-Shanno (BFGS) variant of the quasi-Newton method.
static int Optimizer.quasiNewtonDFP(DifferentiableFunction f, double[] currentValues, Optimizer.TerminationCondition terminationMode, double eps, double linEps, StartDistanceForecaster startDistance, SafeOutputStream out, Time t)
          The Davidon-Fletcher-Powell (DFP) variant of the quasi-Newton method.
static int Optimizer.steepestDescent(DifferentiableFunction f, double[] currentValues, Optimizer.TerminationCondition terminationMode, double eps, double linEps, StartDistanceForecaster startDistance, SafeOutputStream out, Time t)
          The steepest descent algorithm.
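The one-dimensional methods above share a pattern: a function object may throw an EvaluationException whenever it cannot be evaluated at the requested point, and callers propagate that exception. The following is a minimal self-contained sketch of golden-ratio minimization in that style; the interface and exception class are simplified stand-ins, not the actual Jstacs types.

```java
// Sketch of the golden-ratio minimization pattern used by Optimizer.goldenRatio.
// OneDimensionalFunction and EvaluationException are simplified stand-ins for
// the Jstacs classes of the same names.
public class GoldenRatioSketch {

    /** Stand-in exception: thrown when f cannot be evaluated at x. */
    static class EvaluationException extends Exception {
        EvaluationException(String msg) { super(msg); }
    }

    /** Stand-in for the one-dimensional function interface. */
    interface OneDimensionalFunction {
        double evaluateFunction(double x) throws EvaluationException;
    }

    static final double GOLD = (Math.sqrt(5) - 1) / 2; // golden ratio, ~0.618

    /**
     * Approximates a minimum (not necessarily the global one) of f in
     * [lower, upper]; returns { x, f(x) }.
     */
    static double[] goldenRatio(OneDimensionalFunction f, double lower,
            double upper, double eps) throws EvaluationException {
        double x1 = upper - GOLD * (upper - lower);
        double x2 = lower + GOLD * (upper - lower);
        double f1 = f.evaluateFunction(x1), f2 = f.evaluateFunction(x2);
        while (upper - lower > eps) {
            if (f1 < f2) {           // minimum lies in [lower, x2]
                upper = x2; x2 = x1; f2 = f1;
                x1 = upper - GOLD * (upper - lower);
                f1 = f.evaluateFunction(x1);
            } else {                 // minimum lies in [x1, upper]
                lower = x1; x1 = x2; f1 = f2;
                x2 = lower + GOLD * (upper - lower);
                f2 = f.evaluateFunction(x2);
            }
        }
        double x = (lower + upper) / 2;
        return new double[]{ x, f.evaluateFunction(x) };
    }

    public static void main(String[] args) throws EvaluationException {
        // f(x) = (x - 2)^2; throws for negative x to illustrate the exception path
        OneDimensionalFunction f = x -> {
            if (x < 0) throw new EvaluationException("x out of domain: " + x);
            return (x - 2) * (x - 2);
        };
        double[] res = goldenRatio(f, 0, 5, 1e-8);
        System.out.println(res[0] + " " + res[1]);
    }
}
```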
 

Uses of EvaluationException in de.jstacs.classifier.scoringFunctionBased.cll
 

Methods in de.jstacs.classifier.scoringFunctionBased.cll that throw EvaluationException
 double NormConditionalLogLikelihood.evaluateFunction(double[] x)
           
 double[] NormConditionalLogLikelihood.evaluateGradientOfFunction(double[] x)
           
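NormConditionalLogLikelihood implements the evaluateFunction / evaluateGradientOfFunction pair so that the optimizers above can maximize it. As a rough illustration of that interface (not the Jstacs implementation, which works on sets of ScoringFunctions), here is the conditional log-likelihood of a simple two-class logistic model with its analytic gradient:

```java
// Illustrative sketch (not Jstacs's NormConditionalLogLikelihood): the
// conditional log-likelihood of a two-class logistic model, written in the
// evaluateFunction / evaluateGradientOfFunction style used above.
public class CllSketch {

    final double[][] x;  // feature vectors
    final int[] y;       // class labels in {0, 1}

    CllSketch(double[][] x, int[] y) { this.x = x; this.y = y; }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    /** Sum over the data of log P(y_n | x_n, beta). */
    double evaluateFunction(double[] beta) {
        double ll = 0;
        for (int n = 0; n < x.length; n++) {
            double s = dot(beta, x[n]);
            // y*s - log(1 + e^s) = log sigmoid(s) if y=1, log(1-sigmoid(s)) if y=0
            ll += y[n] * s - Math.log1p(Math.exp(s));
        }
        return ll;
    }

    /** Gradient of the conditional log-likelihood w.r.t. beta. */
    double[] evaluateGradientOfFunction(double[] beta) {
        double[] g = new double[beta.length];
        for (int n = 0; n < x.length; n++) {
            double p = 1.0 / (1.0 + Math.exp(-dot(beta, x[n]))); // P(y=1 | x_n)
            for (int i = 0; i < beta.length; i++)
                g[i] += (y[n] - p) * x[n][i];
        }
        return g;
    }
}
```

Because both methods are exposed, the gradient can be cross-checked against finite differences of the function value, which is a useful sanity test for any such implementation.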
 

Uses of EvaluationException in de.jstacs.classifier.scoringFunctionBased.logPrior
 

Methods in de.jstacs.classifier.scoringFunctionBased.logPrior that throw EvaluationException
abstract  void LogPrior.addGradientFor(double[] params, double[] vector)
          Adds the gradient of the log-prior at the current parameters to a given vector.
 double SeparateLaplaceLogPrior.evaluateFunction(double[] x)
           
 double SeparateGaussianLogPrior.evaluateFunction(double[] x)
           
 double[] LogPrior.evaluateGradientOfFunction(double[] params)
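The addGradientFor contract above accumulates the log-prior's gradient into an existing vector, so a classifier can sum log-likelihood and log-prior gradients in place. A minimal sketch of that pattern for a zero-mean Gaussian log-prior with a single shared variance (a simplified illustration, not Jstacs's SeparateGaussianLogPrior, which uses separate hyperparameters):

```java
// Illustrative sketch of the LogPrior pattern: a zero-mean Gaussian log-prior
// with one shared variance sigma^2, exposing evaluateFunction and an in-place
// addGradientFor as in de.jstacs.classifier.scoringFunctionBased.logPrior.
public class GaussianLogPriorSketch {

    final double sigma2; // prior variance (assumed hyperparameter)

    GaussianLogPriorSketch(double sigma2) { this.sigma2 = sigma2; }

    /** Log-prior up to an additive constant: -sum(params_i^2) / (2 sigma^2). */
    double evaluateFunction(double[] params) {
        double s = 0;
        for (double p : params) s += p * p;
        return -s / (2 * sigma2);
    }

    /** Adds the gradient of the log-prior at params to vector, in place. */
    void addGradientFor(double[] params, double[] vector) {
        for (int i = 0; i < params.length; i++)
            vector[i] += -params[i] / sigma2; // d/dp_i of -p_i^2 / (2 sigma^2)
    }
}
```

Accumulating into a shared vector avoids allocating a gradient array per prior term, which matters when the optimizer calls the gradient in every iteration.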